
WingDog
Guide

MPIO - slow speed

Hello!

I have two ReadyNAS6 Pro boxes with the following configs:

Box 1

RDN6 pro/E7300/4gb/6*2Tb WD CB/4.2.27 FW

Box 2

RDN6 pro/E5300/2gb/6*2Tb WD CB/6.3.5 FW

 

Other equipment:

Juniper EX3300 switch and Dell R510 server

The R510 is a Hyper-V 2012 R2 server for light workloads, with MPIO enabled.

I'm using SQLIO to check iSCSI storage speed and latency.

In every test I get only ~110 MB/s of storage throughput, with about a 2*500 Mbps load on the dedicated iSCSI NICs (per Task Manager), against either NAS.

mpclaim shows the LB policy as RRWS (which seems correct, though plain RR is apparently unsupported?!) with two active paths, so MPIO appears to be configured correctly.
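
For reference, the policy and paths can be double-checked along these lines (assuming the MPIO feature and its PowerShell module are installed on Server 2012 R2; the disk number is only an example):

mpclaim -s -d                                          # list MPIO disks and the LB policy each one is using
mpclaim -s -d 0                                        # show the individual paths and path states for MPIO disk 0
Get-MSDSMGlobalDefaultLoadBalancePolicy                # default policy applied to newly claimed LUNs
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD    # e.g. switch the default to Least Queue Depth for a test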

But PRTG shows ~92-95% load on the first NAS NIC and only ~10-15% on the second, which is why I suspect some MPIO problem in the NAS configuration.

 

So, the question is: how do I achieve 2*1 Gbps throughput using MPIO with a ReadyNAS?

 

Message 1 of 31
BrianL2
NETGEAR Employee Retired

Re: MPIO - slow speed

Hi WingDog,

 

How many active NICs are you using in your ReadyNAS system?

 

 

Kind regards,

 

BrianL

NETGEAR Community

Message 2 of 31
WingDog
Guide

Re: MPIO - slow speed

Hi BrianL.

Two NICs, obviously.

 

Message 3 of 31
mdgm-ntgr
NETGEAR Employee Retired

Re: MPIO - slow speed

There are some good tips on MPIO here, in the "Configure multiple MPIO sessions to the Target" section.

With MPIO the key is to have the two NICs on separate subnets. Otherwise you cannot be sure that layer 2 gets it right.
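
For example (the addresses and interface names below are purely illustrative - use whatever ranges fit your network), give each iSCSI NIC pair its own subnet and set the host side with something like:

# NAS NIC 1:  192.168.10.20/24   <->   host iSCSI NIC 1:  192.168.10.10/24
# NAS NIC 2:  192.168.20.20/24   <->   host iSCSI NIC 2:  192.168.20.10/24
New-NetIPAddress -InterfaceAlias "iSCSI-1" -IPAddress 192.168.10.10 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI-2" -IPAddress 192.168.20.10 -PrefixLength 24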

Message 4 of 31
WingDog
Guide

Re: MPIO - slow speed


@mdgm wrote:

There are some good tips on MPIO here, in the "Configure multiple MPIO sessions to the Target" section.

With MPIO the key is to have the two NICs on separate subnets. Otherwise you cannot be sure that layer 2 gets it right.


Separate subnets for MPIO - that's something new to me 😃

I'll check it out.

 

 

Message 5 of 31
WingDog
Guide

Re: MPIO - slow speed

Hello, mdgm.

Thanks for "the secret key" with two subnets - I've achived ~2*890mbps at READ with RND MPIO. Not so good as other vendors, but acceptable.

I think it would be good to add more MPIO-specific information to the ReadyNAS OS 6 manual (including the supported LB policies).

 

PS C:\> C:\SQLIO\sqlio.exe -s900 -kR -frandom -b8 -t8 -o16 -LS -BN D:\testfile.dat
sqlio v1.5.SG
using system counter for latency timings, 2474044 counts per second
8 threads reading for 900 secs from file D:\testfile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 16 outstanding
        buffering set to not use file nor disk caches (as is SQL Server)
using current size: 1048576 MB for file: D:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 25979.92
MBs/sec:   202.96
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 4
Max_Latency(ms): 75
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 10  9 11 12 11 13 11  9  9  2  1  1  1  1  1  0  0  0  0  0  0  0  0  0  0

But what is this mess with WRITE?!

PS C:\> C:\SQLIO\sqlio.exe -s900 -kW -frandom -b8 -t8 -o16 -LS -BN D:\testfile.dat
sqlio v1.5.SG
using system counter for latency timings, 2474044 counts per second
8 threads writing for 900 secs to file D:\testfile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 16 outstanding
        buffering set to not use file nor disk caches (as is SQL Server)
using current size: 1048576 MB for file: D:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:    21.26
MBs/sec:     0.16
latency metrics:
Min_Latency(ms): 268
Avg_Latency(ms): 6005
Max_Latency(ms): 13647
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100
PS C:\>

That is unacceptable!
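
(For anyone repeating these runs, here is how the SQLIO switches used above break down - my annotation, the values are taken straight from the commands shown:)

# -s900 / -s90   run for 900 / 90 seconds        -kR / -kW   read or write workload
# -frandom       random I/O pattern              -b8         8 KB I/Os
# -t8            8 worker threads                -o16        16 outstanding I/Os per thread
# -LS            capture latency via the system timer
# -BN            no buffering (bypass file and disk caches, as SQL Server does)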

 

 

 

Message 6 of 31
MarcusF
Admin

Re: MPIO - slow speed

All MPIO performance is related to the client side, not the ReadyNAS. To see good speeds, the client must place the iSCSI commands on the paths correctly.

The ReadyNAS just responds to the request on whichever interface it arrives. That is why the two subnets are important: otherwise the ReadyNAS will stick to one of the interfaces for all outgoing traffic, and you get no balancing.

 

But now that you have the subnets in place, as far as the ReadyNAS is concerned there is no extra configuration needed for MPIO. All decisions about which interface the iSCSI commands are sent over are made on the client side.

 

I hope this makes sense

I do agree we could do with some documentation in this area. It is not a common configuration request we have seen, but it seems to be becoming more popular.

 

On the read speed, I would think that at this point it is limited by interface speed and the maximum disk/volume performance, not really anything ReadyNAS-specific, but I could be wrong. So the RAID setup and disk performance should be the limiting factors.
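
As a rough sanity check (my own back-of-the-envelope numbers, assuming about 10% protocol overhead per link), the read result is already close to what two 1 GbE links can carry:

$perLink = 1e9 / 8 / 1MB * 0.9      # ~1 Gbps minus ~10% iSCSI/TCP overhead, roughly 107 MB/s
$ceiling = 2 * $perLink             # two links under MPIO, roughly 215 MB/s
"{0:N0} MB/s ceiling vs. ~203 MB/s measured on READ" -f $ceiling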

 

 

Message 7 of 31
WingDog
Guide

Re: MPIO - slow speed

All MPIO performance is related to the client side, not the ReadyNAS. To see good speeds, the client must place the iSCSI commands on the paths correctly.

The ReadyNAS just responds to the request on whichever interface it arrives. That is why the two subnets are important: otherwise the ReadyNAS will stick to one of the interfaces for all outgoing traffic, and you get no balancing.

That is not completely correct, because when I use Windows Server as the iSCSI target I can use any subnets, including a single subnet for all traffic, even with 8 NICs on both sides.

 

 

I do agree we could do with some documentation in this area. It is not a common configuration request we have seen, but it seems to be becoming more popular.

Unfortunately I also have a 4220 box. It's freezing, pausing and lagging, and it runs essentially the same FW as this RND6 Pro, only on more powerful hardware. So if NETGEAR positions the 4220 as Enterprise (while the RND6 Pro is mid-enterprise for NETGEAR), the manuals MUST explain the device logic properly. And if you offer iSCSI and MPIO, it must work without strange extra configuration, particularly when there are no VLAN tags and no "management" (or OOB) interfaces.

 

On the read speed, I would think that at this point it is limited by interface speed and the maximum disk/volume performance, not really anything ReadyNAS-specific, but I could be wrong. So the RAID setup and disk performance should be the limiting factors.

This is not correct at all; it's nonsense.

There is no way to get only 20 IOPS and ~0 MB/s from even a single SATA drive.

This is a software bug.

Message 8 of 31
WingDog
Guide

Re: MPIO - slow speed

Here are the results of the same test on a single SATA drive (the second drive in my work PC):

PS C:\Windows\system32> C:\SQLIO\sqlio.exe -s90 -kW -frandom -b8 -t8 -o16 -LS -BN D:\testfile.dat
sqlio v1.5.SG
using system counter for latency timings, 2728190 counts per second
8 threads writing for 90 secs to file D:\testfile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 16 outstanding
        buffering set to not use file nor disk caches (as is SQL Server)
using current size: 1048576 MB for file: D:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:   135.66
MBs/sec:     1.05
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 939
Max_Latency(ms): 1828
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 99

135 IOPS - that is fine for a single SATA drive under this kind of load.

Message 9 of 31
MarcusF
Admin

Re: MPIO - slow speed

I guess you must be running OS 6 on the RND6, because by default it runs a different OS than the RN4220.

I think the 4220 freezing, lagging and pausing is a bigger concern. Do you have a support case for that? There are so many things to discuss there that it's hard to do it here.

 

What happens with no MPIO configured, using a single network link? I assume it is all OK.

There is no special MPIO configuration needed on the ReadyNAS. To the ReadyNAS it is just receiving and transmitting iSCSI protocol commands; it just happens that with MPIO those arrive via separate NICs.

 

From Microsoft:

"It is not necessary to have multiple subnets for iSCSI multi-pathing, but it's highly recommended, you can guarantee the paths it's going to use. You can use Multipath I/O (MPIO) on iSCSI connection, to deliver a high quality and reliable storage service with failover and load balancing capability."

So maybe we are both right, but I have always tested MPIO with separate subnets, whether with VMware or Windows hypervisors.
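
To illustrate the client side only, a minimal sketch for Server 2012 R2 (the addresses and the target IQN below are placeholders, not values from this thread):

Enable-MSDSMAutomaticClaim -BusType iSCSI                    # let MSDSM claim iSCSI LUNs for MPIO
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.20     # ReadyNAS iSCSI portal on NIC 1
New-IscsiTargetPortal -TargetPortalAddress 192.168.20.20     # ReadyNAS iSCSI portal on NIC 2
$iqn = "iqn.1994-11.com.netgear:example-target"              # placeholder IQN - use your target's own
Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true `
    -InitiatorPortalAddress 192.168.10.10 -TargetPortalAddress 192.168.10.20
Connect-IscsiTarget -NodeAddress $iqn -IsMultipathEnabled $true `
    -InitiatorPortalAddress 192.168.20.10 -TargetPortalAddress 192.168.20.20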

 

Of course I can never say we have no bugs, but I can say we don't have an open one for MPIO at this time, so we are happy to help if we can.

Message 10 of 31
WingDog
Guide

Re: MPIO - slow speed

I guess you must be running OS 6 on the RND6, because by default it runs a different OS than the RN4220.

RND6 - 6.3.5 (thanks, mdgm); RN4220 - 6.2.4 (the latest available for it).

I think the 4220 freezing, lagging and pausing is a bigger concern. Do you have a support case for that? There are so many things to discuss there that it's hard to do it here.

I HAD a support case, and to resolve it I had to reformat the entire 12*4 TB volume. Funny?

thanks "Mateusz Janowicz NETGEAR Level 3 Technical Support Engineer" he was able to solve random halting RN4220. now it's only freesing.

 

What happens with no MPIO configured, using a single network link? I assume it is all OK.

Maybe, but ~100 MB/s is not enough.

 

From Microsoft:

"It is not necessary to have multiple subnets for iSCSI multi-pathing, but it's highly recommended, you can guarantee the paths it's going to use. You can use Multipath I/O (MPIO) on iSCSI connection, to deliver a high quality and reliable storage service with failover and load balancing capability."

So maybe we are both right, but I have always tested MPIO with separate subnets, whether with VMware or Windows hypervisors.

OK, now I have separate subnets. I've also reformatted the X-RAID volume to RAID 0 (6*2 TB SATA drives).

Here are the results:

PS C:\> C:\SQLIO\sqlio.exe -s90 -kW -frandom -b8 -t8 -o16 -LS -BN D:\testfile.dat
sqlio v1.5.SG
using system counter for latency timings, 2474044 counts per second
8 threads writing for 90 secs to file D:\testfile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 16 outstanding
        buffering set to not use file nor disk caches (as is SQL Server)
using current size: 1048576 MB for file: D:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:    43.59
MBs/sec:     0.34
latency metrics:
Min_Latency(ms): 750
Avg_Latency(ms): 2920
Max_Latency(ms): 6315
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100

As you can see, even with RAID 0 (!!!!) the write speed and latency are unacceptable.

 

 

The read speed is the same as with X-RAID (RAID 5) and is limited by the 2*1 GbE links.

PS C:\> C:\SQLIO\sqlio.exe -s90 -kR -frandom -b8 -t8 -o16 -LS -BN D:\testfile.dat
sqlio v1.5.SG
using system counter for latency timings, 2474044 counts per second
8 threads reading for 90 secs from file D:\testfile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 16 outstanding
        buffering set to not use file nor disk caches (as is SQL Server)
using current size: 1048576 MB for file: D:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 25469.41
MBs/sec:   198.97
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 4
Max_Latency(ms): 63
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 12  8  9 12 10 12 10  9 10  2  2  1  1  1  1  0  0  0  0  0  0  0  0  0  0

Of course I can never say we have no bugs, but I can say we don't have an open one for MPIO at this time, so we are happy to help if we can.

I'm counting on you, because after my third online chat escalation I'm losing heart.

Message 11 of 31
WingDog
Guide

Re: MPIO - slow speed

iSCSI without MPIO (1 NIC):

PS C:\Windows\system32> C:\SQLIO\sqlio.exe -s90 -kW -frandom -b8 -t8 -o16 -LS -BN o:\testfile.dat
sqlio v1.5.SG
using system counter for latency timings, 2728190 counts per second
8 threads writing for 90 secs to file o:\testfile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 16 outstanding
        buffering set to not use file nor disk caches (as is SQL Server)
using current size: 409600 MB for file: o:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:    19.60
MBs/sec:     0.15
latency metrics:
Min_Latency(ms): 3217
Avg_Latency(ms): 6452
Max_Latency(ms): 7558
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100
Message 12 of 31
MarcusF
Admin

Re: MPIO - slow speed

Absolutely, your results are not good, so no argument there.

 

It will be hard to debug this here because I don't have an easy answer or configuration change to suggest, so it will take a bit more back and forth.

If it is a software issue we need a support case so we can log it.

Unfortunately we will not be able to log a bug against a Pro 6 running OS 6, so we would need the data from the RN4220 if it gets as far as calling it a bug.

For now I would like to see results with no MPIO and just a single NIC, so we can start with a baseline for your initiator setup.

 

Just so you know, I will be offline for a few days, in case you think I am ignoring you.

mdgm or one of the others here may continue to help / provide advice.

 

An internal request has already been logged to produce some documentation around MPIO, and I have asked whether we can get some baseline performance numbers from one of our labs with Windows Server 2012.

 

Message 13 of 31
WingDog
Guide

Re: MPIO - slow speed

So we would need the data from the RN4220 if it gets as far as calling it a bug.

The 4220 is in production, which is why I don't want to run risky experiments on it.

Here is a SQLIO test on the 4220 (one NIC):

PS C:\Windows\system32> C:\SQLIO\sqlio.exe -s90 -kW -frandom -b8 -t8 -o16 -LS -BN h:\testfile.dat
sqlio v1.5.SG
using system counter for latency timings, 2078193 counts per second
8 threads writing for 90 secs to file h:\testfile.dat
        using 8KB random IOs
        enabling multiple I/Os per thread with 16 outstanding
        buffering set to not use file nor disk caches (as is SQL Server)
using current size: 102400 MB for file: h:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:    19.01
MBs/sec:     0.14
latency metrics:
Min_Latency(ms): 834
Avg_Latency(ms): 6583
Max_Latency(ms): 7838
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 100
PS C:\Windows\system32>

It's X-RAID (12*4 TB WD RE 4001-ffsx drives).

Right after this test the web UI stopped working (3 minutes of "Connecting to ReadyNAS admin page..." and "ReadyNAS admin page is offline") and the SMB shares were reconnected. FURY!!!!

It's freezing or lagging - call it whatever you like.

Two minutes later it was alive again.

Now I can confirm this bug: heavy iSCSI load makes the 4220 unstable.

I've got fresh logs - I can upload them, or maybe you can create a new case (the previous one was 25222344).

 

For now I would like to see results with no MPIO and just a single NIC, so we can start with a baseline for your initiator setup.

See my previous post (or this one).

 

 

 

Message 14 of 31
WingDog
Guide

Re: MPIO - slow speed

Hello!

 

Any news?

Message 15 of 31
mdgm-ntgr
NETGEAR Employee Retired

Re: MPIO - slow speed

Please open a new case and attach your logs. Let me know the case number.

Message 16 of 31
WingDog
Guide

Re: MPIO - slow speed

#25513848

Message 17 of 31
mdgm-ntgr
NETGEAR Employee Retired

Re: MPIO - slow speed

Can you update to 6.3.5-RC2 if not already running that?

Message 18 of 31
WingDog
Guide

Re: MPIO - slow speed

Which device? The RN4220 or the RDN6?

Message 19 of 31
mdgm-ntgr
NETGEAR Employee Retired

Re: MPIO - slow speed

Both

Message 20 of 31
WingDog
Guide

Re: MPIO - slow speed

Now it's 6.4.0 T34 on the RN4220 😃

And still nothing 😉

 

 

Message 21 of 31
WingDog
Guide

Re: MPIO - slow speed

Some fresh news (on the RN4220, NETGEAR's highest-end device!!!!):

Even with 6.4.0 T34, 2*1 GbE on WRITE is unattainable - only ~50-60 MB/s write speed with awful latency (300-2000 ms), so the storage is almost unusable.

Without MPIO (1 NIC) it's a little faster and can run at 60-70 MB/s with the same shocking latency, but all other services such as SMB and the GUI are out of action under heavy iSCSI load.

 

The conclusion is simple - as things stand, NETGEAR devices have no working MPIO and very limited iSCSI support, and these are not Enterprise or even mid-enterprise level devices.

I do not know what your QA department does, but obviously something is wrong.

Message 22 of 31
WingDog
Guide

Re: MPIO - slow speed

New case:

25526961

 

The RN4220 is inaccessible on its default subnet.

Message 23 of 31
WingDog
Guide

Re: MPIO - slow speed

Incredible answer from the support team:

"If the device is not halted right now, I can't escalate the case - let's wait."

 

Is NETGEAR still joking?

Or is Russian support only focused on home routers, where "reset" is the answer to every case?!

 

Message 24 of 31
BrendanM
NETGEAR Expert

Re: MPIO - slow speed

I understand the case was escalated to L3 support and some recommendations were made regarding the network layout. Please let us know if you are seeing improvements now.

Message 25 of 31