Forum Discussion
PredatorVI
Aug 25, 2014Tutor
ReadyNAS 4220 iSCSI performance w/ VMWare... #23771040
Is there a tuning guide for using the ReadyNAS 4220 with 10GbE and VMware ESXi 5? Also, what would an average transfer rate be when using 10GbE and iSCSI?
Here is my current setup:
Juniper EX4550
ReadyNAS 4220 w/ 4xGbE, 2x10GbE, 12TB (6x2TB)
IBM x3550 M4 - 2 x Xeon E5-2690 v2 (3.00 GHz, 10-core) / 128 GB RAM / 1.6 TB (3 x 600GB SAS, RAID 0)
The ReadyNAS and ESX hosts all have dual-port 10GbE NICs.
Jumbo frames have been enabled on the switch, NAS and ESX hosts.
The ReadyNAS 10 GbE ports are bonded.
Each ESX host has two vSwitches configured, with one 10GbE interface assigned to each vSwitch. I added both 10GbE vSwitches to the iSCSI initiator's "Network Configuration" tab, giving it two paths (one active, one failover).
I then configured one Ubuntu VM on one of the hosts, connected to the shared VMFS datastore. I placed its VMDKs on the shared iSCSI target and ran a simple test like the one below:
time sh -c "dd if=/dev/zero of=testfile bs=384k count=50k && sync"; rm -rf testfile
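As a side note, that dd invocation writes through the page cache and times the sync as a separate step. A variant that folds the flush into dd's own timing (a sketch, assuming GNU coreutils dd as shipped with Ubuntu) is:

```shell
# conv=fdatasync forces dd to flush the file to storage before it
# exits, so the throughput dd reports includes the write-back and
# is not inflated by data still sitting in the page cache.
dd if=/dev/zero of=testfile bs=384k count=50k conv=fdatasync
rm -f testfile
```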
RESULTS (20 GB file):
149 MB/sec. - VM Disks reside on NAS (iSCSI)
612 MB/sec. - VM Disks reside on local host.
The iSCSI numbers seem low to me for a 10GbE connection. I'm not sure whether it's tuning, something in the configuration, or simply that the software iSCSI initiator in VMware doesn't perform as well.
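To put those numbers in context, some quick arithmetic (a sketch in plain POSIX shell; no assumptions about the hardware beyond the figures quoted above):

```shell
# 10GbE moves 10^10 bits/s; divide by 8 for bytes/s, then by 2^20
# for MiB/s. iSCSI/TCP overhead puts real-world ceilings lower.
link_bytes_per_s=$(( 10000000000 / 8 ))
echo "10GbE line rate: $(( link_bytes_per_s / 1048576 )) MiB/s"

# Size of the dd test file: 384 KiB blocks * 51200 blocks.
total_bytes=$(( 384 * 1024 * 51200 ))
echo "Test file: $(( total_bytes / 1073741824 )) GiB (~20 GB)"

# So 149 MB/s uses only a small fraction of a single 10GbE link,
# which is why the result looks suspicious.
```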
Any guidance is appreciated.
5 Replies
- mdgm-ntgr (NETGEAR Employee, Retired): If you update to a 6.1.9 RC (http://www.readynas.com/forum/viewtopic.php?f=154&t=72282) and disable sync writes, that should help.
Also make sure you use thick LUNs. Thin LUNs may take up less space on a volume, but there is a performance trade-off.
I would suggest you put the NAS on a UPS.
If you could also contact our support team and let me know the case number, that would be good.
Our support team may have some additional suggestions that may help.
- I'll work on that in the morning... I should just message you directly. Thanks!
- I upgraded the build, disabled sync writes, and also discovered that the port group on the switch didn't have the right MTU set. I validated all the other MTUs.
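For anyone else chasing an MTU mismatch like this, jumbo frames can be verified end-to-end from the ESXi shell with vmkping and the don't-fragment flag (a sketch; the NAS address below is a placeholder, not from this thread):

```shell
# 8972 = 9000-byte MTU minus 20 bytes of IPv4 header and 8 bytes of
# ICMP header. With -d (don't fragment), the ping fails outright at
# any hop whose MTU is below 9000 instead of silently fragmenting.
vmkping -d -s 8972 192.168.1.100   # placeholder: NAS iSCSI IP
```

If this ping fails while an ordinary vmkping succeeds, some device in the path (switch port, port group, vmkernel port, or NAS interface) still has a standard MTU.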
I'm now getting 415-425 MB/s.
I've created a thick LUN and am migrating the VM's over from the thin LUN. I didn't see a way to just convert the LUN in place.
It will be a while before I can get a UPS on the NAS. :(
- For the sake of completeness, I migrated all data over to a thick-provisioned LUN. I'm averaging around 450 MB/s write speeds. However, after several consecutive runs, disk latency caused a 75% drop in throughput and started triggering disk-latency warnings in the vCenter console.
After some searching and tweaking, I found a suggestion that when using Round-Robin multipathing (dual paths) in VMware, the IOPS setting on the device should be dropped from its default of 2000 to 1. This helped a fair amount: data now transfers over both iSCSI paths to the NAS, and back-to-back runs drop only about 40%.
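The path-policy change described above maps to esxcli on ESXi 5.x roughly as follows (a sketch; the naa.* device identifier is a placeholder for whatever LUN the first command lists):

```shell
# List devices and their current path selection policy (PSP).
esxcli storage nmp device list

# Set the LUN to Round Robin, if it is not already.
esxcli storage nmp device set \
    --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_RR

# Rotate to the next path after every single I/O instead of after a
# large batch, spreading load across both iSCSI paths.
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.XXXXXXXXXXXXXXXX --type iops --iops 1
```

The trade-off is slightly more path-switching overhead per command, which is usually negligible next to the gain from using both links.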
I am working on getting a UPS for it to get a bit more out of it.
What I don't know is whether 450 MB/s write speed is what I should be expecting over dual 10GbE interfaces on the ReadyNAS 4220.
Are there any memory or RAID controller upgrades that might help in the future? Any other tips? I know that on a physical server a battery-backed write cache can do wonders. Curious what the configuration is on the 4220.
- mdgm-ntgr (NETGEAR Employee, Retired): The RAID-10 suggestion that the tech handling your case made is a good one.
We don't support RAM upgrades or RAID controller upgrades.
As newer faster disks become available that should help with performance a little.
It's possible that future firmware updates may bring changes that help with performance a little, but even then the difference may be marginal. It depends a fair bit on whether improvements made to Btrfs over time help performance.
Can you attach your logs zip file to your case?