Forum Discussion
The_Duke_of_Ice
May 18, 2011 · Follower
ReadyNAS 3100 iSCSI'd to ESX 4.1 vCenter, very slow
I am experiencing very slow VM performance when using the ReadyNAS 3100 (4.2.17) as an iSCSI datastore for my VMs. ESX 4.1 + vCenter 4.1 + ReadyNAS 3100 in a private environment. I have my ESX VMKern...
DAT1
Jul 15, 2011 · Aspirant
I'm running a pair of environments. One has 2 x ReadyNAS 2100s with 4 TB raw each, 2 x NETGEAR smart switches, and 3 x Dell 2900 servers running ESX 3.5 u4, with about 25 VMs hosted on NFS shares on the two 2100s. I get fantastic performance out of this setup: really fast disk access with very low latency. It beats some of the low-end NetApps I used to use. The two ReadyNAS 2100s are still on 4.2.13.
Then I have a near-identical setup on another network (I've gone through all of the switch, ESX, and ReadyNAS configs over and over again to make sure I haven't missed something). The main difference is that in the slow environment described below I have a two-port LAG directly connecting the two switches, while in the high-performance environment described above each switch has a two-port LAG back to a "core switch", so really one additional hop, yet much lower latency.
The slow network is 2 x ReadyNAS 2100s with 8 TB raw each, 2 x NETGEAR smart switches, and 3 x Dell R610 servers running ESXi 4.1 u1, with only about 9 VMs hosted on NFS shares on the two 2100s. I get horribly slow performance out of both, which I can now see is directly related to incredibly high latency accessing the NFS shares on the ReadyNAS. I also got horribly slow performance when the VMs ran from the much older HP DL380 G3s (the same servers VMware uses/used for their VCP classes) with ESX 3.5 u4.
Before replacing the 8-year-old servers I thought the slowness was due to the servers being old and cooked, but with the new servers I realize it's a SAN latency issue, since ESXi 4.1 u1 shows this in its performance monitoring.
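Not from the thread, but as a rough way to sanity-check the latency ESXi's performance charts are reporting, here's a small Python sketch that times fsync'd writes against a mounted path. The mount point `/mnt/readynas` is just a placeholder; run it against the NFS export from any Linux client to get a ballpark per-operation latency (it won't match ESXi's GAVG counters exactly, but tens-of-milliseconds averages would confirm a storage-side problem).

```python
import os
import statistics
import tempfile
import time

def measure_write_latency(path, ops=50, size=4096):
    """Time small synchronous writes under `path` to approximate
    per-operation storage latency. Returns (avg_ms, worst_ms)."""
    payload = os.urandom(size)
    samples = []
    fname = os.path.join(path, "latency_probe.bin")
    with open(fname, "wb") as f:
        for _ in range(ops):
            start = time.perf_counter()
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force each write through to the backing storage
            samples.append((time.perf_counter() - start) * 1000)
    os.remove(fname)
    return statistics.mean(samples), max(samples)

if __name__ == "__main__":
    # Point this at the NFS mount backing the datastore, e.g. "/mnt/readynas".
    # A temp dir is used here only so the sketch runs anywhere.
    with tempfile.TemporaryDirectory() as d:
        avg, worst = measure_write_latency(d)
        print(f"avg {avg:.2f} ms, worst {worst:.2f} ms")
```

Local disk should come back well under a millisecond per op; a healthy gigabit NFS mount a few milliseconds. Anything consistently in the tens or hundreds of milliseconds matches the "incredibly high latency" described here.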
When I was using the old HP servers with ESX 3.5, both ReadyNAS units were on 4.2.13. I then upgraded both to 4.2.17 and still had high latency; they are still running 4.2.17 now that the new Dell R610 servers connect to them.
Your original issue is probably the exact same thing; it shouldn't have anything to do with iSCSI vs. NFS, but is a latency issue accessing the ReadyNAS.
I have not been able to figure out what is causing this incredibly high latency, but I agree: it's not normal, nor is it anywhere near acceptable for running servers from it.
Netgear, can you guys take a look at this for us?