
Forum Discussion

russrtw1
Aspirant
Oct 25, 2010

Poor performance of 4200 on XenServer 5.60

Hardware:

Two IBM xSeries servers with dual Xeon Quad-Core processors and 64GB RAM each
ReadyNAS 4200 12TB model (no 10G Ethernet): LACP bonding of both 1 gig ports
GS724TR gigabit switch with latest firmware (as of a week or two ago)
CAT5E (big E) used between all devices

Software:

XenServer 5.60 Enterprise with both servers and ReadyNAS in the same managed "Pool"
RAIDiator 4.2.15
Various Server 2003 x86/x64 and Server 2008 R2 x64 VMs running in this pool

Setup:

Both servers configured within XenServer for standard NIC load-balancing/failover on both ports (plugged into the GS724TR)
ReadyNAS 4200 configured for LACP 802.3ad bonding (properly set up on both the ReadyNAS and the GS724TR)
iSCSI used for VM storage via various LUNs created on the ReadyNAS' iSCSI target (see the CLI sketch after this list)
Standard frames (non-jumbo) to ensure interoperability with other network devices
All performance optimizations from the forums and NETGEAR's recommendations applied to the 4200
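
For reference, the server-side pieces were created roughly along these lines. This is a sketch rather than a transcript: the UUIDs, target IP, and IQN below are placeholders, and xe syntax details may differ slightly between XenServer releases.

    # Create a bonded network from two physical NICs (UUIDs are placeholders,
    # looked up via 'xe network-list' and 'xe pif-list')
    xe network-create name-label=bond0-net
    xe bond-create network-uuid=<network-uuid> pif-uuids=<pif1-uuid>,<pif2-uuid>

    # Attach the ReadyNAS iSCSI target as a shared storage repository
    # (IP, IQN, and SCSI ID are placeholders)
    xe sr-create name-label=RN4200-iSCSI type=lvmoiscsi shared=true \
        device-config:target=192.168.1.50 \
        device-config:targetIQN=iqn.1994-11.com.netgear:rn4200:lun0 \
        device-config:SCSIid=<scsi-id>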

Problems:

Overall performance is noticeably slow. The closest comparison we could make was with a Dell MD3000i SAN running on VMware vSphere, which easily handles the 10+ VMs that we're running on the 4200. Latency seems to be the biggest problem on the 4200... but data transfers are slower than expected as well.
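
For anyone who wants to reproduce the comparison, a quick sanity check of raw throughput and per-device latency from inside a Linux VM could look like the following (paths and sizes are arbitrary examples; oflag=direct bypasses the guest page cache so the storage path is what actually gets measured):

    # Sequential write test, ~1GB, bypassing the guest's page cache
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct

    # Watch per-device throughput and latency (await) while the test runs
    iostat -x 2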

Questions:

1) Does anyone NOT recommend using bonding/teaming for heavy-traffic situations such as this?

2) Has anyone encountered performance issues using LACP bonding on a 4200 (or any ReadyNAS, for that matter)?

10 Replies

  • Hi Russ - I see no answer to this question. Did you get an answer to this issue through tech support, and were you able to get performance improved?
  • They're putting together some test environments at Netgear to find out what is causing the big hits in performance in VM environments with higher disk IO. I'll update the thread when I find out the results.
  • It turns out the issue wasn't anything to do with LACP; rather, it seems related to disk caching on the WD drives that are included. They haven't told me formally what the problem is, but I should know soon.
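    In the meantime, if you want to see where your drives stand, the write-cache state can be queried over SSH (assuming root SSH access is enabled; /dev/sda is just an example device):

        # Show the drive's current write-cache setting
        hdparm -W /dev/sda

        # Disable (0) or re-enable (1) the write cache -- test carefully
        hdparm -W0 /dev/sda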
  • russrtw wrote:
    It turns out the issue wasn't anything to do with LACP; rather, it seems related to disk caching on the WD drives that are included. They haven't told me formally what the problem is, but I should know soon.

    Thanks for your reply.

    Please check your PM.
  • I am setting up a similar environment with Citrix (4.0), VM, and a 12TB 4200. I am very interested in learning the final outcome from NG.
  • I think they're close to releasing a new update that addresses a lot of these issues that we're experiencing (and that NG is noticing). I'll update ASAP.
  • Just an update to this thread. Since drives 1-4 and 5-12 are on different controllers (on the 4200), Netgear Level 3 support and I rebuilt the entire unit after copying the data to a new "temp" 4200.

    We ultimately did two things for the best performance with multiple VMs:

    1) Set up three RAID 10 volumes: disks 1-4, 5-8, and 9-12. Created one NFS share for each volume and distributed the VMs across those three NFS shares.

    2) Used NFS instead of iSCSI (slightly less overhead) and changed the NFS thread count to 6 (rough commands below).

    This alone made an enormous difference. I believe the RAID 6 write penalty under the previous X-RAID2 configuration really hurt overall performance, and we just didn't see this unit shine until this reconfiguration.
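
    Roughly, each volume was attached as its own shared NFS SR from XenServer; the commands below are a sketch rather than a transcript (server IP and export paths are placeholders):

        # One shared NFS storage repository per RAID 10 volume
        xe sr-create name-label=RN4200-vol1 type=nfs shared=true \
            device-config:server=192.168.1.50 device-config:serverpath=/vol1
        xe sr-create name-label=RN4200-vol2 type=nfs shared=true \
            device-config:server=192.168.1.50 device-config:serverpath=/vol2
        xe sr-create name-label=RN4200-vol3 type=nfs shared=true \
            device-config:server=192.168.1.50 device-config:serverpath=/vol3

    On the NAS side, the NFS thread count lives in the usual Debian-style location (this may vary by RAIDiator version):

        # /etc/default/nfs-kernel-server -- then restart the NFS service
        RPCNFSDCOUNT=6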
  • How about this year: have any modifications been made so far that boost its overall performance?
  • Not as far as I know. EXT3/4 is just inferior when it comes to running virtualization datastores.

    I've installed several of their ReadyDATA 5200 units since then (running Nexenta's version of ZFS), with configurations consisting of SATA, SAS, and SSD drives. These units blow away the 4200s for close to the same cost.

    The ReadyDATAs are much faster, more capable units with online drive-shelf expansion, unlimited "zero performance cost" snapshots, and built-in block replication/dedupe. ZFS itself (originally created by Sun) is an almost identical file system to NetApp's WAFL: both are RoW (Redirect-on-Write), vs file systems like EXT that are CoW (Copy-on-Write) in nature. NetApp does have an advantage in that its dedupe can be run asynchronously, vs ZFS's being real-time. I don't run ZFS dedupe on VMware / XenServer datastores, but if you have enough write-cache SSDs attached to the RAID groups, you can potentially alleviate the ZFS dedupe penalty altogether (quick example below).

    Probably a more lengthy answer than you were expecting, but hope it helps.
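
    As a concrete example, on a generic ZFS box the write-cache SSDs and the dedupe setting look roughly like this (pool/dataset names and device paths are placeholders; the ReadyDATA itself drives all of this through its GUI):

        # Mirrored SSD write log (SLOG) to absorb synchronous VM writes
        zpool add tank log mirror /dev/sdx /dev/sdy

        # Optional SSD read cache (L2ARC)
        zpool add tank cache /dev/sdz

        # Dedupe is a per-dataset property; it's inline, so it needs RAM/cache
        zfs set dedup=on tank/vmstore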
