Forum Discussion
btaroli (Prodigy)
Oct 07, 2017
iSCSI performance
I have read different threads in these forums about iSCSI performance woes. I just wanted to post my own recent experience, in the hope that it adds a data point and perhaps helps me understand whether there is anything I could be doing differently in my settings that might help.
I'm doing some test installations of Oracle Key Vault (an appliance for which Oracle publicly offers ISOs for scripted installation) and have been running a test where one VM uses a 240GB thick iSCSI LUN (connected via the VirtualBox storage layer) and another uses a 240GB thin-provisioned VDI over CIFS. Both are VirtualBox VMs (5.1.28) with 2 CPUs and 3GB RAM, running on a macOS host (10.13 + supplemental update).
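For context, the LUN is attached directly through VirtualBox's built-in iSCSI initiator, along these lines (the VM name, controller name, server address, and IQN below are placeholders, not my actual values):

# sketch of attaching an iSCSI target as a virtual disk via the VirtualBox storage layer
VBoxManage storageattach "okv-test" --storagectl "SATA" --port 0 --device 0 \
  --type hdd --medium iscsi \
  --server 192.168.1.50 --target "iqn.1994-11.com.netgear:nas:group0" --tport 3260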
I am observing that the VDI-based VM is totally kicking the ass of the iSCSI LUN VM for package installation and DB setup. These VMs aren't pushing much CPU, but they're doing a fair amount of I/O. I have been watching htop on the NAS as well, and it's not very busy at all. I expect the workload is mostly small random I/O.
I don't really have a specific expectation for how well iSCSI /should/ perform, but the difference between these two is glaring, and it leads me to wonder whether iSCSI could be doing better. As I noted, there are several iSCSI threads... but rather than pollute one of those, I figured I'd start one specifically on this, since I'm comparing a CIFS-backed VDI to an iSCSI LUN.
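To quantify it, one thing I may try is running a small random-I/O benchmark from a throwaway Linux VM against each backing store in turn. Something like this fio job is what I have in mind (the device path is a placeholder, fio would need to be installed in that test VM, and writing to the raw device is destructive, so only against a scratch disk):

# 4K random read/write, direct I/O, 60 seconds, against the virtual disk under test
fio --name=randrw-test --filename=/dev/sdb --direct=1 --ioengine=libaio \
    --rw=randrw --bs=4k --iodepth=16 --runtime=60 --time_based --group_reporting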
Oh, and I did happen to catch the VM on the iSCSI LUN throw some errors during the install... or at least warnings.
There is no shell access to the VM after it's built (the appliance erects a firewall during the build and no shell access is available on the console), but I have found I can mount the filesystems when booted from a LiveCD in a pinch. ;)
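For anyone curious, the LiveCD trick is just standard LVM activation; assuming the appliance's disk layout uses LVM (the volume group and logical volume names below are placeholders), it's roughly:

# from a LiveCD shell: scan for and activate any volume groups, then mount the root LV
vgscan
vgchange -ay
lvs                                       # list logical volumes to find the root LV
mount /dev/mapper/vg_root-lv_root /mnt    # placeholder VG/LV names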
Any thoughts? Configuration bits I might check?
4 Replies
Replies have been turned off for this discussion
- mdgm-ntgr (NETGEAR Employee, Retired)
Are you using thick LUNs with bit-rot protection and snapshots disabled?
- btaroli (Prodigy)
Sorry for the late response! Well, I was using both thick and thin LUNs (as called out in the original post). I did not enable bit-rot protection, because I'd presume COW would just slow things down. Indeed, I don't even run VMs normally (in my Linux environments where I use btrfs) from paths with COW enabled.
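For what it's worth, on those btrfs systems I just mark the VM image directory NOCOW before creating anything in it, roughly like this (the path is a placeholder):

mkdir -p /data/vm-images
chattr +C /data/vm-images   # NOCOW only applies to files created after the flag is set
lsattr -d /data/vm-images   # should show the 'C' attribute on the directory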
- mdgm-ntgr (NETGEAR Employee, Retired)
Thick LUNs are best for VMs. Yes, leaving bit-rot protection disabled is best as well.