Forum Discussion
tburghart
May 14, 2013Aspirant
Best iSCSI approach on ReadyNAS NV
First, let's skip the responses about the NV not having enough CPU power to use iSCSI effectively :lol:
So, I have an old NV with 1GB RAM, 4x750GB 7200 RPM X-RAID, RAIDiator 4.1.10 and iSCSI target v1.4.20.2-readynas-1.0.5
It's been chugging along continuously since it was new, in a variety of configurations, and aside from being woefully underpowered it has been consistently reliable for however many years that's been. For the past few years it's functioned primarily as an NFS-mounted backup directory hanging off an OS X server, along with a few AFP shares accessible across the network, but I've been toying with connecting it via iSCSI instead so that I can treat it as a native GPT-partitioned HFS+ drive, with the AFP sharing handled by the much more capable OS X server.
I've created a 64GB test LUN on the filesystem and it works well enough that I think I'll convert the whole array over to iSCSI. The question now is which iSCSI configuration on the NV will make the most of the unit's paltry performance. I've noticed that an iSCSI LUN on top of a flat file doesn't seem to use the NV's cache memory very effectively, and I'm wondering how best to improve on that. For instance, writing a half-GB file via AFP clearly caches the file quite effectively, absorbing it at about 172% of the device's local filesystem write speed (the speed at which dd can create a multi-GB file as root from the NV command line). The same file written via iSCSI to the file-based LUN arrives at just 59% of the local write speed. I didn't test NFS this time, but historically it has come in around 90%, IIRC. I have iSCSI header and data digests turned off at the moment to conserve CPU cycles, and at no point during my tests did the NV's CPU max out.
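For concreteness, the local baseline I mean is something like this (path and size are illustrative; on the NV itself the file would sit on the data volume, e.g. under /c, while /tmp keeps the sketch self-contained):

```shell
# Rough local write-speed baseline via dd, as described above.
# conv=fsync forces the data to disk before dd reports a rate, so the
# figure isn't just the page cache absorbing the write.
TARGET=/tmp/ddtest.bin   # on the NV you'd point this at the data volume
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync
rm -f "$TARGET"
```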
While the benefits of the GPT/HFS+ configuration still outweigh the performance hit for my use case, I'd obviously like the best performance I can get. So I'm wondering whether removing the /c filesystem and building the iSCSI target LUN directly on the block RAID device /dev/c/c (aka /dev/mapper/c-c), and/or configuring the LUN Type as block instead of fileio, would improve or degrade performance.
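If the NV's target is IET (iSCSI Enterprise Target), as the v1.4.20 version string suggests, the file-vs-block choice is just the LUN's Type in ietd.conf. A sketch (the target IQN and paths are illustrative, not taken from my box):

```
# /etc/ietd.conf -- illustrative names and paths
Target iqn.2013-05.local.nv:array0
    # file-backed LUN on the /c filesystem (buffered IO, via the page cache):
    Lun 0 Path=/c/iscsi/lun0.img,Type=fileio
    # or, directly on the RAID block device (direct IO, bypassing the cache):
    # Lun 0 Path=/dev/c/c,Type=blockio
```

As I understand IET's behavior, Type=fileio goes through the page cache while Type=blockio does direct IO against the device, so blockio trades cache effectiveness for lower memory pressure.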
Clearly, where the cache sits in the stack will have an effect. I'd assumed it lived between the kernel and filesystem, but the difference between iSCSI and AFP IO throughput suggests that it may occupy a position higher up in the stack. So, is there a way to get the GB of RAM I've got in the NV to effectively cache the iSCSI IO, and what LUN configuration is likely to perform best?
Thanks in advance,
- Ted
3 Replies
- chirpa (Luminary): Maybe try Flex-RAID mode: create a RAID-? volume, then delete it and point the iSCSI LUN there. FrontView will complain if it can't create shares on a volume.
I wish there was a raw block device option, but that would have taken too much re-development in FrontView code to handle it properly.
Also, I haven't checked, but can the iSCSI target be backed by a number of sparse files? Kinda like VMFS being chunked files: say (32) 2GB files, all presented as one large 64GB block device. Then maybe the system could cache partial files more easily. That could also decrease the risk of corruption; if one file gets eaten by EXT4 bugs, only part of the data is gone, not the whole shebang as with the stock implementation. One wrong move by EXT4, and a fsck can send your 64GB flat file to lost+found heaven.
- tburghart (Aspirant): That's a good point about ext4 adding an additional corruptible layer, and it argues strongly for building the LUN directly on the RAID device. I have no qualms about mucking around behind FrontView's back; I've done it so much over the years that my shares aren't managed through it anyway, because I use settings it doesn't offer 8)
The iSCSI target doesn't offer sparse allocation or over-commitment, but since I'm only planning to configure one big LUN that wouldn't really offer any benefits. I'll probably just configure a small volume to keep FrontView happy and hand the rest over to iSCSI - or just turn FV off completely and manage everything from the shell.
That still leaves the question of whether there's any way to get the cache into the iSCSI stack - any guidance there?
- Ted
- chirpa (Luminary): It's been so many years since I've been hands-on with the Sparc platform that I'm not the best one to help here. Maybe someone else who still uses it a lot can comment.
There may be some /proc kernel VM settings you could tweak to better shove the iSCSI blocks down the cache's throat.
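Something like the stock Linux writeback knobs, for instance (values purely illustrative, untested on RAIDiator):

```
# Possible /etc/sysctl.conf tweaks -- illustrative values, untested on the NV.
# Allow dirty pages to grow to 40% of RAM before writers block:
vm.dirty_ratio = 40
# Start background writeback once dirty pages reach 10% of RAM:
vm.dirty_background_ratio = 10
# Flush dirty pages older than 30 seconds:
vm.dirty_expire_centisecs = 3000
```

Apply with `sysctl -p`, or echo the values into the matching files under /proc/sys/vm/.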