sstillwell
Apr 21, 2019 Tutor
ReadyTIER with ReadyNAS Pro 6
Hi,
I've got a NAS Pro 6-bay system, currently on NASOS 6.9.5 hotfix 1 and loaded with 4 x WD RED 8 TB drives in FlexRAID RAID 5. Primary use case for the unit is as an NFS server for my VMwar...
- Apr 21, 2019
sstillwell wrote:
So...here's my putting-the-cart-before-the-horse question: Do you know of any reason this shouldn't work when adding a 512 GB RAID1 SSD tier to the existing volume?
It should work in the Pro-6 (though you do need to change to flexraid in order to use ReadyTier).
The main challenge is adapting the trays for the SSDs, and you've already found mounting brackets for that part.
ReadyTier tiers the metadata rather than caching a copy of it, so if the SSD RAID group fails you will lose the volume. Your SSDs will also reach their write limits at about the same time, so you might want to replace one of them about half-way through its expected life, so you can stagger the replacements.
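If you want to time that mid-life swap, you can check the wear directly. A minimal sketch, assuming SSH access is enabled and smartctl is available on the NAS (ReadyNAS OS is Debian-based, so it usually is), and that the SSDs appear as /dev/sdc and /dev/sdd — device names and the exact SMART attribute name vary by vendor:
# Dump SMART attributes and pick out the wear/lifetime counters
smartctl -A /dev/sdc | grep -iE 'wear|percent|lifetime'
smartctl -A /dev/sdd | grep -iE 'wear|percent|lifetime'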
sstillwell
Apr 24, 2019 Tutor
Metadata took about 1.5 hours to sync to the SSDs, and the entire volume resync is 98.3% done...maybe about 2.5 hours total to complete? Totally guessing. A total of 8 GB of metadata and so far 15 GB of data allocated to SSD.
Best performance seen in the graphs so far is 3500 write/1700 read IOPS. Not bad. CPU temperature went up about 10C when it was syncing metadata, but it's already dropped back down to normal.
Resync 98.65% done while I've been sitting here blathering.
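Side note in case it's useful to anyone: rather than guessing at the completion time, you can watch the raw resync progress over SSH. ReadyNAS OS uses Linux mdadm underneath, so this is a standard /proc/mdstat read:
# Refresh the md resync progress report every 10 seconds
watch -n 10 cat /proc/mdstat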
- StephenB Apr 24, 2019 Guru - Experienced User
sstillwell wrote:
Best performance seen in the graphs so far is 3500 write/1700 read IOPS.
Resync 98.65% done while I've been sitting here blathering.
I'd be interested in seeing the final IOPS numbers, now that the resync is completely done.
- sstillwell Apr 24, 2019 Tutor
I haven't found a good real-world benchmark across the network yet, but running a synthetic fio benchmark against the local filesystem while SSH'ed into the NAS, I get some quite nice numbers.
With queue depth 64, a 10 GB test file, and random r/w, I get:
12,064 IOPS read with 2.532 ms avg latency
12,057 IOPS write with 2.767 ms avg latency
I should run top in another window to see if I'm CPU bound...one moment...
Yeah, it's CPU-bound at that point. I guess I'll try running FIO from a remote machine to a share on the NAS and see what I can get.
*EDIT* Yeah, running fio on a fire-breathing Mac Pro via SMB share is not nearly as impressive: I'm getting ~550 IOPS read or write when unbuffered. The only changes in parameters are the pathname and switching the ioengine from libaio (Linux-specific) to posixaio.
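In case anyone wants to reproduce the remote run, the macOS-side job file is roughly the one below (the original Linux parameters follow at the end of this post); the /Volumes/main mount point is specific to my setup, so adjust to taste:
[random-read]
# Only changes from the Linux job below: ioengine and directory
ioengine=posixaio
iodepth=64
direct=1
invalidate=1
rw=randrw
size=10G
directory=/Volumes/main/IOTest/fio-testing/data
Either way, you just point fio at the job file, e.g. fio nas-randrw.fio (the filename is hypothetical — whatever you saved the job as).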
For reference, I'm using the following fio parameters...
[random-read]
ioengine=libaio
iodepth=64
direct=1
invalidate=1
rw=randrw
size=10G
directory=/volume1/main/IOTest/fio-testing/data
- sstillwell May 13, 2019 Tutor
StephenB wrote:
sstillwell wrote:
Best performance seen in the graphs so far is 3500 write/1700 read IOPS.
Resync 98.65% done while I've been sitting here blathering.
I'd be interested in seeing the final IOPS numbers, now that the resync is completely done.
I've enabled file indexing in the NAS control panel, and my Unitrends backup appliance VM is doing a dedupe run across the backup repository (an NFS share), which is fairly intensive. I'm still getting around 3800 write IOPS and 1100 read IOPS pretty consistently, graphed at one-minute intervals over the past 12 hours. I'm pretty sure that's more than 4 x 5400 RPM drives would manage by themselves. :)
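As a ballpark sanity check: at 5400 RPM a platter takes about 11 ms per revolution, so average rotational latency alone is ~5.6 ms; add seek time and you get very roughly 70-100 random IOPS per spindle, call it 300-400 IOPS for the four-drive group. Sustaining 3800 write IOPS is an order of magnitude past that, so the SSD tier is clearly absorbing those writes.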
- StephenB May 13, 2019 Guru - Experienced User
Thx for the update