Forum Discussion
sstillwell - Tutor
Apr 21, 2019
ReadyTIER with ReadyNAS Pro 6
Hi,
I've got a NAS Pro 6-bay system, currently on NASOS 6.9.5 hotfix 1 and loaded with 4 x WD RED 8 TB drives in FlexRAID RAID 5. Primary use case for the unit is as an NFS server for my VMware ESXi system with about 400 GB of active VMs. There's also a few terabytes of general file server stuff, but it's not frequently accessed.
I've ordered a pair of Samsung 860 PRO 512 GB SSDs and 2.5->3.5 mounting adapters that should let them mount correctly to the NAS sleds, and they should be here Tuesday.
So...here's my putting-the-cart-before-the-horse question: Do you know of any reason this shouldn't work when adding a 512 GB RAID1 SSD tier to the existing volume? At 6.9.5, I should be able to tier metadata, and when 6.10 shows up as an available update (I'm not going hunting for it), I can switch it to a data tier. Modified data per day/week really shouldn't exceed the capacity of the SSDs...this is mostly stuff like mail servers and Gitlab servers for a really small (< 10 people) organization.
For that matter, do you know of any reason that 6.10 shouldn't run on this machine? It's populated with 4 GB RAM.
Thanks in advance...
19 Replies
- StephenB - Guru - Experienced User
sstillwell wrote:
So...here's my putting-the-cart-before-the-horse question: Do you know of any reason this shouldn't work when adding a 512 GB RAID1 SSD tier to the existing volume?
It should work in the Pro-6 (though you do need to change to flexraid in order to use ReadyTier).
The main challenge is adapting the trays for the SSDs, and you've already found mounting brackets for that part.
ReadyTier isn't caching the metadata; it moves it to the SSDs, so if the SSD RAID group fails you will lose the volume. Your SSDs will reach their write limits at about the same time, so you might want to replace one of them about halfway through its expected life to stagger the replacements.
Sure, understood re: not a caching solution. For what it's worth, the old aluminum Mac Pro towers have slide-in trays for 3.5" drives and need an identical adapter for 2.5" SSDs, so parts that make a 2.5" drive fit mechanically and electrically are pretty easy to find.
The 860 Pro 512 GB has a guaranteed write lifetime of 5 years or 4,800 TB written (EDIT: it's 600 TBW for the 512 GB drives...still a long lifetime)...It's gonna last a while. Point taken, though. It's unlikely that they'll both die at EXACTLY the same time, but it's sure possible (and Murphy says "likely") that the second one would die before you finished re-syncing the array after replacing the first one.
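StephenB's staggered-replacement suggestion is easy to reason about with a quick endurance estimate. A minimal sketch, assuming a hypothetical 50 GB/day of writes and a rough 2x overhead factor (neither figure is from this thread):

```python
# Back-of-envelope endurance check for the 860 PRO 512 GB (600 TBW rating).
# The daily write volume and amplification factor are assumptions for
# illustration, not figures from this thread.
TBW_RATING_TB = 600          # Samsung's rated terabytes written
DAILY_WRITES_GB = 50         # hypothetical daily write volume
WRITE_AMPLIFICATION = 2.0    # rough allowance for filesystem/metadata overhead

days = (TBW_RATING_TB * 1000) / (DAILY_WRITES_GB * WRITE_AMPLIFICATION)
print(f"~{days:.0f} days (~{days / 365:.0f} years) to reach the TBW rating")
```

Even with generous overhead assumptions, the rating outlasts the hardware by a wide margin, which is why wear-out at exactly the same time on both mirror members is the more realistic concern.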
Thanks for the information!
I've already changed from X-RAID to Flex-RAID.
- Sandshark - Sensei - Experienced User
I'm pretty sure the Pro is only SATA2, so I wonder how much you gain with SSDs.
Just to confirm...I understand that if the SSD RAID group fails, your volume goes bye-bye, but do I remember correctly reading that you CAN remove the raid group via the UI and the metadata (& data if you're using it) migrate back to the spinning RAID group?
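Sandshark's SATA II point can be put in rough numbers. A sketch, assuming a 4 KiB I/O size (an assumption, not a figure from the thread): the 3 Gb/s link caps sequential throughput well below a modern SSD, but leaves plenty of headroom for small random I/O.

```python
# SATA II: 3 Gb/s link with 8b/10b encoding -> ~300 MB/s of usable bandwidth.
# That caps an SSD's ~550 MB/s sequential speed, but small random I/O is
# latency-bound, so the IOPS ceiling at 4 KiB sits far above what a NAS needs.
LINK_GBPS = 3.0
usable_mb_s = LINK_GBPS * 1e9 * (8 / 10) / 8 / 1e6   # encoding overhead, bits -> bytes
block_kib = 4                                        # assumed I/O size
iops_ceiling = usable_mb_s * 1e6 / (block_kib * 1024)

print(f"SATA II usable bandwidth: ~{usable_mb_s:.0f} MB/s")
print(f"IOPS ceiling at {block_kib} KiB blocks: ~{iops_ceiling:,.0f}")
```

So for the random-I/O workloads tiering targets (VM datastores, mail, Git), SATA II is unlikely to be the bottleneck, even if sequential transfers are capped.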
- StephenB - Guru - Experienced User
sstillwell wrote:
but do I remember correctly reading that you CAN remove the raid group via the UI and the metadata (& data if you're using it) migrate back to the spinning RAID group?
I don't recall seeing that myself, but it really ought to work that way.
- ovidiu - Aspirant
I have the RN316 (similar to the ReadyNAS Pro), running 6.10 RC2.
Using a separate RAID 1 volume of two 256 GB SSDs did not change much in how I use it, and I want to remove the tiered cache (that is a separate volume).
How do I CORRECTLY flush and disassemble the small two-disk cache volume so the data is flushed? I use it only as a data cache, not metadata, but I could lose info nevertheless.
I think I am better off with a RAID 6 on my DS315. It's mostly cold and warm storage now.
(I bought a Synology DS1019+ for hot data and transcoding.)
Okay, I fibbed about waiting for the update to be prompted. I had some slack time tonight and went ahead and shut down all VMs and updated the NAS to 6.10.0...it succeeded and promptly updated itself to 6.10.0 hotfix 2 upon rebooting, and all appears to be well. SSDs and mounting adapters are still projected to arrive tomorrow, so hopefully we press the big "go faster" button then.
I haven't been able to find the reference I saw earlier about removing the SSD tier once it's in place, but I would SWEAR that it said you could remove the tier RAID group without losing the volume. Would REALLY like to find that again before I take irreversible steps.
Ah, found it!
kohdee mentions it in a different thread here: https://community.netgear.com/t5/Using-your-ReadyNAS-in-Business/ssd-tiering/m-p/1661052/highlight/true#M151050
Hopefully I'm reading this correctly as saying you can remove the SSD tier(s) without data loss.
A tier is just a special RAID group. You can incrementally expand a tier as long as your Tier is not a RAID 0, and you can also remove a tier. When you are looking at your System > Volumes screen, select the RAID group Tier (the one that says SSD) and click the circle X underneath the dropdown box. It will restore you to pre-tier configuration.
So far, so good...
Metadata took about 1.5 hours to sync to the SSDs, and the entire volume resync is 98.3% done...maybe about 2.5 hours total to complete? Totally guessing. A total of 8 GB of metadata and so far 15 GB of data allocated to SSD.
Best performance seen in the graphs so far is 3500 write/1700 read IOPS. Not bad. CPU temperature went up about 10C when it was syncing metadata, but it's already dropped back down to normal.
Resync 98.65% done while I've been sitting here blathering.
- StephenB - Guru - Experienced User
sstillwell wrote:
Best performance seen in the graphs so far is 3500 write/1700 read IOPS.
Resync 98.65% done while I've been sitting here blathering.
I'd be interested in seeing the final iops numbers, now that the resync is completely done.
I haven't found a good real-world benchmark yet across the network, but running a synthetic fio benchmark while SSH'ed into the NAS against the local filesystem, I get some quite nice numbers.
When using queue depth 64, file test size of 10GB, random r/w I get:
12,064 IOPS read with 2.532 ms avg latency
12,057 IOPS write with 2.767 ms avg latency
I should run top in another window to see if I'm CPU bound...one moment...
Yeah, it's CPU-bound at that point. I guess I'll try running FIO from a remote machine to a share on the NAS and see what I can get.
*EDIT* Yeah, running fio on a fire-breathing Mac Pro via an SMB share is not nearly as impressive: ~550 IOPS read or write when unbuffered. The only changes in parameters are the pathname and swapping libaio (Linux-specific) for posixaio.
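Incidentally, those local numbers line up with what Little's law predicts for a queue depth of 64. A quick sanity check, using only the figures reported above:

```python
# Little's law for storage: throughput (IOPS) ~= outstanding I/Os / mean latency.
# With iodepth=64 and the average latencies measured above, the predicted
# combined rate should land close to the measured 12,064 + 12,057 IOPS.
IODEPTH = 64
read_lat_s, write_lat_s = 2.532e-3, 2.767e-3
mean_lat_s = (read_lat_s + write_lat_s) / 2

predicted_iops = IODEPTH / mean_lat_s
measured_iops = 12064 + 12057
print(f"predicted ~{predicted_iops:,.0f} IOPS vs measured {measured_iops:,} IOPS")
```

The two agree within a fraction of a percent, which suggests the queue really was kept full and the SSD tier, not fio itself, set the pace for the local test.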
For reference, I'm using the following fio parameters...
[random-read]
ioengine=libaio
iodepth=64
direct=1
invalidate=1
rw=randrw
size=10G
directory=/volume1/main/IOTest/fio-testing/data
Yeah, VMware's datastore stats are showing an average so far of 7.2 ms read and 1.4 ms write latency. That's good enough that I'd run a small production SQL box on it even if it were physical hardware, much less virtualized.
I'll shut up now. :)
Running an iozone benchmark remotely over GigE against an SMB share. We'll see how it fares. Won't give me any real IOPS numbers, but more IOPS and lower latency should translate to better throughput in a wider range of block and file sizes.
Okay, I ran a rather lengthy iozone benchmark against the unit over gigabit Ethernet to an SMB share from my Mac Pro. For the most part everything caps at around 100 MB/sec as you'd expect, but there are some interesting graphs for re-read, re-write type operations where caching and SSD migration come into play.
Attached the Excel spreadsheet produced by iozone along with the column graphs I added for each table. *EDIT* Can't post excel spreadsheets, and I can't find a good way to format it to PDF...bummer.
I guess the takeaway from this is that for many use cases, the network is going to be your limiting factor. Doesn't mean it's not worth doing for those things that DO benefit from tiering. The things that benefit REALLY benefit.
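For context on that ~100 MB/sec plateau, the wire-level ceiling of gigabit Ethernet is easy to estimate. A sketch, assuming a standard 1500-byte MTU TCP stream:

```python
# Gigabit Ethernet carries at most ~119 MB/s of TCP payload: each full-size
# frame puts 1460 payload bytes inside 1538 bytes of frame, preamble, and
# inter-frame gap on the wire. Real SMB transfers land a bit below that.
LINK_BPS = 1e9
payload_per_frame = 1460      # 1500 MTU minus 20 B IP and 20 B TCP headers
wire_per_frame = 1538         # frame + preamble + inter-frame gap

ceiling_mb_s = LINK_BPS * (payload_per_frame / wire_per_frame) / 8 / 1e6
print(f"GigE payload ceiling: ~{ceiling_mb_s:.0f} MB/s")
```

In other words, once sequential transfers hit ~100 MB/sec over gigabit, the link itself is the wall; tiering can only show its value on latency-sensitive, small-block workloads like the VM datastore.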