Forum Discussion
EstivalG
Mar 09, 2019 · Aspirant
Average transfer speed (read and write) plummeted after replacing hard drives
Hello everyone
TL;DR: I just replaced my 3TB hard drive with brand new 8TB hard drives, and the average speed plummeted, from 70MB/s (switch 1) / 100MB/s (switch 2) to a steady (but slow) 40MB/s on bot...
Hopchen
Mar 10, 2019 · Prodigy
Hi EstivalG
The RN102 is an entry-level NAS. I am actually surprised that you got 70MB/s+ speeds before. I had one a while ago for testing various things, but I would typically be in the range of 30-50MB/s, and that was with a 2TB RAID 1.
Expanding the volume to 8TB means more work for the single-core ARM CPU and 512MB of RAM in the NAS. It is not really unexpected that speeds drop as a result.
You can of course try to factory default the unit in order to get a clean filesystem. However, with 8TB you are probably starting to hit the upper limit of what the unit can realistically handle.
Perhaps it is time to upgrade the NAS, since it seems your data needs have increased? I reckon that if you monitor the NAS via the top command (or similar) while transferring data, you will probably see the unit flat out.
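For example, something along these lines over SSH while a copy is running (iostat comes from the sysstat package; whether it is present on the stock firmware is an assumption):
# overall CPU usage: watch the sy (system) and wa (iowait) columns
top -b -d 2 -n 5
# per-disk throughput and utilisation, if iostat is available
iostat -x 2 5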
- StephenB · Mar 10, 2019 · Guru - Experienced User
FWIW, with my own RN102 I was seeing 70 MB/s read speeds after the reset. I was using it as a backup NAS, configured as JBOD with 6 TB and 8 TB disks. That was a few years back though, and the memory footprint of the current firmware is bigger now than it was back then.
- EstivalG · Mar 10, 2019 · Aspirant
Hi Hopchen
I was quite happy with a 70MB/s speed, but it seems that the speed is very dependent on:
_The computer. My previous desktop topped out at 50MB/s.
_The switch. With a less-than-2-year-old gaming laptop and a new switch, I got 100MB/s with a big file (a quick iperf3 sketch for checking the raw network path is below).
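To separate the network from the NAS itself, a raw throughput test with iperf3 through each switch would look roughly like this (assuming iperf3 can be installed on both ends; otherwise run it between two computers through the same switch):
# on the server side
iperf3 -s
# on the client side, replace the address with the server's IP
iperf3 -c 192.168.1.10 -t 10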
I had never monitored the unit through top before, and I now see a 70-80% system CPU usage (read or write), which is HUGE. I don't remember such high system usage, even in my earliest Linux days (in the 20th century!! Oh my!)
Since the only option left is to rebuild the array, I will try a JBOD setup first, as StephenB suggested, check the system usage, and see if the transfer speed is better.
I will post the results here in a few weeks.
edit: Just tried a direct write:
dd bs=1M count=2048 if=/dev/zero of=test2
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 35.0169 s, 61.3 MB/s
No I/O wait (wa stays at 0.0%), but a 90% system CPU usage. I think a JBOD setup will be better, but I guess a RAID1 configuration will be quite slow; I will check whether disabling X-RAID is better.
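For a read test along the same lines, something like this should work (assuming root access, since the page cache has to be dropped so the read actually hits the disks):
sync; echo 3 > /proc/sys/vm/drop_caches   # flush the page cache first
dd bs=1M if=test2 of=/dev/null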
Thank you everyone ;)
- StephenB · Mar 11, 2019 · Guru - Experienced User
EstivalG wrote:
dd bs=1M count=2048 if=/dev/zero of=test2
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 35.0169 s, 61.3 MB/s
What disks are you using?
- EstivalG · Mar 16, 2019 · Aspirant
Hello everyone, and many thanks for the replies.
StephenB, I now use two 8TB IronWolf drives (ST8000VN0022), which are on the RN102 compatibility list.
I used the unit solely as dedicated storage with no apps whatsoever, and I didn't know there were some tweaks available. So, before destroying the array, I did some tests by tweaking the SMB daemon with SMBPlus, but it didn't go well: still 35-40MB/s.
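For reference, the kind of thing SMBPlus lets you adjust boils down to standard Samba options, something like this in smb.conf (assuming SMBPlus ultimately writes standard Samba settings, which I have not verified):
[global]
    server min protocol = SMB2_02
    server max protocol = SMB2_10
    use sendfile = yes
    socket options = TCP_NODELAY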
Since the old 3TB was still readable and all my data was safe, I destroyed the array and recreated a new one with only one disk, in JBOD mode.
The dd test result was not the same:
dd bs=1M count=2048 if=/dev/zero of=test2
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 17.3837 s, 124 MB/s
top still shows a 90% system CPU usage, but it is now a normal value for a 7200 RPM hard drive.
I then tried to write a 36GB file, and speed is back to 70MB/s, which is really great.
I tried to write the same 36GB file from a Linux computer through NFS, and like Retired_Member said, this is bad: it spawns a lot of nfsd daemons, and the single ARM core can't cope: the speed is below 40MB/s.
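For anyone wanting to reproduce this, the NFS mount on the Linux side is roughly this (hostname and export path are placeholders):
mount -t nfs -o nfsvers=3,rsize=32768,wsize=32768 nas:/data/share /mnt/nas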
So, I tried a CIFS mount on Linux and got a not-so-bad 58-60MB/s.
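The CIFS mount is along these lines (share name and credentials are placeholders):
mount -t cifs //nas/share /mnt/nas -o vers=2.1,username=myuser,iocharset=utf8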
And I tried to read that file back on a Windows computer, copying it to an SSD: 52-55MB/s. Ouch.
Since I have two switches and was aware that one of them was not very efficient, I hooked up the Linux computer to the second switch to see the results: it's way better.
On Linux, on switch 2, reading from the NAS and writing to an SSD, I got 70-80MB/s, with the NAS at 20-40% idle. Wow. The Linux computer is a P5B Deluxe (yay, 10 years old!!), so the NICs may be a little old. And slow.
But I tried the Windows 10 computer hooked to switch 2, and still got 52-55MB/s with a 70% system CPU usage. I checked SMBPlus and the setting was SMB 2.0 minimum and 2.1 maximum, like I set up before destroying the array. On Linux, the mount is set to version 2.1.
There is something odd with the Windows 10 read performance, but...
Since the Linux computer has a Windows 10 system as well, I rebooted that computer into Windows 10 and tried to read the 36GB file: I got the EXACT same speed, with a 70% system CPU usage. It looks like the way Windows 10 reads the file from the NAS puts a lot of system stress on it. There is something I don't understand about the way Windows 10 reads files from the NAS.
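One way to double-check which SMB dialect each client actually negotiates is to look from the NAS side over SSH (recent Samba builds show the protocol version per connection; whether the ReadyNAS build does is an assumption):
# effective protocol limits picked up from smb.conf
testparm -s 2>/dev/null | grep -i protocol
# current connections, with the negotiated dialect on newer Samba versions
smbstatus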
But I need to have a RAID 1 NAS as soon as possible, and having sorted speeds out, I now have some data to compare JBOD mode against RAID 1 mode, so let's try it! I will dig into the Windows 10 read performance later.
I created a new RAID 1 array; it is resyncing right now, and it may take a while: mdstat shows a 100MB/s speed, which works out to about 22 hours of syncing at the current rate. But the speed usually plummets as it gets near the end of the disk.
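The resync progress can be followed like this (the md device name is a guess; /proc/mdstat lists the real one):
cat /proc/mdstat                  # progress, speed and estimated finish time
mdadm --detail /dev/md127         # more detail on the data volume, if that is its name
# optionally raise the minimum resync speed (in KB/s) while the NAS is otherwise idle
echo 100000 > /proc/sys/dev/raid/speed_limit_min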
As soon as the RAID 1 array is ready, I will try it and post the performance results here.
Thanks again for the help!