Forum Discussion
EstivalG
Mar 09, 2019 · Aspirant
Average transfer speed (read and write) plummeted after replacing hard drives
Hello everyone
TL;DR: I just replaced my 3TB hard drives with brand new 8TB drives, and the average speed plummeted, from 70MB/s (switch 1)/100MB/s (switch 2) to a steady (but slow) 40MB/s on bot...
EstivalG
Mar 10, 2019 · Aspirant
Hi Hopchen
I was quite happy with a 70MB/s speed, but it seems the speed is very dependent on:
_The computer. My previous desktop topped out at 50MB/s.
_The switch. With a less-than-two-year-old gaming laptop and a new switch, I got 100MB/s with a big file.
I had never monitored the unit through top before, and I now see 70-80% system CPU usage (read or write), which is HUGE. I don't remember such high system usage, even in my earliest Linux days (in the 20th century! Oh my!)
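For anyone who wants to watch the same numbers, the CPU breakdown can be sampled without the interactive top UI. This is just a sketch using standard procps top flags; the ReadyNAS firmware may ship a slimmer BusyBox top with different options:

```shell
# One-shot sample of the CPU summary line (us/sy/ni/id/wa/si fields).
top -b -n 1 | grep -i 'cpu(s)'
# Five samples, two seconds apart, to watch sy and si during a transfer.
top -b -d 2 -n 5 | grep -i 'cpu(s)'
```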
Since the only option left is to rebuild a new array, I will try a JBOD setup first, as StephenB suggested, check the system usage, and see whether the transfer speed is better.
I will post the results here in a few weeks.
edit: Just tried a direct write:
dd bs=1M count=2048 if=/dev/zero of=test2
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 35.0169 s, 61.3 MB/s
No I/O wait (wa stays at 0.0%), but 90% system CPU usage. I think a JBOD setup will be better, but I suspect a RAID1 configuration will be quite slow; I will check whether disabling X-RAID helps.
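For completeness, a matching read test can be run on the same file. This is a sketch that assumes a root shell on the NAS, since flushing the page cache (so the read hits the disks rather than RAM) needs root:

```shell
# Flush dirty data and drop the clean page cache so the 2GB test file
# written above is re-read from disk, not from RAM.
sync
echo 3 > /proc/sys/vm/drop_caches
# Time reading the file back; dd prints the average throughput at the end.
dd bs=1M if=test2 of=/dev/null
rm -f test2
```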
Thank you everyone ;)
StephenB
Mar 11, 2019 · Guru - Experienced User
EstivalG wrote:
dd bs=1M count=2048 if=/dev/zero of=test2
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 35.0169 s, 61.3 MB/s
What disks are you using?
- EstivalG · Mar 16, 2019 · Aspirant
Hello everyone, and many thanks for the replies.
StephenB I now use two IronWolf 8TB drives (ST8000VN0022), which are on the RN102 compatibility list.
I use the unit solely as dedicated storage, with no apps or anything else; I didn't know there were tweaks available. So, before destroying the array, I did some tests by tweaking the SMB daemon with SMBPlus, but it didn't go well: still 35-40MB/s.
Since the old 3TB was still readable and all my data was safe, I destroyed the array and recreated a new one with only one disk, in JBOD mode.
The dd test result was quite different:
dd bs=1M count=2048 if=/dev/zero of=test2
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 17.3837 s, 124 MB/s
top still shows 90% system CPU usage, but that is now a normal value for a 7200 RPM hard drive.
I then tried to write a 36GB file, and the speed is back to 70MB/s, which is really great.
I tried to write the same 36GB file from a Linux computer through NFS, and as Retired_Member said, this is bad: it spawns a lot of nfsd daemons, and the single-core ARM CPU can't cope; the speed drops below 40MB/s.
So I tried a CIFS mount on Linux and got a not-so-bad 58-60MB/s.
Then I tried to read that file back on a Windows computer, copying to an SSD: 52-55MB/s. Ouch.
Since I have two switches and knew one of them was not very efficient, I hooked the Linux computer up to the second switch to see the results: it's way better.
On Linux, on switch 2, reading from the NAS and writing to an SSD, I got 70-80MB/s, with the NAS at 20-40% idle. Wow. The Linux computer is a P5B Deluxe (yay, 10 years old!), so the NICs may be a little old. And slow.
But when I tried the Windows 10 computer hooked up to switch 2, I still got 52-55MB/s with 70% system CPU usage. I checked SMBPlus, and the setup was 2.0 minimum and 2.1 maximum, as I had configured it before destroying the array. On Linux, the mount is set to version 2.1.
There is something odd with the Windows 10 read performance, but...
Since the Linux computer has a Windows 10 system as well, I rebooted it into Windows 10 and tried to read the 36GB file: I got the EXACT same speed, with 70% system CPU usage. It looks like the way Windows 10 reads files from the NAS puts a lot of system stress on it, and I don't understand why.
But I need a RAID 1 NAS as soon as possible, so, having sorted the speeds out, I now have some data to compare between JBOD mode and RAID 1 mode; let's try it! I will dig into the Windows 10 read performance later.
I created a new RAID 1 array; it is resyncing right now and may take a while, since mdstat reports a 100MB/s speed, which works out to about 22 hours of syncing at the current rate. But the speed usually plummets near the end of the disk.
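The 22-hour figure is just the array capacity divided by the reported resync rate; a quick sanity check of the arithmetic, using the 8TB capacity and 100MB/s numbers from above:

```shell
# 8 TB (decimal bytes) mirrored at ~100 MB/s:
# 8*10^12 / (100*10^6) = 80000 s, i.e. roughly 22 hours.
awk 'BEGIN { printf "%.1f hours\n", 8*10^12 / (100*10^6) / 3600 }'
# prints: 22.2 hours
```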
As soon as the RAID 1 Array is ready, I will try it and post the performance results here.
Thanks again for the help!
- StephenB · Mar 17, 2019 · Guru - Experienced User
Thx for updating us, and please do let us know the speed when the sync finishes.
Are you running 6.9.5 firmware?
- EstivalG · Mar 23, 2019 · Aspirant
So, this is now the real deal:
_OS 6.9.5
_RAID 1
_Seagate IronWolf 8 TB, ST8000VN0022 (both)
_X-RAID is OFF
_Quota and Antivirus are OFF
Tests on Linux
Although I had planned to disable the NFS server, I found out that you can set the number of nfsd forks. On a single-core ARM CPU, the default number of forks (8) seems utterly excessive, so I changed this value to 1.
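On a Debian-based firmware like ReadyNAS OS, the "fork" count is the nfsd thread count. A sketch of both the runtime and the persistent way to change it; the config path is the standard Debian one and may differ on this firmware:

```shell
# Change the number of nfsd threads at runtime (root required).
rpc.nfsd 1
# Make it persistent via the stock Debian config file, if present.
sed -i 's/^RPCNFSDCOUNT=.*/RPCNFSDCOUNT=1/' /etc/default/nfs-kernel-server
# The first field of the "th" line is the current thread count.
grep ^th /proc/net/rpc/nfsd
```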
Unfortunately, write speed to the NAS didn't even reach 40MB/s (at the start, the speed is 100MB/s thanks to the Linux hard drive and the Linux disk cache, so I assume the Linux hard drive is quick enough to go beyond 40MB/s). The average is 32-35MB/s.
Since there is no I/O wait, I tried 2 forks, which seemed slightly better (36-38MB/s), but I don't think more forks would improve things further.
However, reading from the NAS (2 forks) is fairly good: 62-66MB/s (writing to the Linux computer's SSD).
In all cases, top shows high sys and si CPU usage (from 60/40 to 70/30) with no wa (I/O wait).
Next, CIFS version 2.1. I kept the NAS's maximum CIFS version at 2.1 to avoid encryption overhead on the NAS CPU. The mount on Linux is pinned to that version as well:
mount.cifs //192.168.1.250/Video /media/target/ -o user=foobar,vers=2.1
Writing to the NAS shows a speed from 60 to 70MB/s, with an average of 62-64MB/s. Quite good!
Reading from the NAS shows a speed from 70 to 80MB/s, with an average of 74-76MB/s. Excellent! What's more, the NAS is not at full CPU, showing 10-20% idle, so I think it may provide an even better read speed with a faster Linux computer.
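If anyone wants to double-check which dialect the Linux client actually negotiated, the vers= option shows up in the mount table; a minimal sketch:

```shell
# cifs mounts list their effective options, including vers=, here:
grep cifs /proc/mounts
# mount(8) can filter by filesystem type as well:
mount -t cifs
```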
Tests on Windows 10
Writing to the NAS shows a speed from 60 to 70MB/s, with an average of 62-64MB/s. Quite good!
Reading from the NAS shows a speed from 52 to 55MB/s, with an average of 54MB/s.
I'm quite surprised by this. CIFS write speeds are the same on both OSes, and top shows similar CPU usage, but read speeds are really different, and so is the CPU usage. When Windows 10 reads a file, there is no CPU idle (my old Linux computer leaves some idle), and sys and si usage are very high (usually 70/30).
I tried different Windows computers (a 2013 desktop and a 2017 gaming laptop), with the same Ethernet cable and switch, and the speed seems locked at 54-55MB/s on Windows 10. I no longer have any Windows 7 computer.
I checked the NAS's SMBPlus settings: encryption is OFF and the maximum version is 2.1.
Since Linux CIFS write is quite good, I will now start the backup restore from the old 3TB hard drive.
But if you have any clue as to why the Windows 10 read speed is lower than the Linux read speed, feel free to post :D
Thanks in advance!