chadwixk
Dec 31, 2022 · Aspirant
Copy Speed Test Results to USB 3.0 Drive w/ SSD - why so slow?
NAS: RN102. I need to copy off about 3 TB of data... not a ton, I know, but I was curious to test various copy methods to see which was fastest, and to learn a little more in the process. SOURC...
StephenB
Dec 31, 2022 · Guru - Experienced User
The slow CPU and limited memory in the RN102 likely is also a factor.
chadwixk wrote:
SOURCE:
The NAS disks are WD Red 5,400 rpm drives, capable of around 110 MB/s reads each. This is a RAID 1 of 2 disks, so assuming it can read in parallel from each disk, it should have the potential for double that, right?
Theoretically, yes. But from what I've read (posts on other forums), a single file is only read from one disk. A second file copy should be read from the second disk if it is done at the same time. But this is not something I've tested.
chadwixk wrote:
But why, since it is a direct link over USB and sourced from 2 disks in RAID 1, was the speed not near the theoretical 220 MB/s (reading from 2 disks in parallel at 110 MB/s each)? I tried copying multiple files to try to take advantage of this, but I guess these copy commands are sequential in nature... I'll have to look into how to actually copy 2 files in parallel to test this aspect further.
Use two ssh sessions to test this.
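A minimal sketch of that test (the file names are illustrative; you can equally run one cp in each of two ssh sessions instead of backgrounding both):
cp "/data/Videos/2022/12/fileA.mkv" "/media/USB_HDD_7/Videos/2022/12/" &
cp "/data/Videos/2022/12/fileB.mkv" "/media/USB_HDD_7/Videos/2022/12/" &
wait
# meanwhile, in another session, watch per-disk throughput on the mirror members:
iostat -mx 2
If the RAID 1 driver really serves each stream from a different member, iostat should show both HDDs reading at once.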
chadwixk wrote:
DESTINATION:
A USB NVMe drive on the rear USB 3.0 port. It is a Samsung 970 Evo SSD with write speeds of around 2,500 MB/s.
But USB 3.0 tops out at 5 Gbit/s (roughly 500 MB/s in practice), so it would be the write bottleneck. Or possibly the NVMe adapter you used?
Did you measure the read and write speeds on the RN102 using dd? Might be worth doing (for both source and destination).
FWIW, the NVMe formatting might also be relevant here. NTFS is likely slower than ext.
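A rough way to take those dd measurements (paths are illustrative; bs=1M avoids dd's tiny 512-byte default):
# read speed of the array (discard the output):
dd if=/data/Videos/2022/12/2022-12-20-063917.mkv of=/dev/null bs=1M
# write speed of the USB SSD (conv=fsync forces the data onto the drive before dd reports):
dd if=/dev/zero of=/media/USB_HDD_7/ddtest.bin bs=1M count=1024 conv=fsync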
chadwixk wrote:
Windows drag and drop... isn't this a copy over the network, to the client and then back? NAS > Windows client > NAS > USB port > NVMe SSD?
Yes. As is robocopy.
chadwixk wrote:
rsync -hvrPt --info=progress2 --ignore-existing "/data/Videos/2022/12/2022-12-20-063917.mkv" "/media/USB_HDD_7/Videos/2022/12/"
No one ever accused rsync of being fast. It also verifies, which will of course make quite a difference in transfer times. But might be worth the performance penalty.
If you add --checksum-choice="None", it should be somewhat faster.
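For example, the earlier command with the checksum disabled (same paths as above):
rsync -hvrPt --info=progress2 --ignore-existing --checksum-choice="None" "/data/Videos/2022/12/2022-12-20-063917.mkv" "/media/USB_HDD_7/Videos/2022/12/"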
chadwixk wrote:
TEST RESULTS:
ssh > cp: 18 MB/s
cp -r -u -v "/data/Videos/2022/12/2022-12-20-063917.mkv" "/media/USB_HDD_7/Videos/2022/12/"
This one is surprising. It might be worth copying to /dev/null to test the read speed (in addition to using dd to test read and write speeds of both devices).
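Something like this isolates the read side (the path is the one from your test):
time cp "/data/Videos/2022/12/2022-12-20-063917.mkv" /dev/null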
chadwixk
Dec 31, 2022Aspirant
Thank you StephenB for your time in responding and helping me learn and investigate this.
I see how to quote your entire reply, but I don't know how to break up the quote to insert my replies... curious how, if you don't mind... so bear with my formatting below as I try to mimic it. I could use the raw HTML editing, but that would take a while... I'm assuming there's an easier way to do what you did in your replies.
StephenB wrote:
The slow CPU and limited memory in the RN102 likely is also a factor.
I'm not a hardware or Linux guy (more a Microsoft web dev), so I'm not sure I'm interpreting these properly, but CPU and memory are not at 100%... or maybe they are effectively at 100%, limiting throughput?
StephenB wrote:
Theoretically, yes. But from what I've read (posts on other forums), a single file is only read from one disk. A second file copy should be read from the second disk if it is done at the same time. But this is not something I've tested.
Tried this with one session running Robocopy and one running cp; iostat still shows ~20 MB/s read and write (and in this view the CPU does seem to be pegged).
StephenB wrote:
But USB 3.0 tops out at 5 Gbit/s (roughly 500 MB/s in practice), so it would be the write bottleneck. Or possibly the NVMe adapter you used?
True, but we're waaaay below even that bottleneck. The NVMe adapter does well over 1,000 MB/s in CrystalDiskMark tests.
StephenB wrote:
Did you measure the read and write speeds on the RN102 using dd? Might be worth doing (for both source and destination).
Looked that up; I'll try it after my 2 current copy sessions end.
StephenB wrote:
FWIW, the NVMe formatting might also be relevant here. NTFS is likely slower than ext.
True, but I can't imagine that would cut the 110 MB/s read speed of the disks down to 1/5. It also doesn't account for the wide range of speeds seen in the test results.
StephenB wrote:
This one is surprising. It might be worth copying to /dev/null to test the read speed (in addition to using dd to test read and write speeds of both devices).
This was in response to:
ssh > cp: 18 MB/s
cp -r -u -v "/data/Videos/2022/12/2022-12-20-063917.mkv" "/media/USB_HDD_7/Videos/2022/12/"
Just curious why this one was surprising to you... because it was also over ssh and 2.5x the speed of rsync?
To me, the biggest surprise is that an over-the-network copy was several times faster than a direct device copy?! That makes zero sense... in fact, it's the opposite of any reasonable hypothesis.
chadwixk · Dec 31, 2022 · Aspirant
Actually, I missed the 12% si (software interrupt) utilization in the CPU summary of the top screenshot. So the CPU is pegged.
Another interesting thing I see in the iostat output is the %util of the read HDDs vs. the write SSD, and also the await on the SSD.
The HDDs show pretty low utilization, and their %util roughly corresponds to the ratio of actual to rated read speed... what would be holding this back? Is the CPU/memory simply unable to take full advantage of even these slow HDDs?
The SSD shows very high utilization even though the throughput is way below its capacity. I have no theories as to why that would be. Also note the awaits on it. No clues on my end.
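For reference, a way to watch those columns live, with their usual meanings:
iostat -mx 2
# %util: share of elapsed time the device had at least one request in flight;
#        a USB SSD near 100% at low MB/s suggests many small, synchronous writes.
# await: average time (ms) a request spends queued plus being serviced.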
StephenB · Dec 31, 2022 · Guru - Experienced User
chadwixk wrote:
I see how to quote your entire reply, but I don't know how to break up the quote to insert my replies... curious how, if you don't mind... so bear with my formatting below as I try to mimic it. I could use the raw HTML editing, but that would take a while... I'm assuming there's an easier way to do what you did in your replies.
You can delete stuff out of the quote, and you can quote more than once. There are some situations where the forum software misbehaves; when that happens, I work around the issue using the HTML mode in the expanded toolbar.
One thing that can't be quoted is inserted code (</> in the toolbar). Netgear changed the way that works a couple of months ago and introduced a bug. (Marc_V, JeraldM: it would be very nice if that was fixed!)
chadwixk wrote:
I'm not a hardware or Linux guy (more a Microsoft web dev), so I'm not sure I'm interpreting these properly, but CPU and memory are not at 100%... or maybe they are effectively at 100%, limiting throughput?
These numbers can be hard to interpret. In your case, the load average is quite high (over 4). That suggests that you are CPU bound.
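A quick way to check this yourself; the RN102's Armada 370 is a single-core ARM CPU, so a sustained load average above ~1 already means tasks are queueing for the CPU:
cat /proc/loadavg
# 4.12 3.87 3.50 2/140 1234   <- illustrative output; the first three fields
# are the 1-, 5-, and 15-minute load averages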
chadwixk wrote:
Thank you StephenB for your time in responding and helping me learn and investigate this.
This was in response to:
ssh > cp: 18 MB/s
cp -r -u -v "/data/Videos/2022/12/2022-12-20-063917.mkv" "/media/USB_HDD_7/Videos/2022/12/"
Just curious why this one was surprising to you... because it was also over ssh and 2.5x the speed of rsync?
Surprisingly slow.
FWIW, I don't have an NVMe drive comparable to yours. My RN102 is for testing only and has two 1 TB Seagate IronWolf drives in it. When I copy a ~1 GB file between two shares, I see:
root@RN102:/data/Pictures# rsync -P /data/Videos/test.avi /data/Pictures
test.avi      944,136,192 100%   20.34MB/s    0:00:44 (xfr#1, to-chk=0/1)
root@RN102:/data/Pictures#
and
root@RN102:/data/Pictures# rsync -P --checksum-choice="None" /data/Videos/test.avi /data/Pictures
test.avi      944,136,192 100%   31.15MB/s    0:00:28 (xfr#1, to-chk=0/1)
root@RN102:/data/Pictures#
So noticeably faster with --checksum-choice.
cp is faster:
root@RN102:/data/Pictures# time cp /data/Videos/test.avi /data/Pictures
real    0m12.461s
user    0m0.030s
sys     0m7.330s
root@RN102:/data/Pictures#
Doing the math on the total time (900 MiB / 12.5 s), I get about 70 MB/s for cp throughput.
Testing the read speed with cp, I see this
root@RN102:/data/Pictures# time cp /data/Videos/test.avi /dev/null
real    0m5.359s
user    0m0.020s
sys     0m3.400s
root@RN102:/data/Pictures#
Computed throughput here works out to about 160 MB/sec.
FWIW, dd read speed depends on the block size:
root@RN102:/data/Pictures# time dd if=/data/Pictures/test.avi of=/dev/null
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.5109 s, 69.2 MB/s
real    0m15.536s
user    0m2.120s
sys     0m11.180s
root@RN102:/data/Pictures# time dd if=/data/Pictures/test.avi of=/dev/null bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.7195 s, 188 MB/s
real    0m5.727s
user    0m0.040s
sys     0m3.920s
The first test uses a default block size (512 bytes), the second uses 1M.
dd write speed similarly depends on block size (and is much slower for the 512 byte size).
root@RN102:/data/Pictures# time dd if=/dev/zero of=/data/Pictures/test.avi bs=512 count=2097152
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 82.2124 s, 13.1 MB/s
real    1m22.276s
user    0m2.610s
sys     1m9.120s
root@RN102:/data/Pictures# time dd if=/dev/zero of=/data/Pictures/test.avi bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 9.21715 s, 116 MB/s
real    0m9.263s
user    0m0.030s
sys     0m5.700s
root@RN102:/data/Pictures#
I don't know what block size cp is using on the NAS, but I can approximately match the cp speed with a block size of 1M in dd. (Actually any block size over 128k gives me about 70 MB/s).
root@RN102:/data/Pictures# time dd if=/data/Videos/test.avi of=/data/Pictures/test.avi bs=1M
900+1 records in
900+1 records out
944136192 bytes (944 MB, 900 MiB) copied, 13.7353 s, 68.7 MB/s
real    0m13.744s
user    0m0.030s
sys     0m6.930s
root@RN102:/data/Pictures#
rsync appears to be using 128K (which is the maximum block size you can specify with -B), but it is still much slower than cp (or dd).
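Given the note above that block sizes in that range already give about 70 MB/s here, a dd run pinned to 128K blocks is a quick way to confirm that rsync's slowness is not just its block size (same test file as above):
time dd if=/data/Videos/test.avi of=/data/Pictures/test.avi bs=128K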
Sandshark · Jan 01, 2023 · Sensei - Experienced User
Your NAS is also quite limited in RAM, so Linux cannot assign as many read/write buffers as it might want to. I'm not sure if it's a big contributor, but it's surely a factor.
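A way to see how tight memory is while a copy runs (the RN102 ships with 512 MB of RAM; the exact figures will vary):
free -m
grep -E 'Dirty|Writeback' /proc/meminfo
# With little free RAM, the kernel's dirty-page window is small, so large
# sequential copies flush to disk in short bursts instead of streaming.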