Forum Discussion
dstsui
Apr 28, 2021 · Aspirant
RN102 broken after OS 6.10.4 update
My RN102 appears to be broken after updating to 6.10.4. Not sure which came first - I updated the OS to 6.10.4 and noticed very slow transfer speeds when I copied hundreds of JPEG images from the SD car...
mdgm
May 02, 2021 · Virtuoso
Does your system show how much space is consumed by snapshots? If so, you have quotas enabled on your data volume, and disabling that may help.
By default, monthly snapshots are kept indefinitely unless free space on the data volume runs low.
A custom snapshot schedule that keeps only a more limited number of snapshots would likely lead to better performance too, as StephenB mentioned.
Note that if you delete a snapshot, all newer snapshots are recursively updated. This can take a while to complete, so performance would get worse before it improves.
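If you're comfortable with ssh, you can get a rough idea of where the space is going first. A minimal sketch - it assumes the data volume is mounted at /data (the usual location on OS 6) and that quotas are on:
  btrfs qgroup show /data        # per-subvolume space usage (only meaningful with quotas enabled)
  btrfs subvolume list -s /data  # list just the snapshot subvolumes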
StephenB
May 03, 2021 · Guru - Experienced User
mdgm wrote:
By default, monthly snapshots are kept indefinitely unless free space on the data volume runs low.
A custom snapshot schedule that keeps only a more limited number of snapshots would likely lead to better performance too, as StephenB mentioned.
Note that if you delete a snapshot, all newer snapshots are recursively updated. This can take a while to complete, so performance would get worse before it improves.
Also, your error is saying that one particular snapshot is somehow damaged and can't be read. We don't know how old that is - and you might not be able to delete it. So there are multiple issues here - the bad disk was one, but the error with the snapshot suggests some file system corruption.
It's possible that the poor performance will continue after you get rid of the ancient snapshots - but either way, you need to address the file system corruption.
Overall, the most direct way is to do the factory default, set up the NAS again, and restore the files from backup. Though painful, that is guaranteed to clean up everything.
But if you really can't do that, then look for more file system errors after you delete the old snapshots, and look again after the scrub. If they are gone, follow up with a balance (also from the volume settings wheel). Then measure performance again.
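If you have ssh access, you can also check for btrfs errors directly. A rough sketch, assuming the data volume is mounted at /data:
  btrfs device stats /data   # cumulative error counters for each disk in the volume
  dmesg | grep -i btrfs      # recent kernel messages mentioning btrfs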
- dstsui · May 03, 2021 · Aspirant
I could delete all Shares without issues. I started a disk scrub last night, and when I checked the status this morning it had only progressed 2.5%; if the rate is constant, the scrub won't finish for another 13 days. Is it meant to be that slow?
- StephenB · May 04, 2021 · Guru - Experienced User
dstsui wrote:
I started a disk scrub last night, and when I checked the status this morning it had only progressed 2.5%; if the rate is constant, the scrub won't finish for another 13 days. Is it meant to be that slow?
The rate might not be constant, but it is slower than I'd expect.
I did a scrub on my RN102 (which has a 2x1TB volume) about a month ago, and it took about 5 1/2 hours. So I'd expect about 11-12 hours for 2x2TB.
The scrub is doing two different operations in parallel - one is a RAID scrub (which is the same as a resync), the other is a BTRFS scrub. The BTRFS scrub is what we're aiming for here - it might take longer than usual because you've never run one.
In any event, I'd wait for it to complete. You could download a fresh set of logs, and see if there are any new disk errors (or btrfs errors).
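If you'd rather watch the progress than wait blind, both halves of the scrub can be monitored over ssh (again assuming the volume is mounted at /data):
  cat /proc/mdstat           # RAID resync progress on the md devices
  btrfs scrub status /data   # btrfs scrub progress and error count so far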
dstsui wrote:
I could delete all Shares without issues.
Do you mean Snapshots, or did you delete the Shares?
- dstsui · May 04, 2021 · Aspirant
The scrub finished at 1:11 this morning; it took 25 hrs to complete. Snapper failures still appear in the system log after the rebuild, repeating every 43 mins. Is it possible to nail down the exact cause of this problem? No anomalies are observed in the kernel log. I observed something odd in the reported consumed size of the Transmission share but did not report it previously. I wonder if this is an OS bug, or whether it is somehow related to the Snapper loading failure. All snapshots were deleted before the scrub began.
Ran NASTester again; the speeds are now more reasonable: 54MB/s write and 36MB/s read.
May 05 01:10:57 nas-36-1C-14 mdadm[1926]: RebuildFinished event detected on md device /dev/md127, component device resync
May 05 01:17:01 nas-36-1C-14 CRON[24684]: pam_unix(cron:session): session opened for user root by (uid=0)
May 05 01:17:01 nas-36-1C-14 CRON[24685]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 05 01:17:01 nas-36-1C-14 CRON[24684]: pam_unix(cron:session): session closed for user root
May 05 01:17:18 nas-36-1C-14 dbus[1921]: [system] Activating service name='org.opensuse.Snapper' (using servicehelper)
May 05 01:17:18 nas-36-1C-14 dbus[1921]: [system] Successfully activated service 'org.opensuse.Snapper'
May 05 01:17:19 nas-36-1C-14 snapperd[24713]: loading 2227 failed
May 05 01:17:19 nas-36-1C-14 snapperd[24713]: loading 2227 failed
May 05 01:17:19 nas-36-1C-14 snapperd[24713]: loading 2227 failed
May 05 01:17:19 nas-36-1C-14 snapperd[24713]: loading 2227 failed
May 05 01:17:19 nas-36-1C-14 snapperd[24713]: loading 2227 failed
May 05 01:20:14 nas-36-1C-14 connmand[1922]: ntp: adjust (slew): -0.000153 sec
May 05 01:37:19 nas-36-1C-14 connmand[1922]: ntp: adjust (slew): -0.000953 sec
May 05 01:54:24 nas-36-1C-14 connmand[1922]: ntp: adjust (slew): -0.000544 sec
May 05 02:00:02 nas-36-1C-14 dbus[1921]: [system] Activating service name='org.opensuse.Snapper' (using servicehelper)
May 05 02:00:02 nas-36-1C-14 dbus[1921]: [system] Successfully activated service 'org.opensuse.Snapper'
May 05 02:00:02 nas-36-1C-14 snapperd[26740]: loading 2227 failed
May 05 02:00:02 nas-36-1C-14 snapperd[26740]: loading 2227 failed
May 05 02:00:02 nas-36-1C-14 snapperd[26740]: loading 2227 failed
May 05 02:00:02 nas-36-1C-14 snapperd[26740]: loading 2227 failed
May 05 02:00:03 nas-36-1C-14 snapperd[26740]: loading 2227 failed
NAS performance tester 1.7 http://www.808.dk/?nastester
Running warmup...
Running a 400MB file write on R: 5 times...
Iteration 1: 56.72 MB/sec
Iteration 2: 57.52 MB/sec
Iteration 3: 54.34 MB/sec
Iteration 4: 56.85 MB/sec
Iteration 5: 46.13 MB/sec
-----------------------------
Average (W): 54.31 MB/sec
-----------------------------
Running a 400MB file read on R: 5 times...
Iteration 1: 36.15 MB/sec
Iteration 2: 35.40 MB/sec
Iteration 3: 36.30 MB/sec
Iteration 4: 36.99 MB/sec
Iteration 5: 36.05 MB/sec
-----------------------------
Average (R): 36.18 MB/sec
-----------------------------
- dstsui · May 04, 2021 · Aspirant
I was never able to get an uploaded image to appear in a post, regardless of the file format. What did I do wrong?
- StephenB · May 05, 2021 · Guru - Experienced User
dstsui wrote:
I observed something odd in the reported consumed size of the Transmission share but did not report it previously.
Disable volume quota (on the volume settings wheel), and then re-enable it. That should resolve it.
dstsui wrote:
Is it possible to nail down the exact cause of this problem?
The problem is that there is still at least one snapshot in the system, and it is corrupted and cannot be loaded. The simplest way to resolve it is to do a factory default and rebuild the file system (as I've already recommended). One concern is that there could be other file system issues lurking under the surface that might not be so visible.
The subvolumes could be listed via ssh, and you could try manually deleting them. But if they are corrupted, the deletion will probably fail.
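Roughly along these lines - note the snapshot path below is made up, and the actual layout on your volume will differ:
  btrfs subvolume list /data                            # snapshots show up as subvolumes
  btrfs subvolume delete /data/Pictures/.snapshot/123   # made-up path to a damaged snapshot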
dstsui wrote:
I was never able to get an uploaded image appear in a post, regardless of the file format.
All embedded images are manually reviewed and approved before they are shown. That can take a while.
- dstsui · May 05, 2021 · Aspirant
I am wondering what the best practice is for setting up backups - specifically, how to organise the folders. I attached a 2TB 2.5" HDD to one of the bottom USB ports at the back. The OS mapped it as a media share and created a folder called USB_HDD_1.
When I set up backup jobs for the first time, I created a separate job for each share, selected a share in the data volume as the source, and selected USB Back Bottom under USB/eSATA as the destination. When the backup job was run, all the sub-folders in the share were dumped at the root of the backup drive; the top-level share folder was not copied across. The same happened with all the other shares, so there was no hierarchy to speak of in the backup.
Not happy with the results, I then created separate folders on the backup drive mirroring the names of the shares in the data volume, and modified the paths in each job accordingly, assuming the files would be copied to the specified path. When the backup was run again, the result was the same: the sub-folders were copied to the root of the backup drive and not to the specified folder. It was only when I specified the media share USB_HDD_1 as the destination, with the appropriate path, that the share folders were copied to the correct folder on the backup drive.
Is there a right way and a wrong way to set up backups? Is there any problem backing up to a USB HDD mapped as a share? If I took out the USB HDD and connected it to a Windows PC, could the PC read the backup files?
- StephenB · May 05, 2021 · Guru - Experienced User
One question for you: what behavior do you want/expect when a file is renamed or deleted?
Your current settings will keep deleted files on the backup drive - and if you rename a file, both the original file and the renamed one will be on the backup drive. There is a trick you can use to get an exact mirror of the share if you want that.
The tradeoff here is that keeping deleted files lets you recover from user error. But it also makes it harder to restore the backup if you need to reload everything onto the NAS (as you'll need to manually remove the files you no longer want).
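For reference, outside the backup UI the generic way to get an exact mirror over ssh is rsync with --delete (not necessarily the same trick I mean above - just a sketch, with made-up paths):
  rsync -a --delete /data/Pictures/ /media/USB_HDD_1/Pictures/   # --delete removes files from the backup that no longer exist on the source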
dstsui wrote:
If I took out the USB HDD and connected it to a Windows PC, could the PC read the backup files?
If the USB HDD is formatted as NTFS, then the PC can read it. If you aren't sure of the formatting, eject it (using the web UI), connect it to the PC, and check.
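You can also check the format over ssh without ejecting it - a quick sketch:
  lsblk -f   # lists attached drives along with their filesystem type (ntfs, ext4, btrfs, ...)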
dstsui wrote:
Is there any problem backing up to a USB HDD mapped as a share?
That should be fine.
dstsui wrote:
It was only when I specified the media share USB_HDD_1 as the destination, with the appropriate path, that the share folders were copied to the correct folder on the backup drive.
Which is the right way to do it.
dstsui wrote:
When I set up backup jobs for the first time, I created a separate job for each share
Some people prefer to use one backup job for the entire data volume. Personally I prefer having one backup job per share.
One reason I like multiple backup jobs is that it makes it a bit easier to handle the case where the full backup won't fit on one USB drive. Also, you might have some shares that don't need to be backed up (for instance, one used for Transmission).