Forum Discussion
Blues11
May 09, 2018 · Luminary
MacOS directories incomplete
My network is small with only Macs connected. It accesses the ReadyNAS via SMB. One directory has about 16K files and/or folders. Often when I open this directory only some of the folders show up in ...
Blues11
Aug 03, 2018 · Luminary
Follow-up:
Splitting the large directory into two created a difficult situation when trying to search for files/folders so it was not a workable solution.
I decided that creating a new share and copying the files and folders from the large directory might be functional. I've used the Netgear interface to start copying the material in the large directory to the new share, but I've run into a major problem (and a smaller one).
The big problem is that after copying, the folders all have today's date as the "Date Modified", not the true date modified from the original directory. How can I copy the folders to the new share without losing the "Date Modified"?
The smaller problem is that the tags (MacOS features) are not being copied to the new location. Would you know how I can make sure those tags are copied also?
Perhaps there's a better way to copy than just highlighting folders in the old location, selecting "Copy" and then "Pasting" them into the new share. Is there?
Thank you in advance for any help.
Retired_Member
Aug 03, 2018
How about creating a backup of the old share, e.g. to an external source, and restoring that to the new share?
- Blues11 · Aug 03, 2018 · Luminary
Thanks. That's a good suggestion in that it should accomplish exactly what I need. But with the USB port on the ReadyNAS being USB 2, I think it would take days to simply copy the 2.7TB to an external USB drive and then days to copy it back. (And do you think the NAS OS will preserve the MacOS tags?)
I was hoping to have a function on the NAS that would leave the data in place and just have the indexes for the files point to the new share. But my knowledge of the system is lacking in so many ways.
Again, thank you.
- Retired_Member · Aug 04, 2018
I would give it a try with 100GB of your data to
1) project the speed of backing up the complete set, and
2) see whether the properties of your files come out as expected.
To get a solid basis for your final judgement, I would choose a folder holding a good mix of file sizes (small, medium, large). Good luck.
- StephenB · Aug 04, 2018 · Guru - Experienced User
Blues11 wrote:
I was hoping to have a function on the NAS that would leave the data in place and just have the indexes for the files point to the new share.
Unfortunately you need to use the linux command line interface (logging in with ssh) for that. I explained it earlier in this thread.
The process is to recursively copy all the files to the new share with cp -R --reflink. The metadata in the new share points to the original data blocks, so this command completes quite quickly. Then you can delete the files in the old share (also using ssh). Thanks to the CoW feature of btrfs, the data blocks won't be deleted, only the metadata in the original share.
Although you probably shouldn't do this unless you already have some knowledge of the linux CLI, this approach is the fastest way to move a lot of files between shares. Since the data blocks are never copied, it also doesn't change the free space on the NAS.
Just to be clear here - if you use this method, but don't delete the files in the original share, then initially the files will match. But if you modify a file in one of the shares, the change won't propagate to the other. It would still have the earlier version. Free space will also go down (since you now have both versions of the file in the system).
Similarly, if you delete a file in the original share, it wouldn't be deleted in the new one.
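As a minimal sketch of the sequence described above, run in a scratch directory (the /tmp paths and file names are illustrative; on the NAS you would use the real share paths under /data). Note that `--reflink=auto` falls back to an ordinary copy on filesystems without CoW; on the NAS's btrfs, plain `--reflink` forces block sharing.

```shell
# Scratch demo of the reflink copy-then-delete sequence (illustrative paths).
mkdir -p /tmp/reflink-demo/old-share /tmp/reflink-demo/new-share
echo "original contents" > /tmp/reflink-demo/old-share/file.txt

# Recursive copy preserving attributes; --reflink=auto shares data blocks
# where the filesystem supports CoW (as btrfs on the NAS does).
cp -Rp --reflink=auto /tmp/reflink-demo/old-share/. /tmp/reflink-demo/new-share/

# Deleting the originals leaves the copies intact: only the old metadata
# goes away, the shared data blocks survive.
rm /tmp/reflink-demo/old-share/file.txt
cat /tmp/reflink-demo/new-share/file.txt
```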
Blues11 wrote:
(And do you think the NAS OS will preserve the MacOS tags?)
I don't use MacOS. But with NTFS, SAMBA stores the extended attributes in hidden files. A backup job should copy the hidden files (and with drag/drop, SAMBA should copy the hidden attribute file along with the visible file). I agree with Retired_Member that you should test it. You can easily create a test share with one or two files.
Blues11 wrote:
But with the USB port on the ReadyNAS being USB 2, I think it would take days to simply copy the 2.7TB to an external USB drive and then days to copy it back.
First of all, the back USB ports on your NAS are USB 3. Only the front port is USB 2.
But if you have enough free space, there is no reason to back up your share to a USB drive and copy it back to the new one. You can set up the backup job to copy the old share directly to the new one.
There is a trick here also - if you want to use rsync (which has some useful advanced options), you can set up the source share as "remote", and use 127.0.0.1 as the IP address. The convention here is that 127.0.0.1 is always the local machine.
- Blues11 · Aug 04, 2018 · Luminary
It's taken me some time to read through your detailed response.
I'm not unfamiliar with the unix command line although it's been quite some time since I've done almost anything substantial with it. I read up on the array of attributes on both the rsync command and the cp command. I practiced using the cp command with the -pR options from my Mac and they copied a group of directories beginning with the same letter from the large directory to the new share retaining the correct "created" and "modified" dates/times.
So, I feel confident enough that I'd like to try it on the server. I ssh-ed to the server years ago, but I don't recall how to do it now.
Also, the --reflink didn't work from my Mac (as I suspected) so I assume that that option can only work when I've ssh-ed into the server.
Would you know how I access the server via ssh?
Thank you so much for your patience and perseverance with your welcome assistance.
- StephenB · Aug 04, 2018 · Guru - Experienced User
Blues11 wrote:
Also, the --reflink didn't work from my Mac (as I suspected) so I assume that that option can only work when I've ssh-ed into the server.
Correct. That option is specific to BTRFS.
Blues11 wrote:
Would you know how I access the server via ssh?
You enable it on the System -> Settings -> Services page on the NAS. This uses the first option here: https://kb.netgear.com/30068/ReadyNAS-OS-6-SSH-access-support-and-configuration-guides
Note the warning there on support implications. What you're doing should be fine (unless, of course, a typo has bad consequences).
After it's enabled, you access the linux shell using terminal (since you are a Mac user). Use root for the username, with the NAS admin password.
- Blues11 · Aug 05, 2018 · Luminary
Just to follow up:
I was successful at SSHing into the NAS. I tried a number of different file copy techniques including cp, rsync and rcp. They all worked to some degree.
But the directory "modified" and "created" dates never copied correctly: either both were set to the current date/time, or both were set to the original date/time.
To experiment I copied directories from one Mac to another and both dates/times copied perfectly.
I'm thinking that it's an incompatibility between the Apple file system and that of the ReadyNAS.
Again, thank you for all the help.
- Retired_Member · Aug 06, 2018
Sorry to hear that.
However, did you try the simple backup/restore approach through the UI already?
- Blues11 · Aug 06, 2018 · Luminary
I did try the cp -rp --reflink command and that worked properly: the modified and created dates were the same as the original location. However -- and this was a show-stopper however -- when I, as a test, deleted one of the files in the "copy from" directory, it was deleted from the "copy to" directory.
I don't expect anyone to remember that I started this thread because the 2.7TB directory was often incomplete when I accessed it via the macOS Finder. For example, it would display a directory as having perhaps 503 items (mostly directories). Then, anywhere from 15 seconds to 15 minutes later, the listing in Finder would magically update itself and show the actual 1402 directories.
So the cp -rp --reflink worked correctly: the new share now lists all the files and folders. But I don't know how to delete the original directory without deleting all 2.7TB of data from the server.
Is there another command that needs to be executed following the cp -rp --reflink command to remove the old directory without removing all the data from the new share?
- Retired_Member · Aug 07, 2018
In my humble opinion, with the chosen approach you just copied links to the original data into the new share, which would also explain why it completed so quickly.
Reading through StephenB's instructions, he states that CoW (copy-on-write) does the magic trick of turning the links into true objects in your new share. It's possible that CoW is disabled on your NAS, though, which might explain the remaining issues.
StephenB might want to clarify this. I would love to understand.
Furthermore, I still think that trying a plain backup/restore through the graphical user interface would shed some more light on the whole thing. You could use a small subset to test. If that turns out not to work either, in my opinion it would put some urgency on NETGEAR to verify backup/restore as a reliable tool for users in situations like yours.
- StephenB · Aug 07, 2018 · Guru - Experienced User
Retired_Member wrote:
To my humble opinion with the choosen approach you just copied links to the original data
That is not correct.
As I'm sure you know, one of the more powerful features in the ReadyNAS is the snapshot feature. When you first create a snapshot of a share, it takes no space. That snapshot doesn't hold links (which would be created with the ln command). Instead the metadata (e.g., the directories) in the snapshot and the main share both refer to the exact same data on the disk.
If you then update a file in the main share, the snapshot continues to refer to the original file, but the main share refers to the revised file. This aspect is at the heart of CoW, and links created with ln just don't work that way. If the snapshot held links, then if you updated a file in the main share, a link in the snapshot would get you the revised file, not the older one. If you renamed a file or deleted it, the link would be broken (not referring to anything).
CoW means "Copy-on-Write". The essence of the idea is that when you write to the disk, BTRFS will make a copy (preserving the original data if another folder is pointing to it). Note this is done block-by-block, not file-by-file. So if you update one block in a gigabyte file, only that one block is copied. The remainder of the blocks continue to be shared.
Snapshots use CoW in a very structured way, but BTRFS itself doesn't require that structure. When you use --reflink, you are using the same underlying feature that the snapshots use, and it functions the same way. The copy completes quickly (at the speed of a move), just like creating a snapshot does. And like a snapshot, the copy isn't affected if you update, rename, or delete the file in the original share. Instead, the two versions go their separate ways (no longer sharing the same data blocks). When you update a file, free space goes down, of course, since splitting off the new version (preserving the old one) requires disk space.
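The divergence-on-write behavior can be demonstrated in a scratch directory (illustrative /tmp paths; `--reflink=auto` falls back to a plain copy outside btrfs, while on the NAS the blocks are actually shared):

```shell
# Scratch demo that a reflink copy diverges on write rather than
# tracking the original, as a link would.
mkdir -p /tmp/cow-demo
echo "version 1" > /tmp/cow-demo/original.txt
cp -p --reflink=auto /tmp/cow-demo/original.txt /tmp/cow-demo/copy.txt

# Rewrite the original; CoW splits the shared blocks on write...
echo "version 2" > /tmp/cow-demo/original.txt

# ...so the copy still holds the old contents.
cat /tmp/cow-demo/copy.txt
```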
I hope this helps - if not, perhaps we should continue on a different thread.
- StephenB · Aug 07, 2018 · Guru - Experienced User
Blues11 wrote:
So, the cp -rp --reflink worked correctly: I now have the new share listing all the files and folders in the new share, but I don't know how to delete the original directory without deleting all the 2.7TB of data from the server.
You can just delete it in the original folder, and the copy in the new share will still be there. The more complete explanation on why that's true is above.
Perhaps try it on just one file, so you'll see for yourself.
- Retired_Member · Aug 07, 2018
Thank you very much for your detailed information StephenB.
However, in message 22 in this thread Blues11 states: "However -- and this was a show-stopper however -- when I, as a test, deleted one of the files in the "copy from" directory, it was deleted from the "copy to" directory."
To my understanding, that contradicts your conclusion and recommendation, because (s)he already tried what you recommended in your last post and it unfortunately failed.
- StephenB · Aug 07, 2018 · Guru - Experienced User
Retired_Member wrote:
Thank you very much for your detailed information StephenB.
However in message 22 in this thread Blues11 is stating "However -- and this was a show-stopper however -- when I, as a test, deleted one of the files in the "copy from" directory, it was deleted from the "copy to" directory."
I missed that bit. However, that is not what happens on my NAS. Here's a sample:
root@NAS:/data/Test/folder-1# ls /data/Dummy
test.png
root@NAS:/data/Test/folder-1# cp -rp --reflink /data/Dummy/test.png .
root@NAS:/data/Test/folder-1# ls
test.png
root@NAS:/data/Test/folder-1# rm /data/Dummy/test.png
root@NAS:/data/Test/folder-1# ls
test.png
root@NAS:/data/Test/folder-1#
Dummy and Test are different shares. You can see that deleting the original file didn't delete the copy I made with --reflink.
In my test, Dummy has bit-rot protection turned off, Test has it turned on. But that shouldn't matter for this. I get the same results if I reverse the from and to folders.
So I don't get why Blues11 got a different result. I do think he should retest it, in order to verify the behavior.
- Blues11 · Aug 08, 2018 · Luminary
My apologies for not replying for a couple of days. The bottom line is that I got to the point where I couldn't spend much more time on this issue -- at this time, at least. I would eventually like to learn more about these details, but I had work to do.
First, I tried to reproduce the issue where I had deleted a file (or the reference?), and I found that I must have deleted the item in the wrong location. This means that I screwed up. To the poster who was befuddled by this, my apologies. I hope you didn't spend too much time on it. It was me being sloppy.
Eventually I was able to create a new share, with just the 2.7TB of directories and files in it. Mostly the create dates are correct. I had to manually set the Apple "tags" on every directory that had one and understandably that meant that all of those directories had current modified dates.
So far when I've opened the new share it seems that I only had the delay of the directory listing one time and that was soon after it was created. This means that my original reason for this thread may now be resolved.
If I find that I still have the problem, I will post again.
Thanks to all who contributed your thoughts and efforts. They helped to educate me and (fingers crossed) to resolve my issue.
- Blues11 · Aug 08, 2018 · Luminary
Sorry I forgot to do this earlier.
A heartfelt thank you for all your help!
Blues11