gpaolo
Jan 13, 2020 · Luminary
Disk replacement - back in sync, then degraded again after reboot
Hi all, I am having some trouble with my RN524. I have 2x1TB and 2x4TB disks, each pair in RAID1 on my NAS. A few days ago, while I was away (because it always happens while I am away...) one of t...
- Jan 26, 2020
I have finally returned home and swapped the drives... and it works fine.
Oh well, problem solved, next time use more brain...
Thank you everyone, sorry for the mistake!
gpaolo
Jan 13, 2020 · Luminary
I have the feeling that something got stuck in the OS. I have tried to format the disk, and nothing changed. I have also tried to set it as a global spare, but nothing changed either. I'm not sure what I can do remotely once I leave; I'm quite nervous about leaving the NAS for two more weeks without redundancy... I have the backup, but still...
- StephenB · Jan 13, 2020 · Guru - Experienced User
gpaolo wrote:
I have also tried to set it as a global spare
Why? FWIW, you can't do that if the disk is already part of a volume.
gpaolo wrote:
I'm quite nervous about leaving the NAS for two more weeks without redundancy... I have the backup, but still...
I suggest downloading the log zip file. system.log and kernel.log will normally contain any disk errors or btrfs file system errors, so look in those files. Also look at the SMART stats in disk_info.log - particularly for reallocated or pending sectors.
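As a minimal sketch (assuming the log zip has been extracted to a folder on a PC; the file names are the usual ones from the ReadyNAS log bundle mentioned above, and the path is hypothetical):

# From inside the extracted log folder:
# search the system and kernel logs for disk or btrfs errors.
grep -iE "error|fail|btrfs" system.log kernel.log

# Pull the reallocated and pending sector counters out of the SMART stats.
grep -iE "reallocated|pending" disk_info.log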
- Sandshark · Jan 14, 2020 · Sensei
It sounds like your NAS is undecided about that drive's status. Being unable to format the drive, expand with it, or set it as a spare would normally mean it is part of the volume. But the display clearly shows it's not.
Download the log zip file and look at mdstat.log to see if it shows the drive as part of the volume. Paste it in a message here if you need help interpreting it. Maybe the GUI and the underlying Linux system are out of sync.
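For reference, the md lines in mdstat.log read like this (the values here are taken from the log gpaolo posts later in this thread):

# A healthy two-disk RAID1: both members listed, [2/2] [UU].
md126 : active raid1 sda3[0] sdb3[1]
971912832 blocks super 1.2 [2/2] [UU]
# A degraded one: a member missing, [2/1] [_U], where the
# underscore marks the empty slot.
md127 : active raid1 sdd3[1]
3902168832 blocks super 1.2 [2/1] [_U]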
The next thing you should do is ensure that your backup is up to date. When the NAS gets into a "grey area", as yours seems to have done, the risk of volume loss goes up.
If the log says the drive is part of the volume, then a reboot might clear things up. If it's not, removing and re-installing it (with power on) may change things. If you have the ability to test the drive with the vendor's tools on a PC while it's out, that's also a good step.
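If SSH access to the NAS is enabled (an assumption; none of it is required for the steps above), the same checks can be run directly with the standard mdadm and smartmontools commands. A rough sketch:

# The kernel's live view of all md arrays.
cat /proc/mdstat

# Does the suspect disk's data partition carry an md superblock at all,
# i.e. has it ever been added to an array? (sdc3 is assumed here;
# check which device the new disk actually is.)
mdadm --examine /dev/sdc3

# smartctl is the command-line counterpart of the vendor test tools:
# run a short self-test, then review the attributes and test result.
smartctl -t short /dev/sdc
smartctl -a /dev/sdc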
- gpaolo · Jan 15, 2020 · Luminary
Oh that's great, my reply has disappeared...
Ok I don't know what happened, sorry.
To sum up: thank you both, of course, for your suggestions. I have downloaded the logs; this is the content of mdstat:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sdd2[3] sdc2[2] sdb2[1] sda2[0]
1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md126 : active raid1 sda3[0] sdb3[1]
971912832 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sdd3[1]
3902168832 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sda1[0] sdd1[3] sdb1[1]
4190208 blocks super 1.2 [3/3] [UUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Fri May 18 21:43:37 2018
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Mon Jan 13 11:43:05 2020
State : active
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe75bc5:0 (local to host 2fe75bc5)
UUID : ec490464:fbbd3e14:c6a1b5d7:03ec6667
Events : 32940
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
3 8 49 2 active sync /dev/sdd1
/dev/md/Volume1TB-0:
Version : 1.2
Creation Time : Fri May 18 21:57:08 2018
Raid Level : raid1
Array Size : 971912832 (926.89 GiB 995.24 GB)
Used Dev Size : 971912832 (926.89 GiB 995.24 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Jan 14 22:18:07 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe75bc5:Volume1TB-0 (local to host 2fe75bc5)
UUID : 84a155e1:4166905b:af7b6c5d:15ad37ba
Events : 47
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
/dev/md/Volume4TB-0:
Version : 1.2
Creation Time : Fri May 18 21:57:35 2018
Raid Level : raid1
Array Size : 3902168832 (3721.40 GiB 3995.82 GB)
Used Dev Size : 3902168832 (3721.40 GiB 3995.82 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Tue Jan 14 12:06:24 2020
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe75bc5:Volume4TB-0 (local to host 2fe75bc5)
UUID : 30c9acd6:d608709f:a6ed80df:cb332b59
Events : 3340
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 51 1 active sync /dev/sdd3
I'm not sure if I understand it correctly: does it say that the new disk has been assigned to a new volume?
Over the past few days I have already tried rebooting and removing and reinstalling the disk, but nothing changed. I guess the only thing I can do now, when I get back, is to remove the disk, format it on a PC and put it back?
- StephenB · Jan 16, 2020 · Guru - Experienced User
gpaolo wrote:
Oh that's great, my reply has disappeared...
Ok I don't know what happened, sorry.
There is an automatic spam filter that caught your messages - I released them.
gpaolo wrote:
I have downloaded the logs; this is the content of mdstat:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sdd2[3] sdc2[2] sdb2[1] sda2[0]
1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md126 : active raid1 sda3[0] sdb3[1]
971912832 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sdd3[1]
3902168832 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sda1[0] sdd1[3] sdb1[1]
4190208 blocks super 1.2 [3/3] [UUU]
Let's start with this.
- md1 is the swap partition, and it uses all four disks.
- md0 is the OS partition, and it should also hold all four disks. It is missing sdc (which is normally the disk in bay 3, but not always). But md0 isn't reported as degraded - the system never added sdc to the array, so the array still counts as complete with three members.
- md126 is the 1 TB data volume, and it looks fine (sda and sdb are both in it).
- md127 is the 4 TB volume, and it is reported as degraded (sdc and sdd are both supposed to be in it, but sdc isn't).
It is weird that the system never added sdc to the array.
Are you seeing any disk errors for either sdc or sdd reported in system.log and kernel.log? (PM me if the spam filter kicks in again).
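For what it's worth, on a generic Linux mdadm setup the missing member would be re-added to the degraded mirror by hand roughly as below. This is only a sketch of what the NAS should have done automatically, not a documented ReadyNAS procedure, and letting the OS manage the volume is the supported route:

# Add sdc's data partition back into the degraded 4 TB mirror
# (device names taken from the mdstat output above).
mdadm --manage /dev/md127 --add /dev/sdc3

# Then watch the resync progress.
cat /proc/mdstat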
- gpaolo · Jan 16, 2020 · Luminary
OK, I don't know what is happening: I have already sent the message twice and it disappears... I will try again. Let's see if this one vanishes too...
- gpaolo · Jan 16, 2020 · Luminary
Thank you both for the suggestions. I have been trying to post for the past two days; I see the message go up, but when I reload the page the next day it is gone...
I have downloaded the logs; this is the content:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sdd2[3] sdc2[2] sdb2[1] sda2[0]
1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md126 : active raid1 sda3[0] sdb3[1]
971912832 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sdd3[1]
3902168832 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sda1[0] sdd1[3] sdb1[1]
4190208 blocks super 1.2 [3/3] [UUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Fri May 18 21:43:37 2018
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Mon Jan 13 11:43:05 2020
State : active
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe75bc5:0 (local to host 2fe75bc5)
UUID : ec490464:fbbd3e14:c6a1b5d7:03ec6667
Events : 32940
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
3 8 49 2 active sync /dev/sdd1
/dev/md/Volume1TB-0:
Version : 1.2
Creation Time : Fri May 18 21:57:08 2018
Raid Level : raid1
Array Size : 971912832 (926.89 GiB 995.24 GB)
Used Dev Size : 971912832 (926.89 GiB 995.24 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Jan 14 22:18:07 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe75bc5:Volume1TB-0 (local to host 2fe75bc5)
UUID : 84a155e1:4166905b:af7b6c5d:15ad37ba
Events : 47
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
/dev/md/Volume4TB-0:
Version : 1.2
Creation Time : Fri May 18 21:57:35 2018
Raid Level : raid1
Array Size : 3902168832 (3721.40 GiB 3995.82 GB)
Used Dev Size : 3902168832 (3721.40 GiB 3995.82 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Tue Jan 14 12:06:24 2020
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe75bc5:Volume4TB-0 (local to host 2fe75bc5)
UUID : 30c9acd6:d608709f:a6ed80df:cb332b59
Events : 3340
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 51 1 active sync /dev/sdd3
Do I understand correctly that it has created a new volume on the new disk?
I have already tried to reboot and to remove and reinstall the disk, but with no result.