Forum Discussion
gpaolo
Jan 13, 2020 · Luminary
Disk replacement - back sync and again degraded after reboot
Hi all, I am having some trouble with my RN524. I have 2x1TB and 2x4TB disks, each pair in RAID1 on my NAS. A few days ago, while I was away (because it always happens while I am away...) one of t...
- Jan 26, 2020
I have returned finally home and swapped the drives... and it works fine.
Oh well, problem solved, next time use more brain...
Thank you everyone, sorry for the mistake!
Sandshark
Jan 14, 2020 · Sensei
It sounds like your NAS is undecided about that drive's status. The fact that it won't let you format the drive, expand with it, or set it as a spare would normally mean the drive is part of the volume. But the display clearly shows it's not.
Download the log zip file and look at mdstat.log and see if it shows the drive as a part of the volume. Paste it in a message here if you need help interpreting. Maybe the GUI and the underlying Linux system are out of sync.
The next thing you should do is ensure that your backup is up to date. When the NAS gets into a "grey area", as yours seems to have done, volume loss becomes a bigger risk.
If the log says the drive is a part of the volume, then a reboot might clear things up. If it's not, removing and re-installing (with power on) may change things. If you have the ability to test the drive with vendor tools on a PC while it's out, that's also a good step.
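For reference, if SSH is enabled on the NAS, roughly the same information that ends up in mdstat.log can be read directly. This is only a sketch (run as root); the md device and partition names below are examples and may differ on your unit:
# Show which partitions each md array currently contains
cat /proc/mdstat
# Detailed status of one array, e.g. a data volume
mdadm --detail /dev/md127
# Check whether a given partition still carries RAID metadata
mdadm --examine /dev/sdc3
If --examine reports membership in the data volume but /proc/mdstat doesn't list that partition, that would confirm the GUI and the underlying md layer are out of sync.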
gpaolo
Jan 15, 2020 · Luminary
Oh that's great, my reply has disappeared...
Ok I don't know what happened, sorry.
To sum up: thank you both for your suggestions. I have downloaded the logs; this is the content of mdstat:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sdd2[3] sdc2[2] sdb2[1] sda2[0]
1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md126 : active raid1 sda3[0] sdb3[1]
971912832 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sdd3[1]
3902168832 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sda1[0] sdd1[3] sdb1[1]
4190208 blocks super 1.2 [3/3] [UUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Fri May 18 21:43:37 2018
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Mon Jan 13 11:43:05 2020
State : active
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe75bc5:0 (local to host 2fe75bc5)
UUID : ec490464:fbbd3e14:c6a1b5d7:03ec6667
Events : 32940
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
3 8 49 2 active sync /dev/sdd1
/dev/md/Volume1TB-0:
Version : 1.2
Creation Time : Fri May 18 21:57:08 2018
Raid Level : raid1
Array Size : 971912832 (926.89 GiB 995.24 GB)
Used Dev Size : 971912832 (926.89 GiB 995.24 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Jan 14 22:18:07 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe75bc5:Volume1TB-0 (local to host 2fe75bc5)
UUID : 84a155e1:4166905b:af7b6c5d:15ad37ba
Events : 47
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
/dev/md/Volume4TB-0:
Version : 1.2
Creation Time : Fri May 18 21:57:35 2018
Raid Level : raid1
Array Size : 3902168832 (3721.40 GiB 3995.82 GB)
Used Dev Size : 3902168832 (3721.40 GiB 3995.82 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Tue Jan 14 12:06:24 2020
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 2fe75bc5:Volume4TB-0 (local to host 2fe75bc5)
UUID : 30c9acd6:d608709f:a6ed80df:cb332b59
Events : 3340
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 51 1 active sync /dev/sdd3
I'm not sure if I understand it correctly: does it say that the new disk has been assigned to a new volume?
Over the past few days I have already tried rebooting and removing and reinstalling the disk, but nothing changed. I guess the only thing I can do now, when I get back, is to remove the disk, format it on a PC and put it back?
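(If it does come to wiping the disk on a PC, a rough sketch of what that usually looks like on a Linux machine, assuming the pulled drive shows up there as /dev/sdX; the device and partition names are placeholders, so double-check them before running anything destructive:)
# Identify the right disk first
lsblk -o NAME,SIZE,MODEL,SERIAL
# WARNING: destructive from here on
mdadm --zero-superblock /dev/sdX3   # clear any leftover RAID metadata on the old data partition
wipefs -a /dev/sdX                  # wipe remaining filesystem/RAID/partition-table signatures
After that the NAS should see the disk as blank and, depending on the volume settings, start a fresh sync when it is re-inserted.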
- StephenB · Jan 16, 2020 · Guru - Experienced User
gpaolo wrote:
Oh that's great, my reply has disappeared...
Ok I don't know what happened, sorry.
There is an automatic spam filter that caught your messages - I released them.
gpaolo wrote:
To sum up: thank you both for your suggestions. I have downloaded the logs; this is the content of mdstat:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid10 sdd2[3] sdc2[2] sdb2[1] sda2[0]
1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md126 : active raid1 sda3[0] sdb3[1]
971912832 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sdd3[1]
3902168832 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sda1[0] sdd1[3] sdb1[1]
4190208 blocks super 1.2 [3/3] [UUU]
Let's start with this.
- md1 is the swap partition, and it uses all four disks.
- md0 is the OS partition, and that should also hold 4 disks. It is missing sdc (which is normally the disk in bay 3, but not always). But md0 isn't degraded - the system never added sdc to the array.
- md126 is the 1 TB data volume, and it looks fine (sda and sdb are both in it).
- md127 is the 4 TB volume, and it is reported as degraded (sdc and sdd are both supposed to be in it, but sdc isn't).
It is weird that the system never added sdc to the array.
Are you seeing any disk errors for either sdc or sdd reported in system.log and kernel.log? (PM me if the spam filter kicks in again).
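If it helps narrow the search, something along these lines against the extracted log zip should pull out just the disk-related entries (a sketch; file names as they appear in the ReadyNAS log download):
# Kernel-level disk errors reference the device in brackets, e.g. [sdc]
grep -E '\[sd[cd]\]' kernel.log
# md events (drive failures, degraded arrays) logged by mdadm
grep -i 'mdadm' system.log
Matching the bracketed device name avoids unrelated hits from services whose names happen to contain "sdd".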
- gpaolo · Jan 16, 2020 · Luminary
Thanks, I was not aware of the spam filter...
So, yes, from system.log I get
Jan 13 11:43:05 NAS4-CASA-GP mdadm[4402]: Fail event detected on md device /dev/md0, component device /dev/sdc1
but nothing else about sdc
About sdd, what I see seems to be unrelated to the disk (this is the result of searching for "sdd"):
Line 287: Jan 13 09:28:39 NAS4-CASA-GP wsdd2[4449]: Terminated received.
Line 288: Jan 13 09:28:39 NAS4-CASA-GP wsdd2[4449]: terminating.
Line 323: Jan 13 09:30:57 NAS4-CASA-GP wsdd2[4460]: starting.
Line 342: Jan 13 09:30:57 NAS4-CASA-GP wsdd2[4460]: error: wsdd-mcast-v4: wsd_send_soap_msg: send
Line 350: Jan 13 09:30:57 NAS4-CASA-GP wsdd2[4460]: error: wsdd-mcast-v6: wsd_send_soap_msg: send
Line 351: Jan 13 09:30:57 NAS4-CASA-GP wsdd2[4460]: error: llmnr-mcast-v4: open_ep: IP_ADD_MEMBERSHIP
Line 390: Jan 13 09:31:01 NAS4-CASA-GP wsdd2[4460]: error: wsdd-mcast-v6: wsd_send_soap_msg: send
Line 493: Jan 13 09:36:34 NAS4-CASA-GP wsdd2[4460]: Terminated received.
Line 494: Jan 13 09:36:34 NAS4-CASA-GP wsdd2[4460]: terminating.
Line 526: Jan 13 09:37:33 NAS4-CASA-GP wsdd2[4381]: starting.
Line 571: Jan 13 09:37:34 NAS4-CASA-GP wsdd2[4381]: error: wsdd-mcast-v4: wsd_send_soap_msg: send
Line 572: Jan 13 09:37:34 NAS4-CASA-GP wsdd2[4381]: error: wsdd-mcast-v6: wsd_send_soap_msg: send
Line 573: Jan 13 09:37:34 NAS4-CASA-GP wsdd2[4381]: error: llmnr-mcast-v4: open_ep: IP_ADD_MEMBERSHIP
Line 621: Jan 13 09:37:37 NAS4-CASA-GP wsdd2[4381]: error: wsdd-mcast-v6: wsd_send_soap_msg: send
In kernel.log it looks like there is a lot of repetition of this error sequence:
Jan 14 22:18:40 NAS4-CASA-GP kernel: ata3.00: status: { DRDY ERR }
Jan 14 22:18:40 NAS4-CASA-GP kernel: ata3.00: error: { UNC }
Jan 14 22:18:40 NAS4-CASA-GP kernel: ata3.00: configured for UDMA/133
Jan 14 22:18:40 NAS4-CASA-GP kernel: sd 2:0:0:0: [sdc] tag#3 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Jan 14 22:18:40 NAS4-CASA-GP kernel: sd 2:0:0:0: [sdc] tag#3 Sense Key : Medium Error [current] [descriptor]
Jan 14 22:18:40 NAS4-CASA-GP kernel: sd 2:0:0:0: [sdc] tag#3 Add. Sense: Unrecovered read error - auto reallocate failed
Jan 14 22:18:40 NAS4-CASA-GP kernel: sd 2:0:0:0: [sdc] tag#3 CDB: Read(16) 88 00 00 00 00 00 00 90 00 48 00 00 00 08 00 00
Jan 14 22:18:40 NAS4-CASA-GP kernel: blk_update_request: I/O error, dev sdc, sector 9437257
Jan 14 22:18:40 NAS4-CASA-GP kernel: Buffer I/O error on dev sdc3, logical block 1, async page read
Jan 14 22:18:40 NAS4-CASA-GP kernel: ata3: EH complete
Jan 14 22:18:40 NAS4-CASA-GP kernel: do_marvell_9170_recover: ignoring PCI device (8086:8c02) at PCI#0
Jan 14 22:18:40 NAS4-CASA-GP kernel: ata3.00: exception Emask 0x0 SAct 0x400000 SErr 0x0 action 0x0
Jan 14 22:18:40 NAS4-CASA-GP kernel: ata3.00: irq_stat 0x40000008
Jan 14 22:18:40 NAS4-CASA-GP kernel: ata3.00: failed command: READ FPDMA QUEUED
Jan 14 22:18:40 NAS4-CASA-GP kernel: ata3.00: cmd 60/08:b0:48:00:90/00:00:00:00:00/40 tag 22 ncq 4096 in res 41/40:00:49:00:90/00:00:00:00:00/40 Emask 0x409 (media error) <F>
with no mention of sdd.
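(Those look like genuine media errors on sdc; the log itself shows the buffer I/O error landing on sdc3, the data partition. One way to confirm the drive's health is to look at its SMART counters; a sketch, assuming smartctl is available via SSH on the NAS or on a PC with the drive attached:)
# Overall health verdict plus the raw attribute table
smartctl -H -A /dev/sdc
# Attributes worth checking: Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable
# Optionally run a long self-test (reads the whole surface; can take hours)
smartctl -t long /dev/sdc
smartctl -l selftest /dev/sdc   # view the self-test result once it finishes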
- Sandshark · Jan 16, 2020 · Sensei
StephenB wrote:
It is weird that the system never added sdc to the array.
I've seen this before in some of my experiments, then had it trigger a re-sync that adds the drive when any other drive is added or removed. I do use some questionable drives in my experiments, so that could be part of the equation, but I've not figured out the driving factor(s).
- gpaolo · Jan 16, 2020 · Luminary
I'm using WD Red NAS HDDs, so it should not be due to the drives being questionable, I hope...