2nd Failed Drive
Hello,
Firstly, my apologies if the selected location is incorrect.
Approximately a month ago I replaced a failed 4 TB drive in my RN202 with a 6 TB drive. About two weeks later I got an error saying the other drive has failed. I had already replaced one of the drives under warranty a while back.
Is it normal for drives to fail like that? Can I test it to check whether it's salvageable, or should I replace that drive as well?
Thanks in advance for any assistance.
See the following mdstat.log:
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sdb2[1] sda2[0]
523264 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sda3[2](F) sdb3[3]
3902168832 blocks super 1.2 [2/1] [_U]
md0 : active raid1 sda1[2](F) sdb1[1]
4190208 blocks super 1.2 [2/1] [_U]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Wed Sep 4 21:25:50 2019
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Aug 15 06:58:29 2020
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Consistency Policy : unknown
Name : 11654714:0 (local to host 11654714)
UUID : 99580106:e27de495:c57ef5da:eb68d014
Events : 150242
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 17 1 active sync /dev/sdb1
2 8 1 - faulty /dev/sda1
/dev/md/data-0:
Version : 1.2
Creation Time : Wed Sep 4 21:26:24 2019
Raid Level : raid1
Array Size : 3902168832 (3721.40 GiB 3995.82 GB)
Used Dev Size : 3902168832 (3721.40 GiB 3995.82 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sat Aug 15 06:54:00 2020
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Consistency Policy : unknown
Name : 11654714:data-0 (local to host 11654714)
UUID : 03566216:f946ec04:670d8013:17b8d150
Events : 9062
Number Major Minor RaidDevice State
- 0 0 0 removed
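For reference, the [2/1] [_U] lines above mean each mirror is running on only one of its two members, with sda flagged faulty (F). Assuming SSH access is enabled on the NAS (an assumption, not something shown in this thread), the same state can be re-checked from a shell with standard Linux/mdadm commands:

# Show the live status of all md arrays
cat /proc/mdstat

# Detailed state of the data volume (array name taken from the log above)
mdadm --detail /dev/md127

# Kernel messages often record why a member was kicked out of the array
dmesg | grep -i sda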
Re: 2nd Failed Drive
Well, they certainly can fail in rapid succession.
Can you look in disk_info.log and post the stats for drive 1 (sda) here?
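If the NAS is reachable over SSH and smartmontools is available (an assumption, since the ReadyNAS log bundle is usually used instead), the same drive stats can be pulled directly:

# Print all SMART attributes, health status and the ATA error log for the first drive
smartctl -a /dev/sda

# Attributes worth checking: Reallocated_Sector_Ct (5), Current_Pending_Sector (197)
# and Offline_Uncorrectable (198)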
Re: 2nd Failed Drive
Thanks, StevenB.
Contents of disk_info.log:
Device: sda
Controller: 0
Channel: 0
Model: WDC_WD40EFRX-68WT0N0
Serial: WD-WCC4E6VVT159
Firmware: 82.00A82
Class: SATA
Sectors: 7814037168
Pool: data
PoolType: RAID 1
PoolState: 3
PoolHostId: 11654714
Health data
ATA Error Count: 0
Device: sdb
Controller: 0
Channel: 1
Model: WDC WD60EFAX-68SHWN0
Serial: WD-WX32D20HK1DA
Firmware: 82.00A82W
Class: SATA
RPM: 5400
Sectors: 11721045168
Pool: data
PoolType: RAID 1
PoolState: 3
PoolHostId: 11654714
Health data
ATA Error Count: 0
Reallocated Sectors: 0
Reallocation Events: 0
Spin Retry Count: 0
Current Pending Sector Count: 0
Uncorrectable Sector Count: 0
Temperature: 18
Start/Stop Count: 77
Power-On Hours: 495
Power Cycle Count: 6
Load Cycle Count: 74
Re: 2nd Failed Drive
There should be more health information for the first disk. Maybe try powering down the NAS, connecting that drive to a Windows PC, and then testing it with WD's Lifeguard Diagnostics.
The second disk is an SMR disk, so it can be very slow to sync. When you upgrade the other one, I suggest getting a WD60EFRX instead.
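As an aside, if a Windows PC and WD Lifeguard are not handy, a rough equivalent is a SMART extended self-test from any Linux machine with smartmontools; this is only a sketch, and /dev/sdX is a placeholder for whatever device name the pulled drive gets when connected:

# Start a long (extended) SMART self-test on the suspect drive
smartctl -t long /dev/sdX

# After the test finishes (typically several hours on a 4 TB drive),
# review the self-test log and the attribute table
smartctl -l selftest /dev/sdX
smartctl -A /dev/sdX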
Re: 2nd Failed Drive
Thanks very much, StevenB. I'll give those suggestions a try.