Forum Discussion
bluewomble (Aspirant)
Mar 13, 2011
Disk Failure Detected...
I've recently purchased a ReadyNAS Ultra 6 along with six 2 TB Seagate ST2000DL003 disks (which are on the HCL).
I've set up the NAS in a dual-redundancy X-RAID2 configuration and have started copying all the data over the network from my old ReadyNAS NV to the new Ultra 6...
About halfway through copying (on 6th March), I got a disk failure detected (on channel 4). I powered down the NAS, took the disk out and reinserted it, assuming there might be some kind of connection problem... When I powered back up it detected the disk, tested it and started to resync (which takes about 24 hours)... I left it alone while it did that, and then it seemed to be OK, so I started copying the rest of my data across. There is nothing in the SMART+ log for disk 4 which would indicate that there was ever a problem with that disk.
A few minutes ago, I just got another disk failure (this time on channel 2). Exactly the same story... powered down and then back up again, the disk comes back to life and the NAS starts testing it and resyncing it... again, there is nothing in the SMART+ log for disk 2 which indicates (to me at least) that there was ever a problem.
After both occasions, I've downloaded the system logs from the NAS, but I'm not sure what to do with them. Is there something in the log which would show what exactly failed?
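(For anyone wanting to pick the relevant lines out of a downloaded system.log, here is a rough sketch of the sort of filter that finds them; the filename and the exact message strings are only assumptions taken from the excerpts quoted further down:)

import re

# Substrings that show up around a disk being dropped in the excerpts below:
# libata error handling, failed SATA link resets, I/O errors, and md kicking
# the member out of the array.
PATTERNS = [
    "exception Emask",
    "COMRESET failed",
    "reset failed, giving up",
    "end_request: I/O error",
    "Disk failure on",
]

def scan(path="system.log"):
    wanted = re.compile("|".join(re.escape(p) for p in PATTERNS))
    with open(path, errors="replace") as f:
        for line in f:
            if wanted.search(line):
                print(line.rstrip())

if __name__ == "__main__":
    scan()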
Any idea what's going on here? Have I got a couple of dud disks which need to be sent back, or is there something else going on? If they are duds, I'd need to be able to prove to the retailer that they were... the only indication I have of a problem is that the ReadyNAS Ultra 6 _said_ they had failed... but they both seem to be working fine now.
Thanks,
Ash.
P.S. Here's the SMART+ report from disk 2:
SMART Information for Disk 2
Model: ST2000DL003-9VT166
Serial: 5YD2196G
Firmware: CC32
SMART Attribute    Value
Spin Up Time 0
Start Stop Count 12
Reallocated Sector Count 0
Power On Hours 151
Spin Retry Count 0
Power Cycle Count 12
Reported Uncorrect 0
High Fly Writes 0
Airflow Temperature Cel 42
G-Sense Error Rate 0
Power-Off Retract Count 6
Load Cycle Count 12
Temperature Celsius 42
Current Pending Sector 0
Offline Uncorrectable 0
UDMA CRC Error Count 0
Head Flying Hours 221474283585687
ATA Error Count 0
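(If you want to re-check those attributes straight from the drive, something like the following works; it's just a minimal wrapper around smartctl from smartmontools, and the /dev/sdb device node is an assumption based on the log below, so adjust it to the suspect disk:)

import subprocess

def smart_attributes(device="/dev/sdb"):
    # "-A" prints the vendor attribute table (Reallocated_Sector_Ct,
    # Current_Pending_Sector, UDMA_CRC_Error_Count, etc.)
    result = subprocess.run(["smartctl", "-A", device],
                            capture_output=True, text=True, check=False)
    return result.stdout

if __name__ == "__main__":
    print(smart_attributes())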
This looks like the appropriate section of system.log for the failure which occurred today:
Mar 13 20:00:09 ultranas ntpdate[11162]: step time server 194.238.48.3 offset 0.310812 sec
Mar 13 20:16:27 ultranas kernel: ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Mar 13 20:16:27 ultranas kernel: ata2.00: failed command: FLUSH CACHE EXT
Mar 13 20:16:27 ultranas kernel: ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
Mar 13 20:16:27 ultranas kernel: res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
Mar 13 20:16:27 ultranas kernel: ata2.00: status: { DRDY }
Mar 13 20:16:27 ultranas kernel: ata2: hard resetting link
Mar 13 20:16:33 ultranas kernel: ata2: link is slow to respond, please be patient (ready=0)
Mar 13 20:16:37 ultranas kernel: ata2: COMRESET failed (errno=-16)
Mar 13 20:16:37 ultranas kernel: ata2: hard resetting link
Mar 13 20:16:43 ultranas kernel: ata2: link is slow to respond, please be patient (ready=0)
Mar 13 20:16:47 ultranas kernel: ata2: COMRESET failed (errno=-16)
Mar 13 20:16:47 ultranas kernel: ata2: hard resetting link
Mar 13 20:16:53 ultranas kernel: ata2: link is slow to respond, please be patient (ready=0)
Mar 13 20:17:23 ultranas kernel: ata2: COMRESET failed (errno=-16)
Mar 13 20:17:23 ultranas kernel: ata2: limiting SATA link speed to 1.5 Gbps
Mar 13 20:17:23 ultranas kernel: ata2: hard resetting link
Mar 13 20:17:28 ultranas kernel: ata2: COMRESET failed (errno=-16)
Mar 13 20:17:28 ultranas kernel: ata2: reset failed, giving up
Mar 13 20:17:28 ultranas kernel: ata2.00: disabled
Mar 13 20:17:28 ultranas kernel: ata2.00: device reported invalid CHS sector 0
Mar 13 20:17:28 ultranas kernel: ata2: EH complete
Mar 13 20:17:28 ultranas kernel: end_request: I/O error, dev sdb, sector 0
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] Unhandled error code
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] CDB: Write(10): 2a 00 00 90 00 50 00 00 02 00
Mar 13 20:17:28 ultranas kernel: end_request: I/O error, dev sdb, sector 9437264
Mar 13 20:17:28 ultranas kernel: end_request: I/O error, dev sdb, sector 9437264
Mar 13 20:17:28 ultranas kernel: **************** super written barrier kludge on md2: error==IO 0xfffffffb
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] Unhandled error code
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] CDB: Write(10): 2a 00 00 00 00 48 00 00 02 00
Mar 13 20:17:28 ultranas kernel: end_request: I/O error, dev sdb, sector 72
Mar 13 20:17:28 ultranas kernel: end_request: I/O error, dev sdb, sector 72
Mar 13 20:17:28 ultranas kernel: **************** super written barrier kludge on md0: error==IO 0xfffffffb
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] Unhandled error code
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] CDB: Read(10): 28 00 00 51 8f 30 00 00 28 00
Mar 13 20:17:28 ultranas kernel: end_request: I/O error, dev sdb, sector 5345072
Mar 13 20:17:28 ultranas kernel: raid1: sdb1: rescheduling sector 5342960
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] Unhandled error code
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] CDB: Write(10): 2a 00 00 90 00 50 00 00 02 00
Mar 13 20:17:28 ultranas kernel: end_request: I/O error, dev sdb, sector 9437264
Mar 13 20:17:28 ultranas kernel: md: super_written gets error=-5, uptodate=0
Mar 13 20:17:28 ultranas kernel: raid5: Disk failure on sdb5, disabling device.
Mar 13 20:17:28 ultranas kernel: raid5: Operation continuing on 5 devices.
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] Unhandled error code
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 13 20:17:28 ultranas kernel: sd 1:0:0:0: [sdb] CDB: Write(10): 2a 00 00 00 00 48 00 00 02 00
Mar 13 20:17:28 ultranas kernel: end_request: I/O error, dev sdb, sector 72
Mar 13 20:17:28 ultranas kernel: md: super_written gets error=-5, uptodate=0
Mar 13 20:17:28 ultranas kernel: raid1: Disk failure on sdb1, disabling device.
Mar 13 20:17:28 ultranas kernel: raid1: Operation continuing on 5 devices.
Mar 13 20:17:28 ultranas kernel: RAID5 conf printout:
Mar 13 20:17:28 ultranas kernel: --- rd:6 wd:5
Mar 13 20:17:28 ultranas kernel: disk 0, o:1, dev:sda5
Mar 13 20:17:28 ultranas kernel: disk 1, o:0, dev:sdb5
Mar 13 20:17:28 ultranas kernel: disk 2, o:1, dev:sdc5
Mar 13 20:17:28 ultranas kernel: disk 3, o:1, dev:sdd5
Mar 13 20:17:28 ultranas kernel: disk 4, o:1, dev:sde5
Mar 13 20:17:28 ultranas kernel: disk 5, o:1, dev:sdf5
Mar 13 20:17:28 ultranas kernel: RAID5 conf printout:
Mar 13 20:17:28 ultranas kernel: --- rd:6 wd:5
Mar 13 20:17:28 ultranas kernel: disk 0, o:1, dev:sda5
Mar 13 20:17:28 ultranas kernel: disk 2, o:1, dev:sdc5
Mar 13 20:17:28 ultranas kernel: disk 3, o:1, dev:sdd5
Mar 13 20:17:28 ultranas kernel: disk 4, o:1, dev:sde5
Mar 13 20:17:28 ultranas kernel: disk 5, o:1, dev:sdf5
Mar 13 20:17:28 ultranas kernel: RAID1 conf printout:
Mar 13 20:17:28 ultranas kernel: --- wd:5 rd:6
Mar 13 20:17:28 ultranas kernel: disk 0, wo:0, o:1, dev:sda1
Mar 13 20:17:28 ultranas kernel: disk 1, wo:1, o:0, dev:sdb1
Mar 13 20:17:28 ultranas kernel: disk 2, wo:0, o:1, dev:sdc1
Mar 13 20:17:28 ultranas kernel: disk 3, wo:0, o:1, dev:sdd1
Mar 13 20:17:28 ultranas kernel: disk 4, wo:0, o:1, dev:sde1
Mar 13 20:17:28 ultranas kernel: disk 5, wo:0, o:1, dev:sdf1
Mar 13 20:17:28 ultranas kernel: RAID1 conf printout:
Mar 13 20:17:28 ultranas kernel: --- wd:5 rd:6
Mar 13 20:17:28 ultranas kernel: disk 0, wo:0, o:1, dev:sda1
Mar 13 20:17:28 ultranas kernel: disk 2, wo:0, o:1, dev:sdc1
Mar 13 20:17:28 ultranas kernel: disk 3, wo:0, o:1, dev:sdd1
Mar 13 20:17:28 ultranas kernel: disk 4, wo:0, o:1, dev:sde1
Mar 13 20:17:28 ultranas kernel: disk 5, wo:0, o:1, dev:sdf1
Mar 13 20:17:28 ultranas kernel: raid1: sdf1: redirecting sector 5342960 to another mirror
Mar 13 20:17:32 ultranas RAIDiator: Disk failure detected.\n\nIf the failed disk is used in a RAID level 1, 5, or X-RAID volume, please note that volume is now unprotected, and an additional disk failure may render that volume dead. If this disk is a part of a RAID 6 volume, your volume is still protected if this is your first failure. A 2nd disk failure will make your volume unprotected. It is recommended that you replace the failed disk as soon as possible to maintain optimal protection of your volume.\n\n[Sun Mar 13 20:17:29 WET 2011]
Mar 13 20:20:24 ultranas kernel: program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
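(As a side check, the md layer's own view of the arrays can be read from /proc/mdstat; the sketch below assumes the standard Linux mdstat layout, where a degraded array shows a status like "[6/5] [U_UUUU]", one missing member out of six:)

import re

def degraded_arrays(path="/proc/mdstat"):
    with open(path) as f:
        text = f.read()
    # Status fields look like "[6/5] [U_UUUU]": total members / working
    # members, followed by a per-member up ("U") / down ("_") map.
    found = []
    for match in re.finditer(r"\[(\d+)/(\d+)\]\s*\[([U_]+)\]", text):
        total, working, flags = int(match.group(1)), int(match.group(2)), match.group(3)
        if working < total or "_" in flags:
            found.append(match.group(0))
    return found

if __name__ == "__main__":
    for status in degraded_arrays():
        print("degraded array:", status)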
and here is what looks like the relevant part of the log from the failure on 6th March:
Mar 6 16:00:07 nas-EA-A6-42 ntpdate[12452]: step time server 62.84.188.34 offset -0.103568 sec
Mar 6 18:48:21 nas-EA-A6-42 kernel: ata4.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Mar 6 18:48:22 nas-EA-A6-42 kernel: ata4.00: failed command: FLUSH CACHE EXT
Mar 6 18:48:22 nas-EA-A6-42 kernel: ata4.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
Mar 6 18:48:22 nas-EA-A6-42 kernel: res 40/00:00:b8:f7:0e/00:00:00:00:00/40 Emask 0x4 (timeout)
Mar 6 18:48:22 nas-EA-A6-42 kernel: ata4.00: status: { DRDY }
Mar 6 18:48:22 nas-EA-A6-42 kernel: ata4: hard resetting link
Mar 6 18:48:27 nas-EA-A6-42 kernel: ata4: link is slow to respond, please be patient (ready=0)
Mar 6 18:48:32 nas-EA-A6-42 kernel: ata4: COMRESET failed (errno=-16)
Mar 6 18:48:32 nas-EA-A6-42 kernel: ata4: hard resetting link
Mar 6 18:48:37 nas-EA-A6-42 kernel: ata4: link is slow to respond, please be patient (ready=0)
Mar 6 18:48:42 nas-EA-A6-42 kernel: ata4: COMRESET failed (errno=-16)
Mar 6 18:48:42 nas-EA-A6-42 kernel: ata4: hard resetting link
Mar 6 18:48:47 nas-EA-A6-42 kernel: ata4: link is slow to respond, please be patient (ready=0)
Mar 6 18:49:17 nas-EA-A6-42 kernel: ata4: COMRESET failed (errno=-16)
Mar 6 18:49:17 nas-EA-A6-42 kernel: ata4: limiting SATA link speed to 1.5 Gbps
Mar 6 18:49:17 nas-EA-A6-42 kernel: ata4: hard resetting link
Mar 6 18:49:22 nas-EA-A6-42 kernel: ata4: COMRESET failed (errno=-16)
Mar 6 18:49:22 nas-EA-A6-42 kernel: ata4: reset failed, giving up
Mar 6 18:49:22 nas-EA-A6-42 kernel: ata4.00: disabled
Mar 6 18:49:22 nas-EA-A6-42 kernel: ata4: EH complete
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Write(10): 2a 00 00 00 00 48 00 00 02 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 72
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 72
Mar 6 18:49:22 nas-EA-A6-42 kernel: **************** super written barrier kludge on md0: error==IO 0xfffffffb
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Write(10): 2a 00 00 93 9e 80 00 00 08 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 9674368
Mar 6 18:49:22 nas-EA-A6-42 kernel: raid5: Disk failure on sdd5, disabling device.
Mar 6 18:49:22 nas-EA-A6-42 kernel: raid5: Operation continuing on 5 devices.
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Write(10): 2a 00 34 c5 68 48 00 00 80 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 885352520
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Write(10): 2a 00 34 c6 f0 c8 00 00 50 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 885453000
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Read(10): 28 00 00 91 28 c8 00 00 38 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 9513160
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Read(10): 28 00 00 91 29 10 00 00 10 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 9513232
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Read(10): 28 00 00 91 29 28 00 00 10 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 9513256
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Read(10): 28 00 00 91 29 40 00 00 08 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 9513280
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Read(10): 28 00 00 93 88 48 00 00 08 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 9668680
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Read(10): 28 00 00 93 a1 90 00 00 10 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 9675152
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Read(10): 28 00 34 c5 38 48 00 00 08 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 885340232
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Read(10): 28 00 34 c5 64 48 00 00 80 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 885351496
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Read(10): 28 00 34 c6 f1 18 00 00 30 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 885453080
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Write(10): 2a 00 00 80 00 48 00 00 02 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 8388680
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 8388680
Mar 6 18:49:22 nas-EA-A6-42 kernel: **************** super written barrier kludge on md1: error==IO 0xfffffffb
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Read(10): 28 00 00 31 8d 58 00 00 28 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 3247448
Mar 6 18:49:22 nas-EA-A6-42 kernel: raid1: sdd1: rescheduling sector 3245336
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB:
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Unhandled error code
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Mar 6 18:49:22 nas-EA-A6-42 kernel: sd 3:0:0:0: [sdd] CDB: Write(10)Write(10): 2a 00 00 00 00 48 00 00 02 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 72
Mar 6 18:49:22 nas-EA-A6-42 kernel: :md: super_written gets error=-5, uptodate=0
Mar 6 18:49:22 nas-EA-A6-42 kernel: 2a
Mar 6 18:49:22 nas-EA-A6-42 kernel: raid1: Disk failure on sdd1, disabling device.
Mar 6 18:49:22 nas-EA-A6-42 kernel: raid1: Operation continuing on 5 devices.
Mar 6 18:49:22 nas-EA-A6-42 kernel: 00 00 80 00 48 00 00 02 00
Mar 6 18:49:22 nas-EA-A6-42 kernel: end_request: I/O error, dev sdd, sector 8388680
Mar 6 18:49:22 nas-EA-A6-42 kernel: md: super_written gets error=-5, uptodate=0
Mar 6 18:49:22 nas-EA-A6-42 kernel: raid5: Disk failure on sdd2, disabling device.
Mar 6 18:49:22 nas-EA-A6-42 kernel: raid5: Operation continuing on 5 devices.
Mar 6 18:49:23 nas-EA-A6-42 kernel: RAID1 conf printout:
Mar 6 18:49:23 nas-EA-A6-42 kernel: --- wd:5 rd:6
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 0, wo:0, o:1, dev:sda1
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 1, wo:0, o:1, dev:sdb1
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 2, wo:0, o:1, dev:sdc1
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 3, wo:1, o:0, dev:sdd1
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 4, wo:0, o:1, dev:sde1
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 5, wo:0, o:1, dev:sdf1
Mar 6 18:49:23 nas-EA-A6-42 kernel: RAID1 conf printout:
Mar 6 18:49:23 nas-EA-A6-42 kernel: --- wd:5 rd:6
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 0, wo:0, o:1, dev:sda1
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 1, wo:0, o:1, dev:sdb1
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 2, wo:0, o:1, dev:sdc1
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 4, wo:0, o:1, dev:sde1
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 5, wo:0, o:1, dev:sdf1
Mar 6 18:49:23 nas-EA-A6-42 kernel: RAID5 conf printout:
Mar 6 18:49:23 nas-EA-A6-42 kernel: --- rd:6 wd:5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 0, o:1, dev:sda5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 1, o:1, dev:sdb5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 2, o:1, dev:sdc5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 3, o:0, dev:sdd5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 4, o:1, dev:sde5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 5, o:1, dev:sdf5
Mar 6 18:49:23 nas-EA-A6-42 kernel: RAID5 conf printout:
Mar 6 18:49:23 nas-EA-A6-42 kernel: --- rd:6 wd:5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 0, o:1, dev:sda5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 1, o:1, dev:sdb5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 2, o:1, dev:sdc5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 4, o:1, dev:sde5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 5, o:1, dev:sdf5
Mar 6 18:49:23 nas-EA-A6-42 kernel: RAID5 conf printout:
Mar 6 18:49:23 nas-EA-A6-42 kernel: --- rd:6 wd:5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 0, o:1, dev:sda2
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 1, o:1, dev:sdb2
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 2, o:1, dev:sdc2
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 3, o:0, dev:sdd2
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 4, o:1, dev:sde2
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 5, o:1, dev:sdf2
Mar 6 18:49:23 nas-EA-A6-42 kernel: raid1: sdb1: redirecting sector 3245336 to another mirror
Mar 6 18:49:23 nas-EA-A6-42 kernel: RAID5 conf printout:
Mar 6 18:49:23 nas-EA-A6-42 kernel: --- rd:6 wd:5
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 0, o:1, dev:sda2
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 1, o:1, dev:sdb2
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 2, o:1, dev:sdc2
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 4, o:1, dev:sde2
Mar 6 18:49:23 nas-EA-A6-42 kernel: disk 5, o:1, dev:sdf2
Mar 6 18:49:53 nas-EA-A6-42 RAIDiator: Disk failure detected.\n\nIf the failed disk is used in a RAID level 1, 5, or X-RAID volume, please note that volume is now unprotected, and an additional disk failure may render that volume dead. If this disk is a part of a RAID 6 volume, your volume is still protected if this is your first failure. A 2nd disk failure will make your volume unprotected. It is recommended that you replace the failed disk as soon as possible to maintain optimal protection of your volume.\n\n[Sun Mar 6 18:49:51 WET 2011]
144 Replies
Replies have been turned off for this discussion
- thestumper (Aspirant): Count me in...
I've experienced all the problems in this thread and have the same log files to show for it. I've been through all the testing tools and everything says the drives are fine. I talked to tech support last night and the guy was really nice, but not very helpful (not his fault). They offered to send me a new Ultra 4+, but I fail to see how this is going to solve the problem. I'm running OS X Snow Leopard on two systems in my home, and both are leveraging Time Machine to the NAS.
I'm pretty frustrated with this. I have an official case now, but this has been going on for almost 3 months. I don't think I should have to cough up additional cash to buy new drives given that these Seagates were listed in the official HCL. I see one of three options as being acceptable:
1) Fix this issue (preferred)
2) Credit me for the drives
3) Credit me for the Ultra 4+
I don't have much confidence in (1) actually happening any time soon, but that would be the best option. Unfortunately, I can't wait around for a protracted period of time hoping that Netgear figures this out while my data is at risk. (2) would be the next best solution - they said the drives were good, I bought them, and they were wrong. (3) is an option of last resort. I like my Ultra 4+; when it works it's a great piece of kit. I don't want to have to consider another vendor (QNAP, etc.) because I'm not sure these drives are supported there either (but I will be checking).
Anyway, I have to contact support again, and I'll be referencing this thread. Beyond that, I'm not sure what my options are other than crossing Netgear off the list of vendors I do business with. That would suck, because I've always had really good luck with just about everything I've ever bought (or recommended) from them.
-Eric
- Pian (Aspirant): Exactly the same problem here.
But - in a rather ironic twist - my latest failure happened while an L3 engineer was controlling my Mac using TeamViewer! I had a Time Machine backup running and was having AFP issues which he was examining when, lo and behold, up popped the disk failure popup.
He took the logs and has raised the issue with Engineering.
In the meantime I have - once again - removed and reseated the disk, and am currently busy re-syncing.
- bokvast (Aspirant): Once again my NAS reported a disc failure. I reinserted the drive and it's now resyncing... What should I do??
NETGEAR!?!
I NEED HELP!!!
- ferg1 (Guide): I've just had a disc failure. This makes 4 "real" failures, and 3 where the disc (ST2000DL003) is reported as failed but starts working again after removing and reinserting it (and rebooting).
- ferg1 (Guide): Looks like I lied. I've just had an 8th failure. The ReadyNAS literally reported a disc failure just now as I clicked SUBMIT on that post. I'm gonna open a support case.
- thestumper (Aspirant): Update:
It has been almost three weeks since I opened a case, and it has been a frustrating experience so far. I have "factory defaulted" the unit once, and a week or so later I had another disk failure; I think I'm over ten at this point. After the last failure, the "engineers" looked at the unit and said they could find no problems, and nothing in the logs. I gave tech support remote access and booted the unit into some sort of maintenance/debug mode. Oddly enough, they wanted Telnet (port 23) forwarded, and not SSH (22), which I found strange. Perhaps "debug mode" only has hooks for Telnet?
Anyway, I now have a new firmware to try - it appears to be Beta code: 4.2.20-T15. This was the recommendation given to me by engineering. I applied the new firmware about 20 minutes ago, so now it's time to sit and wait.
I will commend Tech Support on their responsiveness; I have had difficulty conveying/re-conveying information to them at times, and so far the answers I have received have not been too helpful, but they have been polite and quick to respond. It feels like they are just walking me through a script for the most part, but we will see how the new firmware performs. The next step is an RMA, which will almost certainly NOT help, but again, it's part of the process for now.
I will keep everyone updated as this evolves.
- turbogizzmo (Aspirant): Add me to the list. Seagate RMA'd a few drives and Netgear RMA'd the Pro. I figured out the power-off, reseat-drive, power-on trick, but it's starting to become annoying because I run an ESX server off this system. Not sure if I should wait for Seagate/Netgear to sort this out or if I should start sourcing other solutions. (My old Buffalo TeraStation Pro has run flawlessly, just sloooow.)
Sad but glad to find this thread......
- PapaBear1 (Apprentice): Maybe Netgear needs to take these drives off the HCL. I know that even as a long-time Seagate user (very satisfied) I'm staying about as far away from these drives as I can, and am advising my friends to do so as well, especially if they use Macs.
- opt2bout (Aspirant): I have an RMA'd chassis, same drive ST2000DL003, which I only bought because of the HCL. I have two in this unit, and they will alternate failed status randomly whenever I reflash the unit. (Note that just removing and re-adding the drive to the chassis doesn't do it for me; I have to power down and up before it adds the "failed" drive back into the array.) Then after a couple of Time Machine backups, one of the new ST2000DL003 drives fails.
I too am going to try the latest beta. I shied away from this because support won't support betas, but since 4.2.19 doesn't fix the problem, and I have an open case like the rest of you, what do I have to lose??!?
I did get an update from the support engineer back on 11/10 with a link to Seagate's forums ????
http://forums.seagate.com/t5/Barracuda- ... 154/page/2
- thestumper (Aspirant): In my case, Netgear actually recommended the beta code. It didn't help - I have yet another failure with it. I've noted it on my open case, so now it's in Netgear's court. As far as I am concerned, they need to fix this (quickly), refund my purchase price for the NAS, or refund my purchase price for the disks. I have a unit I can't trust; to be honest I can barely use it at this point, and I bought the drives that Netgear themselves said were compatible. I have a copy of the old list in case they change it, which they probably should (at least make a note). This has been happening since August for me, and if we don't get a resolution in a couple of weeks, we're headed to small claims court. I hate to do that, but I don't think they have a clue how to fix the issue. I love the thing when it works, but it is useless to me at the moment and has been for quite some time.
Keep the good posts coming - it's nice to know I'm not alone in this :)