
Forum Discussion

Jophus
Luminary
Dec 16, 2016

Ultra 6 6.4.2 disk fail issues

Hey all.  Wanted to relay a story about how my Ultra 6 on 6.4.2 suffered a disk failure, rebuilt, locked up, then worked perfectly...

 

The disk in question was a 2TB Seagate:

Dec 14 13:38:22 Sextuple kernel: ata6.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Dec 14 13:38:22 Sextuple kernel: ata6.00: failed command: FLUSH CACHE EXT
Dec 14 13:38:22 Sextuple kernel: ata6.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 19
Dec 14 13:38:22 Sextuple kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Dec 14 13:38:22 Sextuple kernel: ata6.00: status: { DRDY }
Dec 14 13:38:22 Sextuple kernel: ata6: hard resetting link
Dec 14 13:38:27 Sextuple kernel: ata6: link is slow to respond, please be patient (ready=0)
Dec 14 13:38:32 Sextuple kernel: ata6: COMRESET failed (errno=-16)
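
(If you want to sanity-check the drive itself when these timeouts start, SMART is the quickest look. Just a sketch - it assumes smartmontools is present on OS6, that you can SSH in, and that the flaky disk is still /dev/sdf:)

smartctl -H /dev/sdf        # overall PASSED/FAILED verdict from the drive
smartctl -A /dev/sdf        # attribute table - watch Reallocated_Sector_Ct and Current_Pending_Sector
smartctl -l error /dev/sdf  # the drive's own error log, if it has recorded anything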

 

then:

Dec 14 13:39:22 Sextuple kernel: blk_update_request: I/O error, dev sdf, sector 0
Dec 14 13:39:22 Sextuple kernel: blk_update_request: I/O error, dev sdf, sector 9437256
Dec 14 13:39:22 Sextuple kernel: md: super_written gets error=-5, uptodate=0
Dec 14 13:39:22 Sextuple kernel: md/raid:md127: Disk failure on sdf3, disabling device.
Dec 14 13:39:22 Sextuple kernel: md/raid:md127: Operation continuing on 5 devices.
Dec 14 13:39:22 Sextuple kernel: blk_update_request: I/O error, dev sdf, sector 72
Dec 14 13:39:22 Sextuple kernel: md: super_written gets error=-5, uptodate=0
Dec 14 13:39:22 Sextuple kernel: md/raid1:md0: Disk failure on sdf1, disabling device.
Dec 14 13:39:22 Sextuple kernel: md/raid1:md0: Operation continuing on 5 devices.
Dec 14 13:39:22 Sextuple kernel: sd 5:0:0:0: [sdf] tag#23 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Dec 14 13:39:22 Sextuple kernel: sd 5:0:0:0: [sdf] tag#23 CDB: Read(10) 28 00 5e 7d 7a 40 00 00 b0 00
Dec 14 13:39:22 Sextuple kernel: blk_update_request: I/O error, dev sdf, sector 1585281600
Dec 14 13:39:22 Sextuple kernel: sd 5:0:0:0: [sdf] tag#3 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Dec 14 13:39:22 Sextuple kernel: sd 5:0:0:0: [sdf] tag#3 CDB: Read(10) 28 00 5e 7d 83 40 00 00 80 00
Dec 14 13:39:22 Sextuple kernel: blk_update_request: I/O error, dev sdf, sector 1585283904
Dec 14 13:39:22 Sextuple kernel: sd 5:0:0:0: [sdf] tag#6 FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
Dec 14 13:39:22 Sextuple kernel: sd 5:0:0:0: [sdf] tag#6 CDB: Read(10) 28 00 94 3d 88 c0 00 00 40 00
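
(At this point md has started kicking sdf out of the arrays. A quick way to see exactly which members each array has lost - a sketch, assuming you're SSH'd into the NAS - is:)

cat /proc/mdstat             # per-array member list plus a [UUUUU_]-style health map
mdadm --detail /dev/md127    # the data array; failed/removed members are listed at the bottom
mdadm --detail /dev/md0      # the OS mirror, which also dropped sdf1 above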

 

then:

Dec 14 13:50:24 Sextuple kernel: blk_update_request: 13 callbacks suppressed
Dec 14 13:50:24 Sextuple kernel: blk_update_request: I/O error, dev sdf, sector 8388680
Dec 14 13:50:24 Sextuple kernel: md: super_written gets error=-5, uptodate=0
Dec 14 13:50:24 Sextuple kernel: md/raid:md1: Disk failure on sdf2, disabling device.
Dec 14 13:50:24 Sextuple kernel: md/raid:md1: Operation continuing on 5 devices.

 

lastly:

Dec 14 13:58:02 Sextuple kernel: sd 5:0:0:0: [sdf] tag#7 CDB: Read(10) 28 00 e8 e0 88 a8 00 00 08 00
Dec 14 13:58:02 Sextuple kernel: blk_update_request: I/O error, dev sdf, sector 3907029160
Dec 14 13:58:02 Sextuple kernel: Buffer I/O error on dev sdf, logical block 488378645, async page read
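
(For what it's worth, that last sector number puts the drive at roughly 3907029160 x 512 bytes ≈ 2.0 TB, which lines up with it being the 2TB Seagate.)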

 

So: dead drive, the volume showed as "DEGRADED", and a small bit of panic set in (I have a full cloud AND local RND102 backup of the data).

 

Anyway, I got home, removed the drive, and reinserted/reseated it.

OS6 picked up the drive straight away - it appeared grey and needed a format, so I formatted it.

Rebuild starts
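
(If you'd rather watch the resync than wait for the email, a minimal sketch from an SSH session - same assumptions as above:)

cat /proc/mdstat              # shows "recovery = xx.x%" and an ETA while the rebuild runs
watch -n 60 cat /proc/mdstat  # or refresh it every 60 seconds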

12 hours later... everything is rebuilt and back to the way it was... except:

Hard disk temps were -1

System temp was 60+

Frontview still showed a red/white "do not enter" circle on disk 6

Couldn't shut down

Couldn't reboot

Tried rebooting from a terminal, which successfully killed the SSH daemon but left the box running.

Frontview still showed "DEGRADED"

 

Pulled the power cord, and plugged back in.
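
(In hindsight, when a clean shutdown hangs like that, there's a slightly gentler last resort than the power cord - the kernel's magic SysRq, assuming it's enabled on OS6, which I haven't verified. It still skips clean unmounts, so it's the same gamble, just with a sync first:)

sync                               # flush whatever can still be flushed
echo 1 > /proc/sys/kernel/sysrq    # enable magic SysRq if it isn't already
echo s > /proc/sysrq-trigger       # emergency sync
echo b > /proc/sysrq-trigger       # immediate reboot, no clean unmount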

 

All is back to normal

 

2 days of mucking about for absolutely nothing.  NOTE: please ensure you have a backup of your data when a ReadyNAS decides to lose a drive.

6 Replies

Replies have been turned off for this discussion
  • Preferably before the ReadyNAS decides to lose a drive 😆😆

     

    • StephenB
      Guru - Experienced User

      I'd run a disk test from the admin UI.

      • Jophus
        Luminary

        The disk test took 5 hrs 55 minutes on XRAID2 (dual redundancy) for 5x1.5TB + 1x2TB (6TB volume - 5.44TB usable) and found...

         

        NO ISSUES!!! All disks healthy.
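
        (If anyone wants the command-line version rather than the admin UI - again just a sketch, assuming smartmontools is on the box - it's the drive's built-in extended self-test:)

        smartctl -t long /dev/sdf      # kick off the extended self-test; it runs on the drive in the background
        smartctl -l selftest /dev/sdf  # check the result once it finishes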
