

FireWalkerX
Aspirant

Unstable Ultra 6 (Case Number: 21215060)

So I bought a new ReadyNAS Ultra 6 about 22 days ago, and it freezes every so often, roughly 3 times a week.

I've changed the networking cable.
I've done a memory test (ran it for 12 hours) with no errors.
I've updated to the latest firmware (.23), and right now I've done a factory reset and am waiting for it to resync. I then looked at the logs and found the following in dmesg:

sda: unknown partition table
sda: sda1 sda2 sda3
sdb: unknown partition table
sdb: sdb1 sdb2 sdb3
sdc: unknown partition table
sdc: sdc1 sdc2 sdc3
sdd: unknown partition table
sdd: sdd1 sdd2 sdd3
md: bind<sda1>
md: bind<sdb1>
md: bind<sdc1>
md: bind<sdd1>
bio: create slab <bio-1> at 1
md/raid1:md0: not clean -- starting background reconstruction
md/raid1:md0: active with 4 out of 4 mirrors
md0: detected capacity change from 0 to 4293906432
md: resync of RAID array md0
md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
md: using 128k window, over a total of 4193268 blocks.
md: bind<sda2>
md: bind<sdb2>
md: bind<sdc2>
md: bind<sdd2>
md/raid:md1: not clean -- starting background reconstruction
md/raid:md1: device sdd2 operational as raid disk 3
md/raid:md1: device sdc2 operational as raid disk 2
md/raid:md1: device sdb2 operational as raid disk 1
md/raid:md1: device sda2 operational as raid disk 0
md/raid:md1: allocated 4272kB
md/raid:md1: raid level 6 active with 4 out of 4 devices, algorithm 2
RAID conf printout:
--- level:6 rd:4 wd:4
disk 0, o:1, dev:sda2
disk 1, o:1, dev:sdb2
disk 2, o:1, dev:sdc2
disk 3, o:1, dev:sdd2
md1: detected capacity change from 0 to 1073610752
md: delaying resync of md1 until md0 has finished (they share one or more physical units)
md: bind<sda3>
md: bind<sdb3>
md: bind<sdc3>
md: bind<sdd3>
md/raid:md2: not clean -- starting background reconstruction
md/raid:md2: device sdd3 operational as raid disk 3
md/raid:md2: device sdc3 operational as raid disk 2
md/raid:md2: device sdb3 operational as raid disk 1
md/raid:md2: device sda3 operational as raid disk 0
md/raid:md2: allocated 4272kB
md/raid:md2: raid level 5 active with 4 out of 4 devices, algorithm 2
RAID conf printout:
--- level:5 rd:4 wd:4
disk 0, o:1, dev:sda3
disk 1, o:1, dev:sdb3
disk 2, o:1, dev:sdc3
disk 3, o:1, dev:sdd3
md2: detected capacity change from 0 to 8987273330688
md: delaying resync of md2 until md0 has finished (they share one or more physical units)
md2: unknown partition table
md/raid:md2: Disk failure on sdd3, disabling device.
<1>md/raid:md2: Operation continuing on 3 devices.
md: cannot remove active disk sdd3 from md2 ...
(last message repeated ~50 times)
md: md0: resync done.
md: resync of RAID array md1
md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
md: using 128k window, over a total of 524224 blocks.
RAID conf printout:
--- level:5 rd:4 wd:3
disk 0, o:1, dev:sda3
disk 1, o:1, dev:sdb3
disk 2, o:1, dev:sdc3
disk 3, o:0, dev:sdd3
RAID conf printout:
--- level:5 rd:4 wd:3
disk 0, o:1, dev:sda3
disk 1, o:1, dev:sdb3
disk 2, o:1, dev:sdc3
md: delaying resync of md2 until md1 has finished (they share one or more physical units)
RAID1 conf printout:
--- wd:4 rd:4
disk 0, wo:0, o:1, dev:sda1
disk 1, wo:0, o:1, dev:sdb1
disk 2, wo:0, o:1, dev:sdc1
disk 3, wo:0, o:1, dev:sdd1
md: unbind<sdd3>
md: export_rdev(sdd3)
md: bind<sdd3>
md0: unknown partition table
EXT3-fs: barriers not enabled
kjournald starting. Commit interval 5 seconds
EXT3-fs (md0): using internal journal
EXT3-fs (md0): mounted filesystem with ordered data mode
md: md1: resync done.
md: resync of RAID array md2
md: minimum _guaranteed_ speed: 50000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
md: using 128k window, over a total of 2925544704 blocks.
md: md2: resync done.
RAID conf printout:
--- level:6 rd:4 wd:4
disk 0, o:1, dev:sda2
disk 1, o:1, dev:sdb2
disk 2, o:1, dev:sdc2
disk 3, o:1, dev:sdd2
RAID conf printout:
--- level:5 rd:4 wd:3
disk 0, o:1, dev:sda3
disk 1, o:1, dev:sdb3
disk 2, o:1, dev:sdc3
RAID conf printout:
--- level:5 rd:4 wd:3
disk 0, o:1, dev:sda3
disk 1, o:1, dev:sdb3
disk 2, o:1, dev:sdc3
disk 3, o:1, dev:sdd3
md: recovery of RAID array md2
md: minimum _guaranteed_ speed: 50000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for recovery.
md: using 128k window, over a total of 2925544704 blocks.
md1: unknown partition table
Adding 1048444k swap on /dev/md1. Priority:-1 extents:1 across:1048444k
EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
EXT3-fs: barriers not enabled
kjournald starting. Commit interval 5 seconds
EXT3-fs (md0): using internal journal
EXT3-fs (md0): mounted filesystem with ordered data mode
scsi: killing requests for dead queue
udevd version 125 started
ata1.00: configured for UDMA/133
ata1: EH complete
ata2.00: configured for UDMA/133
ata2: EH complete
Adding 1048444k swap on /dev/md1. Priority:-1 extents:1 across:1048444k
ata3.00: configured for UDMA/133
ata3: EH complete
ata4.00: configured for UDMA/133
ata4: EH complete
EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: acl,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv1


The part I'm wondering about is:
md/raid:md2: Disk failure on sdd3, disabling device.
<1>md/raid:md2: Operation continuing on 3 devices.
md: cannot remove active disk sdd3 from md2 ...


But it still shows up with 8.1 TB of storage, so is the disk bad or not?
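For what it's worth, the 8.1 TB figure lines up with the md2 capacity in the log (8987273330688 bytes ÷ 2^40 ≈ 8.17 TiB), and the later "md: recovery of RAID array md2" lines suggest sdd3 was added back in afterwards. A minimal sketch of how the array state could be double-checked from a root shell on the NAS (assuming root SSH access is enabled and the standard mdadm tools are present):

# Summary of all md arrays; a healthy md2 should show [UUUU] with no (F) flags
cat /proc/mdstat
# Detailed view of the data array; look for "State : clean" and sdd3 listed as active sync
mdadm --detail /dev/md2
# Kernel messages filtered for RAID/disk events since boot
dmesg | grep -i -E 'raid|md2|sdd'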

EDIT: Well, I swapped disk 4 with disk 3 but still get the same result, so the disk is apparently not bad. The question then is whether this is normal behaviour, or whether there is a problem with the hardware/enclosure. I'll do the same tests I did before to provoke the freezing: copying over 3.14 T of data...

Attached below is the SMART status for disk 4 (no errors?)

SMART Information for Disk 4

Model: WDC WD30EFRX-68AX9N0
Serial: WD-WMC1T2686153
Firmware: 80.00A80
SMART Attribute    Raw Value
Raw Read Error Rate 0
Spin Up Time 6266
Start Stop Count 13
Reallocated Sector Count 0
Seek Error Rate 0
Power On Hours 448
Spin Retry Count 0
Calibration Retry Count 0
Power Cycle Count 13
Power-Off Retract Count 11
Load Cycle Count 1
Temperature Celsius 33
Reallocated Event Count 0
Current Pending Sector 0
Offline Uncorrectable 0
UDMA CRC Error Count 0
Multi Zone Error Rate 0
ATA Error Count 0
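
(For reference, a similar attribute dump can be pulled directly with smartmontools if shell access and smartctl are available; mapping bay 4 to /dev/sdd is an assumption and should be verified first:)

# Confirm which device node corresponds to bay 4 before running anything
cat /proc/partitions
# Full SMART report: health status, attribute table, and error log
smartctl -a /dev/sdd
# Just the attribute table, as shown above
smartctl -A /dev/sdd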
Message 1 of 12
FireWalkerX
Aspirant

Re: Unstable Ultra 6

Well, it's still stable. The NAS is also on its default settings, and I'm thinking maybe having jumbo frames activated was what made it unstable...

If it keeps running for a week, I'll declare it stable, because that would be a first...
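
One quick way to sanity-check the jumbo-frame theory from a Linux client is to send full-size, non-fragmenting pings at the NAS (the 192.168.1.10 address and eth0 interface name are placeholders; 8972 = a 9000-byte MTU minus 28 bytes of IP/ICMP headers):

# Check the MTU currently configured on the client NIC
ip link show eth0
# Send non-fragmenting 9000-byte frames to the NAS; failures here point at an MTU mismatch
ping -M do -s 8972 -c 4 192.168.1.10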
Message 2 of 12
StephenB
Guru

Re: Unstable Ultra 6

Problems with jumbo frames can create a lot of odd issues, so your guess is quite plausible.

You are entitled to 90-day phone support after purchase (assuming you are not buying used), so you could try using that if you still see these lockups.
Message 3 of 12
FireWalkerX
Aspirant

Re: Unstable Ultra 6

I'm keeping it on default settings and letting it run for a week. If I do get problems, that was going to be my next move...
Message 4 of 12
FireWalkerX
Aspirant

Re: Unstable Ultra 6 (Case Number: 21215060)

Testing each disk individually... it takes a looooong time...
Message 5 of 12
Commander_Cody
Aspirant

Re: Unstable Ultra 6 (Case Number: 21215060)

Are you running both NICs?
Message 6 of 12
FireWalkerX
Aspirant

Re: Unstable Ultra 6 (Case Number: 21215060)

No, just the one. Every setting is at its default... I think I have discovered a faulty drive: one of my drives froze my NAS after 4 hours of testing, while the 2 others I have tested so far didn't... Right now I'm running the NAS with 3 drives to see if removing the seemingly unstable drive helped...
Message 7 of 12
mdgm-ntgr
NETGEAR Employee Retired

Re: Unstable Ultra 6 (Case Number: 21215060)

Hook the problem drive up to an internal SATA port in your PC and check it using WD LifeGuard Diagnostics. These tests are far more thorough than the short online SMART test run daily. Another option would be the "Disk Test" boot option to run a long offline SMART test, but connecting the disk to your PC is best. If the disk is a failing/bad one, you don't want to put it back in the NAS.

SMART tests can help give indications of impending drive failure, but they don't always pick up problem disks. The vendor tests are designed to pick up problems with disks, especially the tests that take some time.
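
If the spare PC ends up running Linux rather than Windows, roughly the same long test can be started with smartctl instead of WD LifeGuard (a sketch; /dev/sdX stands for whatever the drive enumerates as):

# Start the extended (long) offline self-test; expect several hours on a 3 TB drive
smartctl -t long /dev/sdX
# Check progress and the final result once it completes
smartctl -l selftest /dev/sdX
# Attributes that most often flag failing media
smartctl -A /dev/sdX | grep -E 'Reallocated|Pending|Uncorrect'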
Message 8 of 12
FireWalkerX
Aspirant

Re: Unstable Ultra 6 (Case Number: 21215060)

So far my testing has marked 2 of my drives as unstable and 2 as stable. Now, just to verify they are stable, I'm running the NAS with those 2 drives and testing as I've done before, moving 1.14 T to the NAS... As for testing the unstable drives in a normal PC... well, I guess I will have to dig up my old computer...
Message 9 of 12
FireWalkerX
Aspirant

Re: Unstable Ultra 6 (Case Number: 21215060)

Well, it seems like that didn't help either; even with the two drives that survived transferring 1.14 T of data, it still locks up... Going to test the Ultra 6 with a known good drive... this is driving me nuts.

More and more is pointing to a faulty Ultra 6...
Message 10 of 12
FireWalkerX
Aspirant

Re: Unstable Ultra 6 (Case Number: 21215060)

Well, I've tested with a known good disk, and I also cleared my NV+ v1 and moved its 4 2 TB disks to the Ultra... they have been stable, with no issues whatsoever, and still the Ultra 6 will freeze at random times... So the conclusion is a hardware failure somewhere in the Ultra 6.
Message 11 of 12
FireWalkerX
Aspirant

Re: Unstable Ultra 6 (Case Number: 21215060)

Device is being returned; it was declared DOA by support.
Message 12 of 12