

chris734
Aspirant

impossible to register my product

Hi,

 

I can't register my product because I get this message: "Serial Number already registered"

Does anyone have a solution?

Model: RN10400|ReadyNAS 100 Series 4-Bay (Diskless)
Message 1 of 24
StephenB
Guru

Re: impossible to register my product

Did you purchase it used?

Message 2 of 24
mdgm-ntgr
NETGEAR Employee Retired

Re: impossible to register my product

The warranty is for the original purchase from an authorised reseller.

 

Are you having a problem with your RN104?

Message 3 of 24
chris734
Aspirant

Re: impossible to register my product

Yes I bought it used

Message 4 of 24
chris734
Aspirant

Re: impossible to register my product

Message 5 of 24
StephenB
Guru

Re: impossible to register my product


@chris734 wrote:

Yes I bought it used


Unfortunately Netgear doesn't provide paid support for used products.

Message 6 of 24
chris734
Aspirant

Re: impossible to register my product

I am a systems engineer and I have never seen a software RAID (mdadm) deteriorate this badly, all on its own. I have 4 WD Red drives that are only 3 months old. Seriously, I think the Linux "improved" by Netgear is destructive. I'd at least like to know what disk configuration they call X-RAID on a NAS 104 with 4×4TB disks.
If I had that information, maybe I could fix it.
If I don't find a solution, this is the last time in my life I buy a Netgear product, and the last time at work too!

 

Model: RN104|ReadyNAS 100 Series
Message 7 of 24
chris734
Aspirant

Re: impossible to register my product

Nobody has a solution? I have a friend who has had a Synology for 7 years without a single problem; meanwhile, on my ReadyNAS, I've already had 3 big problems in 2 years... without support. People lose all their data because of you. I think I will buy a Synology; they are much better than you.

Message 8 of 24
mdgm-ntgr
NETGEAR Employee Retired

Re: impossible to register my product

One would need more information. You could send your logs in (see the Sending Logs link in my sig) if you like.

 

Have you tried booting to volume read-only mode?

Message 9 of 24
chris734
Aspirant

Re: impossible to register my product

Thanks for your reply,

 

I just realized that the file system is BTRFS... 😞
Could someone from the community give me the output of the command on a NAS 104 running firmware version 6.9.3? Thank you in advance.

I can't send my logs as an attachment because the .zip extension is prohibited. I haven't tried the read-only file system; how do I boot read-only?

Message 10 of 24
chris734
Aspirant

Re: impossible to register my product

You can download my log here: https://we.tl/bxBofvG978

Message 11 of 24
StephenB
Guru

Re: impossible to register my product


@chris734 wrote:

 

I can't send my logs as an attachment because the .zip extension is prohibited. I haven't tried the read-only file system; how do I boot read-only?


There is some privacy leakage if you post the logs publicly.  That's why @mdgm-ntgr directed you to the "sending logs" link in his signature.  It includes an email address for sending the logs.

 


@chris734 wrote:

how do I start read only?


The hardware manual includes this information on pages 28-29 ( http://www.downloads.netgear.com/files/GDC/READYNAS-100/ReadyNAS_%20OS6_Desktop_HM_EN.pdf )

Message 12 of 24
chris734
Aspirant

Re: impossible to register my product

Okay, thanks, I just sent you the logs.

Message 13 of 24
chris734
Aspirant

Re: impossible to register my product

The read-only boot did not give me access to my data 😞.

Could someone from the community give me the output of the following command on a NAS 104 running 6.9.3:

btrfs filesystem show

Thank you in advance to whoever helps me.

Model: RN104|ReadyNAS 100 Series
Message 14 of 24
StephenB
Guru

Re: impossible to register my product

It looks to me like this is your issue (found in dmesg.log):

[Fri Jul 13 17:50:28 2018] md: kicking non-fresh sdc5 from array!
[Fri Jul 13 17:50:28 2018] md: unbind<sdc5>
[Fri Jul 13 17:50:28 2018] md: export_rdev(sdc5)
[Fri Jul 13 17:50:28 2018] md: kicking non-fresh sda5 from array!
[Fri Jul 13 17:50:28 2018] md: unbind<sda5>

The array is out of sync, possibly due to lost writes.  Was there a power failure (or a forced shutdown of the ReadyNAS)?

 

Try entering mdadm --examine /dev/sd[a-z]5 | egrep 'Event|/dev/sd' and post back with the output you get.

 

 

Message 15 of 24
chris734
Aspirant

Re: impossible to register my product

Hi, thanks so much for your help.
My problem happened when it overheated (I think).
I could no longer connect (SSH or HTTP).
I restarted it; after the restart it only saw one full 10TB volume (when less than 1TB was actually used).
After a second restart, it sees 4 volumes.
That's where I am...
Another thing I don't understand: mdadm seems to be loaded, while Netgear seems to use BTRFS on its NAS...
Here is the output of the command:

# mdadm --examine /dev/sd[a-z]5 | egrep 'Event|/dev/sd'
/dev/sda5:
         Events : 4733
/dev/sdb5:
         Events : 4736
/dev/sdc5:
         Events : 4733
/dev/sdd5:
         Events : 4736
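The mismatch above (sda5 and sdc5 at 4733 vs. 4736) is what keeps mdadm from assembling the array automatically. As a rough sketch, the stale members can be picked out of the `--examine` output with a little awk. The sample text below just reuses the counts posted above so it runs anywhere; on the NAS you would pipe the real `mdadm --examine /dev/sd[a-z]5` output instead:

```shell
# Sample --examine output (the counts posted above); on a live NAS,
# replace this with:  mdadm --examine /dev/sd[a-z]5 | egrep 'Event|/dev/sd'
examine_output='/dev/sda5:
         Events : 4733
/dev/sdb5:
         Events : 4736
/dev/sdc5:
         Events : 4733
/dev/sdd5:
         Events : 4736'

# List members whose event count lags the maximum (i.e. the stale ones)
printf '%s\n' "$examine_output" | awk '
  /^\/dev\// { dev = $1 }
  /Events/   { ev[dev] = $3; if ($3 > max) max = $3 }
  END        { for (d in ev) if (ev[d] < max) print d, "stale by", max - ev[d], "events" }
'
```

Here it flags sda5 and sdc5 as 3 events behind, matching the members the kernel kicked as "non-fresh" in dmesg.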
Message 16 of 24
chris734
Aspirant

Re: impossible to register my product

Hi,

I have out-of-sync members on these partitions as well:

 

/dev/sd[a-d]3
/dev/sd[a-d]4
/dev/sd[a-d]5

 

Thanks in advance for your help. My problem is that I don't know the standard configuration for this NAS. If I knew that, I would have a chance to fix it. I don't understand why nobody here can give me this information.

Model: RN104|ReadyNAS 100 Series
Message 17 of 24
StephenB
Guru

Re: impossible to register my product

The NAS uses mdadm for RAID, because the BTRFS RAID features aren't production ready.  

 

I haven't attempted to manually mount a RAID array on an OS-6 system.  But I think you could try something like

# mdadm --assemble --force --scan
# btrfs device scan
# btrfs fi show
# mount -o ro /dev/md127 /data

Note I am suggesting mounting the volume as read-only until you can confirm that there is no corruption of the file system.

 

Message 18 of 24
chris734
Aspirant

Re: impossible to register my product

OK, thanks for this work. It isn't fixed yet, but it's a little better:

I used "--really-force" because --force had no effect; look at the log:

# mdadm --assemble --force --scan
mdadm: NOT forcing event count in /dev/sda5(2) from 4733 up to 4736
mdadm: You can use --really-force to do that (DANGEROUS)
mdadm: /dev/md/data-2 assembled from 2 drives - not enough to start the array.
mdadm: NOT forcing event count in /dev/sdc4(0) from 9888 up to 9891
mdadm: You can use --really-force to do that (DANGEROUS)
mdadm: /dev/md/data-1 assembled from 2 drives - not enough to start the array.
mdadm: NOT forcing event count in /dev/sdc3(1) from 27559 up to 27562
mdadm: You can use --really-force to do that (DANGEROUS)
mdadm: /dev/md/data-0 assembled from 2 drives - not enough to start the array.
mdadm: /dev/md/1_0 assembled from 1 drive - not enough to start the array.
mdadm: No arrays found in config file or automatically


btrfs device scan
Scanning for Btrfs filesystems
(return nothing)
btrfs fi show
(return nothing)
mount -o ro/dev/md127 /data
mount: can't find LABEL=0e356676:data

So I tried:

mdadm --assemble --really-force --scan

mdadm: forcing event count in /dev/sda5(2) from 4733 upto 4736
mdadm: forcing event count in /dev/sdc5(3) from 4733 upto 4736
mdadm: clearing FAULTY flag for device 3 in /dev/md/data-2 for /dev/sda5
mdadm: clearing FAULTY flag for device 1 in /dev/md/data-2 for /dev/sdc5
mdadm: Marking array /dev/md/data-2 as 'clean'
mdadm: /dev/md/data-2 has been started with 4 drives.
mdadm: forcing event count in /dev/sdc4(0) from 9888 upto 9891
mdadm: forcing event count in /dev/sda4(2) from 9888 upto 9891
mdadm: /dev/md/data-1 assembled from 4 drives - not enough to start the array.
mdadm: forcing event count in /dev/sdc3(1) from 27559 upto 27562
mdadm: forcing event count in /dev/sda3(3) from 27559 upto 27562
mdadm: clearing FAULTY flag for device 1 in /dev/md/data-0 for /dev/sdc3
mdadm: clearing FAULTY flag for device 3 in /dev/md/data-0 for /dev/sda3
mdadm: Marking array /dev/md/data-0 as 'clean'
mdadm: /dev/md/data-0 has been started with 4 drives.
mdadm: /dev/md/1_0 assembled from 1 drive - not enough to start the array.
mdadm: /dev/md/data-1 has been started with 4 drives.
mdadm: /dev/md/1_0 assembled from 1 drive - not enough to start the array.
mdadm: /dev/md/1_0 assembled from 1 drive - not enough to start the array.

root@Nas:~# btrfs device scan
Scanning for Btrfs filesystems
root@Nas:~# btrfs fi show
Label: '0e356676:data' uuid: 111e17a0-275f-492f-8efe-763b2e680abe
Total devices 3 FS bytes used 685.07GiB
devid 1 size 5.44TiB used 689.02GiB path /dev/md126
devid 2 size 2.73TiB used 0.00B path /dev/md125
devid 3 size 2.73TiB used 0.00B path /dev/md127

mount -o ro/dev/md126 /data
mount: wrong fs type, bad option, bad superblock on /dev/md125,
missing codepage or helper program, or other error

In some cases useful info is found in syslog - try
dmesg | tail or so.

dmesg :
[206091.700507] md/raid:md127: device sdb5 operational as raid disk 0
[206091.700528] md/raid:md127: device sdc5 operational as raid disk 3
[206091.700537] md/raid:md127: device sda5 operational as raid disk 2
[206091.700544] md/raid:md127: device sdd5 operational as raid disk 1
[206091.702520] md/raid:md127: allocated 4294kB
[206091.707565] md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
[206091.707583] RAID conf printout:
[206091.707590] --- level:5 rd:4 wd:4
[206091.707599] disk 0, o:1, dev:sdb5
[206091.707606] disk 1, o:1, dev:sdd5
[206091.707613] disk 2, o:1, dev:sda5
[206091.707620] disk 3, o:1, dev:sdc5
[206091.720371] md127: detected capacity change from 0 to 3000179490816
[206092.078846] BTRFS: device label 0e356676:data devid 3 transid 37485 /dev/md127
[206092.215249] BTRFS info (device md127): setting nodatasum
[206092.215274] BTRFS info (device md127): has skinny extents
[206092.269780] BTRFS error (device md127): failed to read the system array: -5
[206092.291085] BTRFS error (device md127): open_ctree failed
[206092.347978] md: md126 stopped.
[206092.421870] md: bind<sdb4>
[206092.422458] md: bind<sda4>
[206092.422979] md: bind<sdd4>
[206092.423491] md: bind<sdc4>
[206092.424578] md: md126 stopped.
[206092.424612] md: unbind<sdc4>
[206092.424637] md: export_rdev(sdc4)
[206092.424815] md: unbind<sdd4>
[206092.424834] md: export_rdev(sdd4)
[206092.424957] md: unbind<sda4>
[206092.424975] md: export_rdev(sda4)
[206092.425093] md: unbind<sdb4>
[206092.425111] md: export_rdev(sdb4)
[206092.513348] md: md126 stopped.
[206092.677339] md: bind<sdc3>
[206092.677891] md: bind<sdb3>
[206092.684506] md: bind<sda3>
[206092.685045] md: bind<sdd3>
[206092.720393] md/raid:md126: device sdd3 operational as raid disk 0
[206092.720414] md/raid:md126: device sda3 operational as raid disk 3
[206092.720422] md/raid:md126: device sdb3 operational as raid disk 2
[206092.720430] md/raid:md126: device sdc3 operational as raid disk 1
[206092.722088] md/raid:md126: allocated 4294kB
[206092.731956] md/raid:md126: raid level 5 active with 4 out of 4 devices, algorithm 2
[206092.731973] RAID conf printout:
[206092.731980] --- level:5 rd:4 wd:4
[206092.731988] disk 0, o:1, dev:sdd3
[206092.731995] disk 1, o:1, dev:sdc3
[206092.732003] disk 2, o:1, dev:sdb3
[206092.732010] disk 3, o:1, dev:sda3
[206092.753033] md126: detected capacity change from 0 to 5986298363904
[206093.258553] BTRFS: device label 0e356676:data devid 1 transid 37485 /dev/md126
[206093.382343] md: md125 stopped.
[206093.420611] md: bind<sdc2>
[206093.421723] md: md125 stopped.
[206093.421756] md: unbind<sdc2>
[206093.421780] md: export_rdev(sdc2)
[206093.741556] md: md125 stopped.
[206093.760535] md: bind<sdb4>
[206093.761170] md: bind<sda4>
[206093.765525] md: bind<sdd4>
[206093.770133] md: bind<sdc4>
[206093.793645] md/raid:md125: device sdc4 operational as raid disk 0
[206093.793664] md/raid:md125: device sdd4 operational as raid disk 3
[206093.793673] md/raid:md125: device sda4 operational as raid disk 2
[206093.793680] md/raid:md125: device sdb4 operational as raid disk 1
[206093.795355] md/raid:md125: allocated 4294kB
[206093.795736] md/raid:md125: raid level 5 active with 4 out of 4 devices, algorithm 2
[206093.795747] RAID conf printout:
[206093.795754] --- level:5 rd:4 wd:4
[206093.795761] disk 0, o:1, dev:sdc4
[206093.795768] disk 1, o:1, dev:sdb4
[206093.795776] disk 2, o:1, dev:sda4
[206093.795783] disk 3, o:1, dev:sdd4
[206093.820383] md125: detected capacity change from 0 to 3000179490816
[206094.353848] BTRFS: device label 0e356676:data devid 2 transid 37485 /dev/md125
[206094.487569] md: md124 stopped.
[206094.510461] md: bind<sdc2>
[206094.511563] md: md124 stopped.
[206094.511600] md: unbind<sdc2>
[206094.511625] md: export_rdev(sdc2)
[206094.924525] md: md124 stopped.
[206094.939646] md: bind<sdc2>
[206094.941350] md: md124 stopped.
[206094.941388] md: unbind<sdc2>
[206094.941416] md: export_rdev(sdc2)
[206118.617487] BTRFS info (device md125): setting nodatasum
[206118.617518] BTRFS info (device md125): unrecognized mount option 'ro/dev/md127'
[206118.630395] BTRFS error (device md125): open_ctree failed
[206166.628712] BTRFS info (device md125): setting nodatasum
[206166.628741] BTRFS info (device md125): unrecognized mount option 'ro/dev/md126'
[206166.640391] BTRFS error (device md125): open_ctree failed

 

Message 19 of 24
chris734
Aspirant

Re: impossible to register my product

My system has been mounted read-only since message 15 🙂

 

I write to my NAS very rarely; it just lets me save my family photos from time to time. My hard drives are only 3 months old (WD Red 4TB). Like you, I think it's a de-synchronization; I don't think I have corrupted files. I don't know how to re-sync, or how to mount a disk to get at my data.

 

Message 20 of 24
StephenB
Guru

Re: impossible to register my product

Did you try rebooting into read-only mode after you forced the event counters to be synchronized?

Message 21 of 24
mdgm-ntgr
NETGEAR Employee Retired

Re: impossible to register my product

I can see from your logs that you used to use ST2000DM001 disks. Those disks were notoriously unreliable. You would have had problems using those disks in any system.

 

If you can access your data now I would backup that data, verify the data is O.K. then do a factory reset and restore your data from backup.

 

You'd get a clean single-layer array with your new 4TB disks as opposed to the triple layer array you have at the moment.

 

Your dmesg.log shows that 2 disks dropped out of your array, sda and sdc (which are disks 1 and 2 - or 0 and 1 if counting from 0 - as you can see in disk_info.log).

 

You may wish to check the health of these disks e.g. using WD Data LifeGuard Diagnostics.
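WD Data LifeGuard runs on Windows, but if `smartctl` (from smartmontools) is available over SSH on the NAS, the same check can be approximated there. A minimal sketch: the two attributes below are the usual first things to look at, and the sample output is illustrative, not taken from this NAS:

```shell
# On the NAS, per disk, the key health indicators are the reallocated and
# pending sector counts:
#   smartctl -A /dev/sda | egrep 'Reallocated_Sector|Current_Pending'
#
# Illustrative output for a healthy disk (raw values of 0 are what you want):
sample='  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0'

# A disk with a non-zero raw value (last column) in either attribute is suspect:
printf '%s\n' "$sample" | awk '
  $NF > 0 { bad = 1; print $2, "raw value:", $NF }
  END     { if (!bad) print "no reallocated or pending sectors" }'
```

You can also kick off a self-test with `smartctl -t short /dev/sda` and read the result later with `smartctl -a /dev/sda`.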

Message 22 of 24
chris734
Aspirant

Re: impossible to register my product

@mdgm-ntgr, hello and thank you for your answer. I removed the ST2000DM001 disks several months ago for the reasons you gave.
The problem I have now happened afterwards, several months after I replaced the disks.
Right now I don't have access to the data.
If I try to mount the disks I get the following:

 mount ro/dev/md127 /data/
mount: special device ro/dev/md127 does not exist
root@Nas:~# mount -o ro/dev/md127 /data
mount: wrong fs type, bad option, bad superblock on /dev/md125,
       missing codepage or helper program, or other error

btrfs fi show
Label: '0e356676:data' uuid: 111e17a0-275f-492f-8efe-763b2e680abe
Total devices 3 FS bytes used 685.07GiB
devid 1 size 5.44TiB used 689.02GiB path /dev/md125
devid 2 size 2.73TiB used 0.00B path /dev/md126
devid 3 size 2.73TiB used 0.00B path /dev/md127

blkid
/dev/sdd1: UUID="354ec735-9304-f02f-f32d-214e04d0c697" UUID_SUB="735ba339-b9db-7f2c-589e-07506e006c4c" LABEL="0e356676:0" TYPE="linux_raid_member" PARTUUID="e778ca4b-10e7-4d7e-b31a-3663f9f95cdb"
/dev/sda1: UUID="354ec735-9304-f02f-f32d-214e04d0c697" UUID_SUB="2f838f9e-bd19-92e0-d7ee-aad51ed4ddf4" LABEL="0e356676:0" TYPE="linux_raid_member" PARTUUID="9c4e07f6-af4c-4b15-a346-8fb0d06340f0"
/dev/md0: LABEL="0e356676:root" UUID="34b6e138-f86a-4c7c-965f-17c0f1fa5d46" TYPE="ext4"
/dev/sdd2: UUID="a5917a22-a275-4637-3105-226ee484a673" UUID_SUB="a30a0142-8800-e866-25c2-f7f0b080579d" LABEL="0e356676:1" TYPE="linux_raid_member" PARTUUID="b86d621e-6d8e-41b9-8fac-5457508a29ce"
/dev/sda2: UUID="a5917a22-a275-4637-3105-226ee484a673" UUID_SUB="1bcd31c0-88f7-620a-5120-cd0970f036df" LABEL="0e356676:1" TYPE="linux_raid_member" PARTUUID="63195e24-8256-44e5-a219-ec72d988eaf8"
/dev/md1: LABEL="swap" UUID="6f3319c0-d316-4f54-81b5-5c1182d82590" TYPE="swap"
/dev/sdd3: UUID="d4c318e9-b3db-fc1d-47a6-93064fb98f23" UUID_SUB="475271d6-5e3b-2ee1-7d54-d67076bb93de" LABEL="0e356676:data-0" TYPE="linux_raid_member" PARTUUID="5bd1eabc-cfb0-4120-b064-d655f5a6ddf1"
/dev/sda3: UUID="d4c318e9-b3db-fc1d-47a6-93064fb98f23" UUID_SUB="a91602ea-8afb-aabc-8799-678b85d88a72" LABEL="0e356676:data-0" TYPE="linux_raid_member" PARTUUID="742d9b7a-599c-4752-bd99-e3fa7629a1bd"
/dev/md127: LABEL="0e356676:data" UUID="111e17a0-275f-492f-8efe-763b2e680abe" UUID_SUB="fbfcdb37-d69a-47b6-8723-4efeeb24ff3f" TYPE="btrfs"
/dev/sdd4: UUID="e41f5dee-89bd-6552-aafe-b6ca94ac3287" UUID_SUB="90637685-0a45-df47-03d4-13107bce11b7" LABEL="0e356676:data-1" TYPE="linux_raid_member" PARTUUID="13415962-0e82-4fc6-b6ba-6c9a92f93ab7"
/dev/sdd5: UUID="7c75b27f-4e7a-20ee-6b8f-c2feed2875aa" UUID_SUB="08156618-d560-4569-5fdc-e2fc8701bad1" LABEL="0e356676:data-2" TYPE="linux_raid_member" PARTUUID="0128504c-98c5-4c4b-8d39-594a5f1dda07"
/dev/sda5: UUID="7c75b27f-4e7a-20ee-6b8f-c2feed2875aa" UUID_SUB="8ef0914d-55a8-7451-f860-d621b9efcd77" LABEL="0e356676:data-2" TYPE="linux_raid_member" PARTUUID="40b0b5e2-bd56-4e76-aa07-01f82d3fdbe4"
/dev/sdc1: UUID="354ec735-9304-f02f-f32d-214e04d0c697" UUID_SUB="388f120f-2d94-d24c-903f-785c2d054af8" LABEL="0e356676:0" TYPE="linux_raid_member" PARTUUID="bec5b25d-9380-4377-bdcc-ecea2d142f15"
/dev/sdc2: UUID="cde66c3c-2805-a4da-b0f1-78ed1c8baf96" UUID_SUB="c9c5c043-ce20-6959-595c-c47aeab3b64e" LABEL="0e356676:1" TYPE="linux_raid_member" PARTUUID="ab29d6d1-3f5d-4e59-b6fa-f0fa570ddcab"
/dev/sdc3: UUID="d4c318e9-b3db-fc1d-47a6-93064fb98f23" UUID_SUB="430e7b30-c299-bb71-f52a-7e394950ffbd" LABEL="0e356676:data-0" TYPE="linux_raid_member" PARTUUID="64107262-0dcf-4841-ac63-814a911e72fa"
/dev/sdc5: UUID="7c75b27f-4e7a-20ee-6b8f-c2feed2875aa" UUID_SUB="1a839cdf-acab-9645-dda4-8e1d33a96e60" LABEL="0e356676:data-2" TYPE="linux_raid_member" PARTUUID="e40f0a56-d3e5-445a-8457-60b9c383a5f9"
/dev/sdb1: UUID="354ec735-9304-f02f-f32d-214e04d0c697" UUID_SUB="312338c5-fae3-0edf-ee49-cf9d4fa5921d" LABEL="0e356676:0" TYPE="linux_raid_member" PARTUUID="9daa9c2b-0594-4189-a870-4d48acc0339d"
/dev/sdb2: UUID="a5917a22-a275-4637-3105-226ee484a673" UUID_SUB="66171875-528d-be3a-ae0b-c88969703266" LABEL="0e356676:1" TYPE="linux_raid_member" PARTUUID="b929f4cb-3869-4663-90fc-7197fae2bdad"
/dev/sdb3: UUID="d4c318e9-b3db-fc1d-47a6-93064fb98f23" UUID_SUB="85e5c70a-fc99-86d5-f434-9ebc24988eca" LABEL="0e356676:data-0" TYPE="linux_raid_member" PARTUUID="6f8ac3f8-996d-4158-846b-34da66ebdfa4"
/dev/sdb5: UUID="7c75b27f-4e7a-20ee-6b8f-c2feed2875aa" UUID_SUB="2b4baa68-2175-ba1a-c620-b0b5316cf543" LABEL="0e356676:data-2" TYPE="linux_raid_member" PARTUUID="28c64fbb-cefb-4e96-bd5d-44a43a73f3d5"
/dev/sde2: LABEL="Bck_ReadyNas" UUID="0E584F05584EEB53" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="97c30fd7-6d98-4d10-a5d4-8bf64261aa19"
/dev/ubi0_0: UUID="7076e5d7-2743-4091-8e80-446d5762c02b" TYPE="ubifs"
/dev/sda4: UUID="e41f5dee-89bd-6552-aafe-b6ca94ac3287" UUID_SUB="85c7ba5b-f400-c93e-cc95-fa1cdae4c3da" LABEL="0e356676:data-1" TYPE="linux_raid_member" PARTUUID="88b781d8-c64d-4df0-a3b6-79ca8ae6b894"
/dev/sdc4: UUID="e41f5dee-89bd-6552-aafe-b6ca94ac3287" UUID_SUB="d46e93bf-df1d-1cea-2087-88327ebe49d3" LABEL="0e356676:data-1" TYPE="linux_raid_member" PARTUUID="8a6c2f79-6e72-406f-8e58-87d6511a8bb4"
/dev/sdb4: UUID="e41f5dee-89bd-6552-aafe-b6ca94ac3287" UUID_SUB="e08c7310-7ffd-d0e0-0205-eb4272d10a55" LABEL="0e356676:data-1" TYPE="linux_raid_member" PARTUUID="fde0a720-99f5-4637-84b6-eee48b74ee50"
/dev/md125: LABEL="0e356676:data" UUID="111e17a0-275f-492f-8efe-763b2e680abe" UUID_SUB="3fce1e99-057c-4710-ab76-25ec1decd8b7" TYPE="btrfs"
/dev/md126: LABEL="0e356676:data" UUID="111e17a0-275f-492f-8efe-763b2e680abe" UUID_SUB="544aba65-6cb4-4ecf-91ca-c1333b80da3b" TYPE="btrfs"
/dev/sde1: PARTLABEL="Microsoft reserved partition" PARTUUID="ea521a51-6000-4b47-819b-1e1be04894df"


@StephenB My system is currently booted read-only.

I really don't think one of my disks is degraded, because they are new. I think the RAID configuration information is corrupted, and I don't know how to fix it.

Translated with www.DeepL.com/Translator

Message 23 of 24
StephenB
Guru

Re: impossible to register my product


@chris734 wrote:

@StephenB My system is currently booted read-only.


Did you reboot it after you forced the event counters with mdadm?

 


@chris734 wrote:

I really don't think one of my disks is degraded, because they are new. I think the RAID configuration information is corrupted, and I don't know how to fix it.


To clarify the terminology a bit:  A RAID-5 volume is degraded when one disk is missing (or has failed).  The data is still available, thanks to the RAID redundancy.  Replacing the disk cures that problem.

 

I generally won't describe a disk as "degraded", because that creates some confusion with the degraded volume status.  Instead I will say it is "failing" or "failed".

 

In your specific case, two of the disks lost writes from the NAS - that is why the event counters didn't match.   Forcing the event counters allows the RAID array to be assembled.  But there is damage/corruption to the file system that runs on the RAID array, due to the lost writes.  How extensive that damage is depends on exactly what was lost.  A small number of writes might damage a single file, or a folder - or could compromise the entire volume (for instance, corrupting the superblock).

 

So there likely will be some need for file system repair.  It's possible that your mount method is causing the problem (I'm not seeing any btrfs commands before the mount).  That is why I'm asking if you rebooted after the event counter forcing, as the system would issue the correct commands.

 

Normally I'd expect to see

# btrfs device scan
# btrfs fi show
# mount -o ro /dev/md127 /data

 

I'm not seeing the device scan, and your mount command is malformed: there needs to be a space between ro and /dev/md127 (and your first attempt was also missing the -o entirely).

Message 24 of 24
Discussion stats
  • 23 replies
  • 3881 views
  • 0 kudos
  • 3 in conversation