Hard Disk
RN104: ghost “NG-8TB-Seagate” volume (RAID unknown) flapping Inactive/Unprotected
Hi all,

I have a ReadyNAS RN104 that’s working fine from the data point of view, but the volume configuration seems corrupted and is generating constant volume-health alerts that I cannot clear. I’m hoping someone familiar with the ReadyNAS OS 6 config DB can advise on a safe way to remove the ghost volume entries without wiping any data.

Hardware / firmware:
- Model: RN104
- OS version: 6.10.10
- Disks:
  - sda: 2 TB (NG-WDRED-2TB-1)
  - sdb: 6 TB (NG-WDRP-6TB-1)
  - sdc: 3 TB (NG-WDRED-3TB-2)
  - sdd: 8 TB (NG-8TB-Seagate) – recently replaced a failed 3 TB

Symptom: in the web UI → System → Volumes I see 6 volumes even though I only have 4 disks. The top four are green JBOD volumes with data and look healthy:
- NG-8TB-Seagate (JBOD, ~7.27 TB, ~2.64 TB used)
- NG-WDRED-3TB-2 (JBOD, ~2.72 TB, ~1.17 TB used)
- NG-WDRP-6TB-1 (JBOD, ~5.45 TB, ~4.28 TB used)
- NG-WDRED-2TB-1 (JBOD, ~1.81 TB, ~0.2 TB used)

Below those, there are two blue entries with 0 data and “RAID unknown”:
- NG-WDRED-3TB-1 (0 data, 0 free, RAID unknown)
- NG-8TB-Seagate (0 data, 0 free, RAID unknown)

I believe these are stale/ghost volumes from the old failed 3 TB drive and some misstep when I first added the 8 TB. They show only “Disk test” and “Destroy” as options. When I try “Destroy” on the old 3 TB entry, it appears to succeed, but the entry comes straight back.

In the logs I constantly get messages like:
- “Volume: Volume NG-8TB-Seagate health changed from Inactive to Unprotected.”
- “Volume: Volume NG-8TB-Seagate health changed from Unprotected to Inactive.”

These repeat every few seconds/minutes and are clearly coming from the ghost NG-8TB-Seagate entry (the 0-data, RAID-unknown one), not the real 8 TB JBOD volume, which is mounted and in use.
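Before touching any configuration database, it's worth a quick cross-check that every active data array has exactly one matching Btrfs filesystem and vice versa, so you know the ghosts exist only in the UI layer. This is a minimal sketch that compares saved copies of the two outputs collected in this post; the file names `mdstat.txt` and `btrfs-show.txt` are placeholders for wherever you saved the `cat /proc/mdstat` and `btrfs filesystem show` output.

```shell
#!/bin/sh
# Cross-check md data arrays against Btrfs filesystems.
# md0 (root) and md1 (swap) are OS arrays, so they are skipped.

# Data array names from a saved copy of /proc/mdstat
mdstat_arrays() {
    awk '/^md/ && $1 != "md0" && $1 != "md1" { print $1 }' "$1" | sort
}

# md devices backing Btrfs filesystems, from saved `btrfs filesystem show` output
btrfs_arrays() {
    awk '/path \/dev\/md/ { sub(".*/dev/", "", $NF); print $NF }' "$1" | sort
}

if [ -f mdstat.txt ] && [ -f btrfs-show.txt ]; then
    mdstat_arrays mdstat.txt     > /tmp/md.list
    btrfs_arrays  btrfs-show.txt > /tmp/btrfs.list
    if diff /tmp/md.list /tmp/btrfs.list; then
        echo "consistent: every data array has exactly one filesystem"
    fi
fi
```

If the two lists match (here: md124, md125, md126, md127 on both sides), the phantom volumes live purely in the ReadyNAS config layer, which is consistent with the diagnostics already gathered below.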
SSH diagnostics (all arrays look clean):

lsblk:
sda 1.8T
├─sda1 -> md0 (/)
├─sda2 -> md1 (swap)
└─sda3 -> md126 /NG-WDRED-2TB-1
sdb 5.5T
├─sdb1 -> md0 (/)
├─sdb2 -> md1 (swap)
└─sdb3 -> md127 /NG-WDRP-6TB-1
sdc 2.7T
├─sdc1 -> md0
├─sdc2 -> md1
└─sdc3 -> md125 /NG-WDRED-3TB-2
sdd 7.3T
├─sdd1 -> md0
├─sdd2 -> md1
└─sdd3 -> md124 /NG-8TB-Seagate

/proc/mdstat:
md124 : active raid1 sdd3
      7809175808 blocks super 1.2 [1/1] [U]
md125 : active raid1 sdc3
      2925415808 blocks super 1.2 [1/1] [U]
md126 : active raid1 sda3
      1948663808 blocks super 1.2 [1/1] [U]
md127 : active raid1 sdb3
      5855671808 blocks super 1.2 [1/1] [U]
md1 : active raid10 sda2 sdd2 sdc2 sdb2[1][2][3]
      1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sda1 sdb1 sdd1 sdc1[3][4][1]
      4190208 blocks super 1.2 [4/4] [UUUU]

/root/mdadm-detail-scan.txt:
ARRAY /dev/md/0 metadata=1.2 name=0e34093c:0 UUID=b1079eff:ca275c6a:4df7d648:6f176c9c
ARRAY /dev/md/1 metadata=1.2 name=0e34093c:1 UUID=9ecdbab8:7ecf3da9:299f9966:0fa46d04
ARRAY /dev/md/NG-WDRP-6TB-1-0 metadata=1.2 name=0e34093c:NG-WDRP-6TB-1-0 UUID=1d40ffff:601db1f8:20e41e54:f5650fa6
ARRAY /dev/md/NG-WDRED-2TB-1-0 metadata=1.2 name=0e34093c:NG-WDRED-2TB-1-0 UUID=d69ab251:67e359ac:16c640ee:2a0409c0
ARRAY /dev/md/NG-WDRED-3TB-2-0 metadata=1.2 name=0e34093c:NG-WDRED-3TB-2-0 UUID=1c072ab5:ea01a5d6:646d6d07:76776925
ARRAY /dev/md/NG-8TB-Seagate-0 metadata=1.2 name=0e34093c:NG-8TB-Seagate-0 UUID=4a957007:c3c04e0b:0aacb1df:3a59d9e8

/root/btrfs-filesystems.txt:
Label: '0e34093c:NG-WDRP-6TB-1' uuid: 28fcc8ab-9e63-4529-83f4-1e9d4708bd1b
  Total devices 1 FS bytes used 4.27TiB
  devid 1 size 5.45TiB used 4.28TiB path /dev/md127
Label: '0e34093c:NG-8TB-Seagate' uuid: 2a912336-755a-48e6-bcee-fd373ae8e6df
  Total devices 1 FS bytes used 2.63TiB
  devid 1 size 7.27TiB used 2.64TiB path /dev/md124
Label: '0e34093c:NG-WDRED-3TB-2' uuid: fbd95853-4f22-4041-8583-4e0853decf9b
  Total devices 1 FS bytes used 1.17TiB
  devid 1 size 2.72TiB used 1.17TiB path /dev/md125
Label: '0e34093c:NG-WDRED-2TB-1' uuid: 1b34cda6-1cc8-4360-9ca6-4c209100aa48
  Total devices 1 FS bytes used 200.08GiB
  devid 1 size 1.81TiB used 220.02GiB path /dev/md126

So from the RAID/Btrfs point of view, everything looks consistent: four md data arrays, four Btrfs filesystems, all mounted and in use. There is no extra md device and no Btrfs filesystem corresponding to the blue “RAID unknown” ghost NG-8TB-Seagate volume.

What I’ve tried:
- Using the GUI “Destroy” on the blue NG-WDRED-3TB-1 volume: it disappears briefly but comes back.
- Running btrfs scrub on the real NG-8TB-Seagate volume.
- Restarting services and rebooting; the ghost entries and the Inactive/Unprotected log spam persist.

What I’m asking for: guidance on how to safely clean up the configuration/database so that the ghost NG-8TB-Seagate and NG-WDRED-3TB-1 volumes are removed from the ReadyNAS UI and stop generating volume-health events, without destroying the real md124/md125/md126/md127 arrays or their Btrfs filesystems. I’m comfortable with SSH and sqlite3 if needed, but I don’t know the internal ReadyNAS schema, so I’d really appreciate precise instructions like:
- which DB file to open;
- which table(s)/row(s) represent these phantom volumes;
- exactly what to delete/change;
- and which services to restart afterwards.

I do have backups of the most critical data, but I’d obviously prefer not to wipe and rebuild the entire box just to clear two stale volume objects. Thanks in advance for any pointers.

Cannot access data on my RN104
I have 4 x 4 TB drives in my NAS, and all are showing solid blue lights on the physical device. The Performance screen in the ReadyNAS app shows green for all 4 drives, performing as expected for temperature etc. But the Volumes screen is odd: all 4 drives are red in the image of the NAS. Why is it asking me to remove inactive volumes? And should I remove the 10.9 TB volume or the other one?

My laptop still sees the NAS and the partitions/folders I created on it, but cannot access the data; it comes up with an error message saying it cannot reconnect because the local device name is already in use. How do I resolve this? I have a lifetime of photos and other data stored on the NAS (I'm assuming the data hasn't been lost and I just can't access it?). Many thanks to whoever helps. :)

Transfer the hard drive from a failing ReadyNAS 312? Planning for inevitable failure..
I've got an RN312, firmware 6.10.10. I kept extras in case any failed, and I think one is on its way out. It hums a bit louder than normal (and louder than the others I have), and causes interference with the sound from the computer. The ReadyNAS has two 20 TB Toshiba hard drives, which have functioned perfectly. I'm wondering if anyone has successfully just transferred a working hard drive from one ReadyNAS 312 to another: would it just boot up normally, as long as the firmware matches? Otherwise I'd have to reinstall a backup to a different 312, which of course takes f-o-r-e-v-e-r. I'll leave this up in case anyone knows the answer; if not, I will try it with a much smaller drive (500 MB) first to see if it's feasible. Thanks everyone.

Oh, for anyone who doesn't know: the 20 TB CMR Toshiba drives work just fine. Nothing special needed; just install the drives, plug the NAS in, and it goes to work all by itself just perfectly, as usual. I have two others working like this one.

Problem with a drive bay on a NAS 10400
Hello, I have a problem with my RN10400 NAS: bay 3 no longer powers my disk. When I take a disk that works in bays 1, 2 or 4 and put it in bay 3, I can see that bay 3 does not power it, so the disk is not recognized in bay 3. Has anyone had this problem before? Thanks.

524X Can't factory reset, help
Using a 524X with firmware 6.10.9. Finally had a disk fail. The disk test option from the front view reported an error, but no reason was given, and SMART status wasn't showing errors. I tried to resync it just in case, but that just led to a corrupted volume, so then I did a boot-menu disk test, which ran all the way through and reported one disk with failures. I pulled that disk out and attempted a factory reset with the 3 remaining disks (which have some corrupted volume state). The factory reset from the boot menu fails, and RAIDar gives a message about a corrupt root.

One of the disks showed up as RED in the front view, and I saw a message about removing the other two in order to use it, or something. Anyway, I tried removing that disk and did a factory reset successfully with just two disks, and that worked fine. However, it then resets as RAID 1 rather than RAID 5. I tried to "format" the one showing up as red; it didn't seem to make any difference, or maybe it didn't work, I can't remember now. Eventually I figured out it was still showing the old corrupted volume, so I managed to destroy that volume and then it no longer displayed as red. However, doing another factory reset with all three drives, hoping for RAID 5, fails, and RAIDar says corrupt root.

What is the secret method to completely wipe all three disks and factory reset this thing to a clean slate? As I understand it, ReadyNAS OS is doing something to prevent me from wiping it, as protection against accidentally losing important data. But in this case I want to wipe it completely and start over. Is there no way to do this?

ReadyNAS 316 additional drives
I have two 3.0 TB WD Red drives (WD30EFRX, September 2014) in my 6-bay RN316. It's time to expand my data capacity, and I was hoping someone could advise on my best options. Would I need to buy another pair of the same drives to double capacity? Or can I install a single higher-capacity drive? Thanks, Max

ReadyNAS Pro 6 RNDP6000-200 on OS6 - What is the Maximum Hard Drive Size?
Hey everyone, I have a ReadyNAS Pro 6 running the latest version of OS 6, and I was wondering if anyone knew what the maximum hard drive capacity is. I'm currently running 6x 3 TB drives, though I'm looking at potentially purchasing 6x 10 TB or 12 TB drives, and I wanted to make sure this would work. I'm uncertain whether my NAS is the original RNDP6000 or the V2; unsure if that matters. I've upgraded the NAS to a Core 2 Duo CPU and upped the RAM to 4 GB as well. Thanks in advance!

netgear readynas 1100
Hello, I am new here. I have a NETGEAR ReadyNAS 1100 with firmware RAIDiator 4.1.16 [1.00a147]. I have 3 x 2 TB Seagate ST2000DL003-9VT166 [1863 GB] drives, and all are healthy. When I install them, it sees two of the three; the other one is blinking yellow. I read that it goes up to 4 TB, so I put in 6 TB of disks to get 4 TB with RAID 5. Is any solution available?
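Since several of these threads turn on the same arithmetic, here is the rule of thumb behind "put in 6 TB to get 4 TB": RAID 5 spends one disk's worth of space on parity, and every member counts as the size of the smallest disk. A tiny sketch of that calculation (plain arithmetic, not a ReadyNAS tool):

```shell
#!/bin/sh
# Usable RAID 5 capacity: (number of disks - 1) * size of the smallest disk.
raid5_usable_tb() {
    disks="$1"        # number of member disks
    smallest_tb="$2"  # smallest disk size, in TB
    echo $(( (disks - 1) * smallest_tb ))
}

raid5_usable_tb 3 2   # 3 x 2 TB drives -> prints 4 (TB usable)
```

So three 2 TB drives (6 TB raw) give 4 TB usable, matching the figure quoted above; whether a given RAIDiator 4.1 unit recognizes a particular drive model is a separate firmware-support question.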