RN104: ghost “NG-8TB-Seagate” volume (RAID unknown) flapping Inactive/Unprotected
Hi all, I have a ReadyNAS RN104 that’s working fine from the data point of view, but the volume configuration seems corrupted and is generating constant volume-health alerts that I cannot clear. I’m hoping someone familiar with the ReadyNAS OS 6 config DB can advise on a safe way to remove the ghost volume entries without wiping any data.

Hardware / firmware:
- Model: RN104
- OS version: 6.10.10
- Disks:
  - sda: 2 TB (NG-WDRED-2TB-1)
  - sdb: 6 TB (NG-WDRP-6TB-1)
  - sdc: 3 TB (NG-WDRED-3TB-2)
  - sdd: 8 TB (NG-8TB-Seagate) – recently replaced a failed 3 TB

Symptom: in the web UI → System → Volumes I see 6 volumes even though I only have 4 disks. The top four are green JBOD volumes with data and look healthy:
- NG-8TB-Seagate (JBOD, ~7.27 TB, ~2.64 TB used)
- NG-WDRED-3TB-2 (JBOD, ~2.72 TB, ~1.17 TB used)
- NG-WDRP-6TB-1 (JBOD, ~5.45 TB, ~4.28 TB used)
- NG-WDRED-2TB-1 (JBOD, ~1.81 TB, ~0.2 TB used)

Below those, there are two blue entries with 0 data and “RAID unknown”:
- NG-WDRED-3TB-1 (0 data, 0 free, RAID unknown)
- NG-8TB-Seagate (0 data, 0 free, RAID unknown)

I believe these are stale/ghost volumes left over from the old failed 3 TB drive and a misstep when I first added the 8 TB. They show only “Disk test” and “Destroy” as options. When I try “Destroy” on the old 3 TB entry, it appears to succeed, but the entry comes straight back.

In the logs I constantly get messages like:
- “Volume: Volume NG-8TB-Seagate health changed from Inactive to Unprotected.”
- “Volume: Volume NG-8TB-Seagate health changed from Unprotected to Inactive.”

These repeat every few seconds/minutes and are clearly coming from the ghost NG-8TB-Seagate entry (the 0-data, RAID-unknown one), not from the real 8 TB JBOD volume, which is mounted and in use.
SSH diagnostics (all arrays look clean):

lsblk:
sda    1.8T
├─sda1 -> md0 (/)
├─sda2 -> md1 (swap)
└─sda3 -> md126 /NG-WDRED-2TB-1
sdb    5.5T
├─sdb1 -> md0 (/)
├─sdb2 -> md1 (swap)
└─sdb3 -> md127 /NG-WDRP-6TB-1
sdc    2.7T
├─sdc1 -> md0
├─sdc2 -> md1
└─sdc3 -> md125 /NG-WDRED-3TB-2
sdd    7.3T
├─sdd1 -> md0
├─sdd2 -> md1
└─sdd3 -> md124 /NG-8TB-Seagate

/proc/mdstat:
md124 : active raid1 sdd3
      7809175808 blocks super 1.2 [1/1] [U]
md125 : active raid1 sdc3
      2925415808 blocks super 1.2 [1/1] [U]
md126 : active raid1 sda3
      1948663808 blocks super 1.2 [1/1] [U]
md127 : active raid1 sdb3
      5855671808 blocks super 1.2 [1/1] [U]
md1 : active raid10 sda2 sdd2 sdc2 sdb2
      1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sda1 sdb1 sdd1 sdc1
      4190208 blocks super 1.2 [4/4] [UUUU]

/root/mdadm-detail-scan.txt:
ARRAY /dev/md/0 metadata=1.2 name=0e34093c:0 UUID=b1079eff:ca275c6a:4df7d648:6f176c9c
ARRAY /dev/md/1 metadata=1.2 name=0e34093c:1 UUID=9ecdbab8:7ecf3da9:299f9966:0fa46d04
ARRAY /dev/md/NG-WDRP-6TB-1-0 metadata=1.2 name=0e34093c:NG-WDRP-6TB-1-0 UUID=1d40ffff:601db1f8:20e41e54:f5650fa6
ARRAY /dev/md/NG-WDRED-2TB-1-0 metadata=1.2 name=0e34093c:NG-WDRED-2TB-1-0 UUID=d69ab251:67e359ac:16c640ee:2a0409c0
ARRAY /dev/md/NG-WDRED-3TB-2-0 metadata=1.2 name=0e34093c:NG-WDRED-3TB-2-0 UUID=1c072ab5:ea01a5d6:646d6d07:76776925
ARRAY /dev/md/NG-8TB-Seagate-0 metadata=1.2 name=0e34093c:NG-8TB-Seagate-0 UUID=4a957007:c3c04e0b:0aacb1df:3a59d9e8

/root/btrfs-filesystems.txt:
Label: '0e34093c:NG-WDRP-6TB-1' uuid: 28fcc8ab-9e63-4529-83f4-1e9d4708bd1b
  Total devices 1  FS bytes used 4.27TiB
  devid 1 size 5.45TiB used 4.28TiB path /dev/md127
Label: '0e34093c:NG-8TB-Seagate' uuid: 2a912336-755a-48e6-bcee-fd373ae8e6df
  Total devices 1  FS bytes used 2.63TiB
  devid 1 size 7.27TiB used 2.64TiB path /dev/md124
Label: '0e34093c:NG-WDRED-3TB-2' uuid: fbd95853-4f22-4041-8583-4e0853decf9b
  Total devices 1  FS bytes used 1.17TiB
  devid 1 size 2.72TiB used 1.17TiB path /dev/md125
Label:
'0e34093c:NG-WDRED-2TB-1' uuid: 1b34cda6-1cc8-4360-9ca6-4c209100aa48
  Total devices 1  FS bytes used 200.08GiB
  devid 1 size 1.81TiB used 220.02GiB path /dev/md126

So from the RAID/Btrfs point of view, everything looks consistent: four md data arrays, four Btrfs filesystems, all mounted and in use. There is no extra md device and no Btrfs filesystem corresponding to the blue “RAID unknown” ghost NG-8TB-Seagate volume.

What I’ve tried:
- Using the GUI “Destroy” on the blue NG-WDRED-3TB-1 volume: it disappears briefly but comes back.
- Running btrfs scrub on the real NG-8TB-Seagate volume.
- Restarting services and rebooting; the ghost entries and the Inactive/Unprotected log spam persist.

What I’m asking for: I’d like guidance on how to safely clean up the configuration/database so that the ghost NG-8TB-Seagate and NG-WDRED-3TB-1 volumes are removed from the ReadyNAS UI and stop generating volume-health events, without destroying the real md124/md125/md126/md127 arrays or their Btrfs filesystems. I’m comfortable with SSH and sqlite3 if needed, but I don’t know the internal ReadyNAS schema, so I’d really appreciate precise instructions like:
- which DB file to open;
- which table(s)/row(s) represent these phantom volumes;
- exactly what to delete/change;
- and which services to restart afterwards.

I do have backups of the most critical data, but I’d obviously prefer not to wipe and rebuild the entire box just to clear two stale volume objects. Thanks in advance for any pointers.
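Whatever the internal schema turns out to be, a sensible first step before any sqlite3 surgery is to snapshot the DB file and only open it read-only until the right rows are identified. A minimal sketch; the DB path in the comments is a placeholder, not a confirmed ReadyNAS location:

```shell
# Back up a config database before editing it with sqlite3.
# The "real DB" path is deliberately left as a placeholder here --
# locate the actual config DB on your own unit first.
backup_db() {
    db="$1"
    bak="${db}.$(date +%Y%m%d-%H%M%S).bak"
    cp -a "$db" "$bak" || return 1
    echo "$bak"
}

# Example, once you have identified the real file:
#   backup_db /path/to/readynas-config.db
#   sqlite3 "file:/path/to/readynas-config.db?mode=ro" '.tables'   # look before touching
```

Opening with `mode=ro` (supported by the sqlite3 CLI's URI filenames) guarantees the inspection pass cannot modify anything; any actual DELETE should then be done inside an explicit transaction so it can be rolled back.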
Access ReadyNAS HDD via Win11/WSL2?
I used to have a ReadyNAS Pro 2 with two HDDs in RAID 1. I got rid of the hardware a while ago (I would say 2019/20) but held on to the HDDs. The content was (mostly?) moved to a cloud service first. Today I tried to access one of them with a SATA-USB adapter. I tried mounting it manually in WSL2, since it is from a Linux system. However, Ubuntu tells me it cannot mount it because the Linux_Raid filesystem type is unknown. The drives are likely on OS 6.2 (if it is correct that it was released in 2014). Does anyone have an idea what I can do? I looked online and in the forum but could not find a fitting answer. If there is one, you can also just redirect me there, thank you. I have no hardware for a true Linux setup, just a Surface Pro that probably will not run a USB-booted Linux, I assume. Reddit sent me here but is interested in the answers as well ;)

Thanks already for any reply.
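The "unknown filesystem" error is expected on the raw partition: on ReadyNAS OS 6-era disks the data partition is normally an mdadm RAID-1 member (typically partition 3) with Btrfs inside, so the array has to be assembled before anything is mountable. A sketch of the usual sequence, printed as a plan rather than executed, because the device names (/dev/sdb3, md127) are assumptions to confirm with `lsblk` first:

```shell
# Sketch: assemble and mount one half of a ReadyNAS OS 6 RAID-1 pair under
# WSL2/Ubuntu. Device names below are assumptions -- confirm with `lsblk`
# and `mdadm --examine /dev/sdX3` before running anything for real.
plan_mount() {
    cat <<'EOF'
sudo apt install mdadm
sudo mdadm --assemble --run /dev/md127 /dev/sdb3   # data partition is usually the 3rd
sudo mkdir -p /mnt/readynas
sudo mount -o ro -t btrfs /dev/md127 /mnt/readynas # read-only, to be safe
EOF
}
plan_mount
```

`--run` lets a single member of the mirror start degraded, and mounting read-only avoids writing to a drive you may still want to image. Note that the USB adapter also has to be attached into WSL2 first (e.g. with usbipd-win) before the disk shows up at all.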
Writing to the LCD (RN516)

Does anyone have any notes on how to enable hardware support, load kernel modules, etc., in order to obtain a workable method of accessing the displays on ReadyNAS devices from within alternative Linux OSes? Specifically I’m interested in the RN516 and Pro 6, but it would be great if we could start to share knowledge to help everyone with this particular issue.

I’ve been running various different OSes on an RN516 and have stuff like fan control sorted. I’ve also sorted out a nasty ACPI issue which hogs a large chunk of CPU power due to interrupts from IRQ9. But I’ve run out of talent so far on anything that lets me access the LCD hardware. Any pointers?

And no, there’s nothing in /dev like /dev/lcd or /dev/ttyS1 or similar that will do the job. Stuff that works inside ReadyNAS OS6 doesn’t work in other Linux OSes, presumably due to a lack of hardware support in the kernel, or a missing module. So far I’ve completely failed to work out how to resolve this. I managed it for the fan/temperature/PWM hardware, but the LCD is kicking my butt.
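When hunting for the interface a front-panel display hangs off, it can help to enumerate the usual suspects (serial ports, I²C buses, vendor character devices) in one pass. A small helper sketch; the candidate name list is a generic guess, not RN516-specific knowledge:

```shell
# List device nodes that front-panel LCDs commonly appear behind.
# The name list is a generic guess, not RN516-specific knowledge.
find_lcd_candidates() {
    dir="${1:-/dev}"
    for name in lcd lcd0 ttyS0 ttyS1 i2c-0 i2c-1; do
        [ -e "$dir/$name" ] && echo "$dir/$name"
    done
    return 0
}

find_lcd_candidates /dev
```

If an i2c-N node turns up, `i2cdetect -y N` (from i2c-tools) will show whether anything answers on that bus, which at least narrows down where the panel controller might live.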
ReadyNAS internal Backup cannot create certain directories

ReadyNAS 214, firmware 6.10.10

Dear Community,

I’ve got a problem during an internal backup to a connected USB hard drive. About 1,500 files are saved without any problem, but then there are some files the system is not able to copy. The error message in the log is:

"Cannot create directory XXX/XXX 2023/GA 3611 - 23 VU XXXX !"

I’ve got a lot of files of this kind which are all copied completely without any trouble, but 9 files have the problem and I don’t understand why. I’ve already checked whether the names of these files differ from the others, but I can’t see any problem. Maybe somebody is facing the same issue and can tell me how it was resolved.

Kind regards
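When a backup to a USB disk fails on only a handful of directories, one common culprit is a name the target filesystem rejects: trailing spaces or dots, or characters that FAT/NTFS forbids, which are legal on the NAS's Btrfs volume and invisible in most UIs. A sketch of a scan for such names; the character list assumes a FAT/NTFS-formatted target, which is a guess about this setup:

```shell
# Find names that a FAT/NTFS-formatted USB target commonly rejects:
# trailing spaces, trailing dots, and the characters \ : * ? " < > |
# (these are fine on the NAS's own Btrfs volume, hence no error there).
find_bad_names() {
    find "$1" -depth \( -name '* ' -o -name '*.' -o -name '*[\\:*?"<>|]*' \) -print
}
```

Run against the source share (e.g. `find_bad_names /data/MyShare`), this lists candidates to rename before the next backup run; note the quoted error message above ends in " !", which is consistent with a space near the end of the directory name.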
ReadyNAS RN4220 won’t boot up after shutdown from WebUI

Hello to anybody who can help,

I got a ReadyNAS RN4220 out of eWaste from some company, and it worked perfectly! I have 100 TB of NAS HDD storage in it and it has run fine for over a year now. Last week I decided to shut down my home rack gracefully, just to give it a breather in the Australian summer heat. I shut the RN4220 down from the web interface, and it powered off fine. A few days later I tried turning it back on. It spins the fans up to the highest setting and does not boot. I have tried leaving it overnight (approx. 8 hours) to no avail. It does not even show up in RAIDar. It just sits there with the power LED blinking green and the network lights blinking green, fans at full speed. If I force it off by holding down the power button, the red health LED blinks once and then the power LED goes back to amber as normal. Nothing has changed with the setup aside from me shutting it down from the web UI and turning it back on; I have done so in the past with no issues. No clue what’s changed.

Here are the debugging steps I have tried:
- I cannot do an OS reinstall or factory reset: holding down the reset button for a minute on boot does not change the LEDs at all. Per the hardware manual (https://www.downloads.netgear.com/files/GDC/RN2120/ReadyNAS_OS6_Rackmount_HWM_EN.pdf) the power LED, UI LED and health LED should all be blinking; only the power LED blinks, so I assume it just isn’t taking the reset command for some reason.
- Taking out all the disks and network cables and trying to boot: no change.
- Taking out all but one disk and booting: no change.
- Removing both power cables (it’s on wall power, not a UPS) and leaving it off overnight for a full power cycle: no change.
- Using the serial port on the back with a USB RS232 cable, a null-modem cable (it’s a female port and I needed male-to-male) and PuTTY on a Windows machine: nothing happens on the console. I tried a few different baud rates to no avail.
- Taking the top off and reseating the RAM and a few cables (not all, as I’m unsure what most of them do and they are quite tight in there): no change.
- Taking the CMOS battery out of the motherboard and leaving the machine unplugged for a few hours: no change.
- Running with just one of the power supplies on the back: the health indicator shows a red LED, but otherwise no change.

Please let me know if any additional info or photos would be useful for fixing this. I know it’s an older unit (if there are any newer replacements with at least 12 bays, preferably rack-mount, that I could just move my drives into, let me know and I can look into that too). I’ve heard you can swap drives between ReadyNAS products and they should work fine, right? As for firmware, I think I installed the latest some time last year when I got it, but I can’t access the device to confirm anymore.

I am just so confused why a simple reboot (done through the web UI!) would cause it to never boot up again...

Thanks for your time.
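On the serial-console attempt: these consoles are almost always 8 data bits, no parity, one stop bit, and a wrong baud rate usually produces garbage characters rather than total silence, so no output at any rate may point at the board rather than the settings. Still, a quick way to cycle the common rates from a Linux box with `screen` (the device path and the rate list are assumptions, and the exact rate for this model is not confirmed here):

```shell
# Print `screen` invocations for the usual console rates (8N1).
# /dev/ttyUSB0 is an assumption -- check `dmesg` after plugging in the
# USB-serial adapter to see which tty it was assigned.
for baud in 9600 19200 38400 57600 115200; do
    echo "screen /dev/ttyUSB0 $baud,cs8,-parenb,-cstopb"
done
```

The loop only prints the commands; run the one you want by hand, and detach from `screen` with Ctrl-A then `k` before trying the next rate.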