
Forum Discussion

BrainzUK
Aspirant
Mar 06, 2026

RN104: ghost “NG-8TB-Seagate” volume (RAID unknown) flapping Inactive/Unprotected

Hi all,

 

I have a ReadyNAS RN104 that is working fine as far as data access goes, but the volume configuration seems corrupted and is generating constant volume-health alerts that I cannot clear. I'm hoping someone familiar with the ReadyNAS OS 6 config DB can advise on a safe way to remove the ghost volume entries without wiping any data.

 

Hardware / firmware:

- Model: RN104

- OS version: 6.10.10

- Disks:

  - sda: 2 TB (NG-WDRED-2TB-1)

  - sdb: 6 TB (NG-WDRP-6TB-1)

  - sdc: 3 TB (NG-WDRED-3TB-2)

  - sdd: 8 TB (NG-8TB-Seagate) – recently replaced a failed 3 TB

 

Symptom:

In the web UI → System → Volumes I see 6 volumes even though I only have 4 disks. The top four are green JBOD volumes with data and look healthy:

- NG-8TB-Seagate (JBOD, ~7.27 TB, ~2.64 TB used)

- NG-WDRED-3TB-2 (JBOD, ~2.72 TB, ~1.17 TB used)

- NG-WDRP-6TB-1 (JBOD, ~5.45 TB, ~4.28 TB used)

- NG-WDRED-2TB-1 (JBOD, ~1.81 TB, ~0.2 TB used)

 

Below those, there are two blue entries with 0 data and “RAID unknown”:

- NG-WDRED-3TB-1 (0 data, 0 free, RAID unknown)

- NG-8TB-Seagate (0 data, 0 free, RAID unknown)

 

I believe these are stale/ghost volume records left over from the old failed 3 TB drive and a misstep when I first added the 8 TB. They show only “Disk test” and “Destroy” as options. When I try “Destroy” on the old 3 TB entry, it appears to succeed, but the entry comes straight back.

 

In the logs I constantly get messages like:

- “Volume: Volume NG-8TB-Seagate health changed from Inactive to Unprotected.”

- “Volume: Volume NG-8TB-Seagate health changed from Unprotected to Inactive.”

 

These repeat every few seconds or minutes and are clearly coming from the ghost NG-8TB-Seagate entry (the 0-data, RAID-unknown one), not from the real 8 TB JBOD volume, which is mounted and in use.
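
(Over SSH the same messages can be followed live; I'm assuming these events also land in the systemd journal, in which case something like this shows the flapping in real time:)

# follow the volume-health flapping as it happens (assumes the events reach the journal)
journalctl -f | grep "NG-8TB-Seagate"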

 

SSH diagnostics (all arrays look clean):
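
(For reference, the output below was gathered over SSH with roughly the following commands; the two .txt files are just redirected output.)

lsblk
cat /proc/mdstat
mdadm --detail --scan > /root/mdadm-detail-scan.txt
btrfs filesystem show > /root/btrfs-filesystems.txt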

 

lsblk

sda 1.8T

├─sda1 -> md0 (/)

├─sda2 -> md1 (swap)

└─sda3 -> md126 /NG-WDRED-2TB-1

sdb 5.5T

├─sdb1 -> md0 (/)

├─sdb2 -> md1 (swap)

└─sdb3 -> md127 /NG-WDRP-6TB-1

sdc 2.7T

├─sdc1 -> md0

├─sdc2 -> md1

└─sdc3 -> md125 /NG-WDRED-3TB-2

sdd 7.3T

├─sdd1 -> md0

├─sdd2 -> md1

└─sdd3 -> md124 /NG-8TB-Seagate

 

/proc/mdstat

md124 : active raid1 sdd3

7809175808 blocks super 1.2 [1/1] [U]

 

md125 : active raid1 sdc3

2925415808 blocks super 1.2 [1/1] [U]

 

md126 : active raid1 sda3

1948663808 blocks super 1.2 [1/1] [U]

 

md127 : active raid1 sdb3

5855671808 blocks super 1.2 [1/1] [U]

 

md1 : active raid10 sda2 sdd2 sdc2 sdb2[1][2][3]

1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

 

md0 : active raid1 sda1 sdb1 sdd1 sdc1[3][4][1]

4190208 blocks super 1.2 [4/4] [UUUU]

 

/root/mdadm-detail-scan.txt

ARRAY /dev/md/0 metadata=1.2 name=0e34093c:0 UUID=b1079eff:ca275c6a:4df7d648:6f176c9c

ARRAY /dev/md/1 metadata=1.2 name=0e34093c:1 UUID=9ecdbab8:7ecf3da9:299f9966:0fa46d04

ARRAY /dev/md/NG-WDRP-6TB-1-0 metadata=1.2 name=0e34093c:NG-WDRP-6TB-1-0 UUID=1d40ffff:601db1f8:20e41e54:f5650fa6

ARRAY /dev/md/NG-WDRED-2TB-1-0 metadata=1.2 name=0e34093c:NG-WDRED-2TB-1-0 UUID=d69ab251:67e359ac:16c640ee:2a0409c0

ARRAY /dev/md/NG-WDRED-3TB-2-0 metadata=1.2 name=0e34093c:NG-WDRED-3TB-2-0 UUID=1c072ab5:ea01a5d6:646d6d07:76776925

ARRAY /dev/md/NG-8TB-Seagate-0 metadata=1.2 name=0e34093c:NG-8TB-Seagate-0 UUID=4a957007:c3c04e0b:0aacb1df:3a59d9e8

 

/root/btrfs-filesystems.txt

Label: '0e34093c:NG-WDRP-6TB-1' uuid: 28fcc8ab-9e63-4529-83f4-1e9d4708bd1b

Total devices 1 FS bytes used 4.27TiB

devid 1 size 5.45TiB used 4.28TiB path /dev/md127

 

Label: '0e34093c:NG-8TB-Seagate' uuid: 2a912336-755a-48e6-bcee-fd373ae8e6df

Total devices 1 FS bytes used 2.63TiB

devid 1 size 7.27TiB used 2.64TiB path /dev/md124

 

Label: '0e34093c:NG-WDRED-3TB-2' uuid: fbd95853-4f22-4041-8583-4e0853decf9b

Total devices 1 FS bytes used 1.17TiB

devid 1 size 2.72TiB used 1.17TiB path /dev/md125

 

Label: '0e34093c:NG-WDRED-2TB-1' uuid: 1b34cda6-1cc8-4360-9ca6-4c209100aa48

Total devices 1 FS bytes used 200.08GiB

devid 1 size 1.81TiB used 220.02GiB path /dev/md126

 

So from the RAID/Btrfs point of view, everything looks consistent: four md data arrays, four Btrfs filesystems, all mounted and in use. There is no extra md device and no Btrfs filesystem corresponding to the blue “RAID unknown” ghost NG-8TB-Seagate volume.
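
(To close the loop on the above, the mount-to-array mapping can be cross-checked with plain findmnt; nothing ReadyNAS-specific, and each of the four /NG-* mount points shows one of md124–md127 as its source:)

# list the Btrfs mounts and their backing md devices
findmnt -t btrfs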

 

What I’ve tried:

- Using the GUI “Destroy” on the blue NG-WDRED-3TB-1 volume: it disappears briefly but comes back.

- Running btrfs scrub on the real NG-8TB-Seagate volume (rough command shown after this list).

- Restarting services and rebooting; the ghost entries and the Inactive/Unprotected log spam persist.
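
(The scrub mentioned above was run roughly like this, i.e. the standard btrfs-progs invocation against the real volume's mount point:)

# scrub the real 8 TB volume and wait for it to finish (-B = stay in the foreground)
btrfs scrub start -B /NG-8TB-Seagate
# check progress/results
btrfs scrub status /NG-8TB-Seagate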

 

What I’m asking for:

I’d like guidance on how to safely clean up the configuration/database so that the ghost NG-8TB-Seagate and NG-WDRED-3TB-1 volumes are removed from the ReadyNAS UI and stop generating volume-health events, without destroying the real md124/md125/md126/md127 arrays or their Btrfs filesystems.

 

I’m comfortable with SSH and sqlite3 if needed, but I don’t know the internal ReadyNAS schema, so I’d really appreciate precise instructions covering (my own rough guess at the procedure is sketched after this list):

- which DB file to open;

- which table(s)/row(s) represent these phantom volumes;

- exactly what to delete/change;

- and which services to restart afterwards.
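
To be concrete about the level of detail I'm hoping for, this is the sort of thing I imagine doing. The DB path, table names and service name here are my guesses, not something I've verified (I believe the config DB is /var/readynasd/db.sq3 and the management service is readynasd, but please correct me if that's wrong):

# back up the config DB before touching anything (path is my guess)
cp /var/readynasd/db.sq3 /root/db.sq3.bak
# poke around read-only first to find the table that holds volume records
sqlite3 /var/readynasd/db.sq3 ".tables"
# sqlite3 /var/readynasd/db.sq3 "SELECT * FROM <volume table>;"   # once I know the right table
# ...then delete/fix the two phantom rows per whatever you advise...
# and restart the management service so the UI picks up the change (service name is my guess)
systemctl restart readynasd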

 

I do have backups of the most critical data, but I’d obviously prefer not to wipe and rebuild the entire box just to clear two stale volume objects.

 

Thanks in advance for any pointers.
