Forum Discussion
BrainzUK
Mar 06, 2026 · Aspirant
RN104: ghost “NG-8TB-Seagate” volume (RAID unknown) Inactive/Unprotected
I have a ReadyNAS RN104 that’s working fine from the data point of view, but the volume configuration seems corrupted and is generating constant volume health alerts that I cannot clear.
I’m hoping someone familiar with the ReadyNAS OS 6 config DB can advise on a safe way to remove the ghost volume entries without wiping any data.
Hardware / firmware:
• Model: RN104
• OS version: 6.10.10
• Disks:
• sda: 2 TB (NG-WDRED-2TB-1)
• sdb: 6 TB (NG-WDRP-6TB-1)
• sdc: 3 TB (NG-WDRED-3TB-2)
• sdd: 8 TB (NG-8TB-Seagate) – recently replaced a failed 3 TB
Symptom:
In the web UI → System → Volumes I see 6 volumes even though I only have 4 disks. The top four are green JBOD volumes with data and look healthy:
• NG-8TB-Seagate (JBOD, ~7.27 TB, ~2.64 TB used)
• NG-WDRED-3TB-2 (JBOD, ~2.72 TB, ~1.17 TB used)
• NG-WDRP-6TB-1 (JBOD, ~5.45 TB, ~4.28 TB used)
• NG-WDRED-2TB-1 (JBOD, ~1.81 TB, ~0.2 TB used)
Below those, there are two blue entries with 0 data and “RAID unknown”:
• NG-WDRED-3TB-1 (0 data, 0 free, RAID unknown)
• NG-8TB-Seagate (0 data, 0 free, RAID unknown)
18 Replies
- Sandshark (Sensei)
I think the EXPORT and removal before deletion of the duplicate is likely the best plan. One thing to be aware of is that the export is going to move the primary drive to another one, which is one reason I didn't suggest that right off.
If something goes sideways with the deletion, an exported drive can be put in the NAS by itself and the contents can be read.
But the big question is which one to export and which to destroy - or whether it even matters.
There are some things to be aware of regarding an EXPORT. The companion IMPORT is an automatic process when the NAS is booted with an exported drive; there is no IMPORT command. Thus, you must remove the drive before rebooting if you want it to remain exported, and to have it re-added you just insert it with power off and boot. Once a drive is imported, it's no longer marked as exported -- you have to EXPORT again if you want to remove it again. A drive not marked as EXPORTED cannot be imported. It should be mountable via SSH, but I'm not sure whether the OS would then recognize it.
- StephenB (Guru - Experienced User)
Sandshark wrote:
I think the EXPORT and removal before deletion of the duplicate is likely the best plan. One thing to be aware of is that the export is going to move the primary drive to another one, which is one reason I didn't suggest that right off.
That ship has already sailed: the new SMR Seagate is now the primary drive. Personally, I think it makes sense to shift that to one of the CMR drives.
FWIW, maybe also put a different drive in slot 1 while the NAS is powered down. Then the NAS won't be booting from the Seagate.
Sandshark wrote:
But the big question is which one to export and which to destroy,
I suggest exporting the one that says "(JBOD, ~7.27 TB, ~2.64 TB used)". Then power down, remove the Seagate, put one of the other drives in slot 1.
Reboot, and if you still see the "0 data" volume, delete it. Power down, reinstall the Seagate, and power up again.
There is some risk, so the data on the Seagate should be backed up first (at least files that are not replaceable).
- Sandshark (Sensei)
StephenB wrote:
Sandshark wrote:
I think the EXPORT and removal before deletion of the duplicate is likely the best plan. One thing to be aware of is that the export is going to move the primary drive to another one, which is one reason I didn't suggest that right off.
That ship already sailed, the new SMR Seagate is now the primary drive. Personally I think it makes sense to shift that to one of the CMR drives.
Actually, it hasn't. When you export the primary drive, it moves the primary to another one.
- Sandshark (Sensei)
Well, the spin-down possibility was worth pursuing. If the removed drive still appears in the GUI, it's safe to DESTROY it once you are sure you never intend to put the drive back in. If the drive were in better shape, I'd suggest returning it to the NAS and EXPORTing it. But given its condition, and that an EXPORT does write to it, that's probably a bad idea here.
- StephenB (Guru - Experienced User)
Sandshark wrote:
Well, the spin-down possibility was worth pursuing.
Agreed.
I'm more concerned with the rapid toggling of the health status on the new disk volume than I am about the old NG-WDRED-3TB-1 volume. It is odd that this is also listed twice - and that might be part of the problem. But deleting that duplicate is risky.
BrainzUK - You could try deleting the duplicate Seagate ("0 data") volume on the volume page, and see what happens. Or you could try exporting it, removing the disk, and then deleting the duplicate. Then power down the NAS, reinsert the Seagate, and power up again. That might be safer (note the second procedure will result in one of the remaining volumes becoming primary).
Either way, it would be safest to make a backup of the data on the Seagate first.
- Sandshark (Sensei)
Based on your and StephenB's posts, it looks like the issue with the 3TB phantom should now be gone. Is that the case? mdstat shows the right mdadm configuration for all four drives, at least at the point you made the log. But with the status toggling, that may not always be the case.
Do you have drive spin-down enabled? If so, try turning it off. It's a wild guess, but easy enough to try. If that does make a difference, then the issue may be your power supply not quite having enough power to spin up the drives in the allocated time.
StephenB, do you see any spin-down entries in the logs? And if so, do they correlate with the status toggling?
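For what it's worth, that correlation can be checked mechanically rather than by eye. The sketch below is illustrative only (the function names are mine, not ReadyNAS tooling); it assumes the noflushd and LOGMSG_HEALTH_VOLUME line formats that appear in this thread, pairs spin-down/spin-up events into time windows, and tests each health toggle against them:

```python
import re
from datetime import datetime

YEAR = 2026  # noflushd/syslog lines omit the year

def spin_windows(lines, dev):
    """Pair noflushd 'Spinning down'/'Spinning up' messages for one
    device into (down_time, up_time) windows."""
    windows, down = [], None
    pat = re.compile(r"(\w+ \d+ [\d:]+) .*Spinning (down|up) disk \d+ \(" + re.escape(dev) + r"\)")
    for line in lines:
        m = pat.match(line)
        if not m:
            continue
        t = datetime.strptime(f"{YEAR} {m.group(1)}", "%Y %b %d %H:%M:%S")
        if m.group(2) == "down":
            down = t
        elif down is not None:
            windows.append((down, t))
            down = None
    return windows

def health_toggles(lines):
    """Timestamps of volume-health messages ([yy/mm/dd hh:mm:ss GMT])."""
    stamps = []
    for line in lines:
        m = re.match(r"\[(\d\d)/(\d\d)/(\d\d) ([\d:]+) GMT\]", line)
        if m:
            yy, mm, dd, hms = m.groups()
            stamps.append(datetime.strptime(f"20{yy}-{mm}-{dd} {hms}", "%Y-%m-%d %H:%M:%S"))
    return stamps

# Two sample lines of each format, copied from this thread:
noflushd = [
    "Mar 06 15:50:19 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 4 (/dev/sda).",
    "Mar 06 16:00:46 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 4 (/dev/sda) after 0:10:25.",
]
health = [
    "[26/03/06 15:59:59 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.",
    "[26/03/06 16:01:01 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.",
]
wins = spin_windows(noflushd, "/dev/sda")
for t in health_toggles(health):
    inside = any(a <= t <= b for a, b in wins)
    print(t, "inside a spin-down window" if inside else "outside any spin-down window")
```

Run against the full system.log rather than these samples, this makes it easy to see whether the toggling tracks spin-down at all.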
- StephenB (Guru - Experienced User)
Sandshark wrote:
do you see any spin-down entries in the logs?
A few. The 6 March entries are:
Mar 06 12:09:57 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 3 (/dev/sdb).
Mar 06 12:10:02 BrAinZ-NAS104 noflushd[2321]: Spindown of disk 3 (/dev/sdb) cancelled.
Mar 06 12:37:08 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 3 (/dev/sdb).
Mar 06 12:37:11 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 2 (/dev/sdc).
Mar 06 12:37:13 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 1 (/dev/sdd).
Mar 06 12:39:37 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 4 (/dev/sda).
Mar 06 12:55:51 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 4 (/dev/sda) after 0:16:11.
Mar 06 12:55:55 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 3 (/dev/sdb) after 0:18:44.
Mar 06 12:55:55 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 2 (/dev/sdc) after 0:18:42.
Mar 06 12:56:00 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 1 (/dev/sdd) after 0:18:44.
Mar 06 13:16:01 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 4 (/dev/sda).
Mar 06 13:16:05 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 3 (/dev/sdb).
Mar 06 13:16:07 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 2 (/dev/sdc).
Mar 06 13:16:09 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 1 (/dev/sdd).
Mar 06 14:10:34 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 4 (/dev/sda) after 0:54:29.
Mar 06 14:10:39 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 2 (/dev/sdc) after 0:54:30.
Mar 06 14:30:37 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 4 (/dev/sda).
Mar 06 14:30:44 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 2 (/dev/sdc).
Mar 06 14:57:40 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 4 (/dev/sda) after 0:27:01.
Mar 06 15:17:42 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 4 (/dev/sda).
Mar 06 15:30:16 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 4 (/dev/sda) after 0:12:31.
Mar 06 15:50:19 BrAinZ-NAS104 noflushd[2321]: Spinning down disk 4 (/dev/sda).
Mar 06 16:00:46 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 4 (/dev/sda) after 0:10:25.
Mar 06 16:00:47 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 3 (/dev/sdb) after 2:44:39.
Mar 06 16:00:57 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 2 (/dev/sdc) after 1:30:11.
Mar 06 16:00:57 BrAinZ-NAS104 noflushd[2321]: Spinning up disk 1 (/dev/sdd) after 2:44:45.
Sandshark wrote:
And if so, do they correlate with the status toggling?
No. The status is toggling much faster. This snippet shows the toggling while sdd is spun down, and it continues after sdd spins up at 16:00:57.
[26/03/06 15:59:59 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:00:10 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:00:12 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:00:23 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:00:24 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:00:36 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:00:37 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:00:48 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:00:50 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:01:01 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:01:03 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:01:16 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:01:17 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:01:28 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:01:30 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:01:41 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:03:19 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:03:30 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:03:32 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:03:43 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:03:44 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:03:55 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:03:57 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:04:08 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:04:10 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/06 16:04:22 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/06 16:04:24 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
- BrainzUK (Aspirant)
Although it doesn't seem to be causing any direct issues, the old 3TB is still showing in the GUI Volumes list.
I am running a disk test on the 8TB and have also disabled the spin down options, but the errors are still there currently :(
- StephenB (Guru - Experienced User)
BrainzUK wrote:
NG-WDRED-3TB-1 (0 data, 0 free, RAID unknown)
This was the old 3 TB drive (which was primary).
[26/03/05 20:45:16 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/05 20:45:20 GMT] info:volume:LOGMSG_DELETE_VOLUME Volume NG-WDRED-3TB-1 was deleted from the system.
[26/03/05 20:45:29 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
[26/03/05 20:45:32 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
[26/03/05 20:45:34 GMT] info:volume:LOGMSG_DELETE_VOLUME_NEW_HOME_FOLDER Home folders are newly created on volume NG-8TB-Seagate.
[26/03/05 20:45:47 GMT] notice:volume:LOGMSG_HEALTH_VOLUME Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
This volume was deleted after the new Seagate was installed - I can't tell if you did that manually or if the system did it on its own.
As I mentioned in the PM, several logs are flooded with the 8 TB volume health toggling between inactive and unprotected. I can't see the underlying errors that are causing that.
If the 3 TB drive you replaced was healthy, then I'd suggest switching back to that temporarily. But it's not - over 20 thousand ATA errors, and 76 pending sectors.
I think the path forward is to replace the Seagate with an Ironwolf (though unfortunately you might be past the return window). But I think testing the drive first would be useful.
- Sandshark (Sensei)
OS6's JBOD volumes are actually MDADM single-drive RAID1 volumes where the OS simply does not tell you that the RAID is "degraded" (not redundant). It appears that something is amiss with that process on your NAS. There are some SSH commands that may resolve that, but getting the information StephenB has requested is needed to determine if those are the right ones in your case. The content of mdstat.log will be especially telling.
In addition, did you have the "phantom" 3TB volume before you swapped out the other 3TB for the current 8TB? Do you know which was your "primary" drive (normally, the oldest) before you did the swap?
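As an aside, the single-drive RAID1 layout described above is easy to confirm from /proc/mdstat. This is only an illustrative sketch (the function name and parsing are mine, not ReadyNAS tooling):

```python
import re

def jbod_arrays(mdstat_text):
    """Return {md_name: member_partition} for single-member RAID1
    arrays, which is how ReadyNAS OS 6 represents 'JBOD' volumes
    (their status shows [1/1] [U], not a degraded [2/1] [U_])."""
    arrays = {}
    lines = mdstat_text.splitlines()
    for i, line in enumerate(lines):
        m = re.match(r"(md\d+) : active raid1 (\w+)\[\d+\]$", line.strip())
        if m and i + 1 < len(lines) and "[1/1] [U]" in lines[i + 1]:
            arrays[m.group(1)] = m.group(2)
    return arrays

# Sample lines matching the mdstat.log posted in this thread;
# on the NAS itself you would pass open("/proc/mdstat").read().
sample = """\
md124 : active raid1 sdd3[0]
      7809175808 blocks super 1.2 [1/1] [U]
md0 : active raid1 sda1[0] sdb1[3] sdd1[4] sdc1[1]
      4190208 blocks super 1.2 [4/4] [UUUU]
"""
print(jbod_arrays(sample))  # {'md124': 'sdd3'}
```

Note that md0 (the four-member OS array) is correctly excluded; only the single-member data arrays are reported.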
- BrainzUK (Aspirant)
This is what the mdstat.log shows...
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md124 : active raid1 sdd3[0]
7809175808 blocks super 1.2 [1/1] [U]
md125 : active raid1 sdc3[0]
2925415808 blocks super 1.2 [1/1] [U]
md126 : active raid1 sda3[0]
1948663808 blocks super 1.2 [1/1] [U]
md127 : active raid1 sdb3[0]
5855671808 blocks super 1.2 [1/1] [U]
md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
1044480 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md0 : active raid1 sda1[0] sdb1[3] sdd1[4] sdc1[1]
4190208 blocks super 1.2 [4/4] [UUUU]
unused devices: <none>
/dev/md/0:
Version : 1.2
Creation Time : Fri Nov 6 15:01:52 2020
Raid Level : raid1
Array Size : 4190208 (4.00 GiB 4.29 GB)
Used Dev Size : 4190208 (4.00 GiB 4.29 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Fri Mar 6 16:04:19 2026
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 0e34093c:0 (local to host 0e34093c)
UUID : b1079eff:ca275c6a:4df7d648:6f176c9c
Events : 15605
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
3 8 17 3 active sync /dev/sdb1
/dev/md/1:
Version : 1.2
Creation Time : Thu Feb 12 16:44:04 2026
Raid Level : raid10
Array Size : 1044480 (1020.00 MiB 1069.55 MB)
Used Dev Size : 522240 (510.00 MiB 534.77 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Thu Mar 5 15:43:57 2026
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Consistency Policy : unknown
Name : 0e34093c:1 (local to host 0e34093c)
UUID : 9ecdbab8:7ecf3da9:299f9966:0fa46d04
Events : 19
Number Major Minor RaidDevice State
0 8 2 0 active sync set-A /dev/sda2
1 8 18 1 active sync set-B /dev/sdb2
2 8 34 2 active sync set-A /dev/sdc2
3 8 50 3 active sync set-B /dev/sdd2
/dev/md/NG-8TB-Seagate-0:
Version : 1.2
Creation Time : Thu Feb 12 16:44:07 2026
Raid Level : raid1
Array Size : 7809175808 (7447.41 GiB 7996.60 GB)
Used Dev Size : 7809175808 (7447.41 GiB 7996.60 GB)
Raid Devices : 1
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Fri Mar 6 11:18:02 2026
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 0e34093c:NG-8TB-Seagate-0 (local to host 0e34093c)
UUID : 4a957007:c3c04e0b:0aacb1df:3a59d9e8
Events : 2
Number Major Minor RaidDevice State
0 8 51 0 active sync /dev/sdd3
/dev/md/NG-WDRED-2TB-1-0:
Version : 1.2
Creation Time : Fri Nov 6 20:50:32 2020
Raid Level : raid1
Array Size : 1948663808 (1858.39 GiB 1995.43 GB)
Used Dev Size : 1948663808 (1858.39 GiB 1995.43 GB)
Raid Devices : 1
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Fri Mar 6 12:10:00 2026
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 0e34093c:NG-WDRED-2TB-1-0 (local to host 0e34093c)
UUID : d69ab251:67e359ac:16c640ee:2a0409c0
Events : 2
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
/dev/md/NG-WDRED-3TB-2-0:
Version : 1.2
Creation Time : Fri Nov 6 20:49:32 2020
Raid Level : raid1
Array Size : 2925415808 (2789.89 GiB 2995.63 GB)
Used Dev Size : 2925415808 (2789.89 GiB 2995.63 GB)
Raid Devices : 1
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Fri Mar 6 12:10:00 2026
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 0e34093c:NG-WDRED-3TB-2-0 (local to host 0e34093c)
UUID : 1c072ab5:ea01a5d6:646d6d07:76776925
Events : 2
Number Major Minor RaidDevice State
0 8 35 0 active sync /dev/sdc3
/dev/md/NG-WDRP-6TB-1-0:
Version : 1.2
Creation Time : Tue Jul 18 15:55:20 2023
Raid Level : raid1
Array Size : 5855671808 (5584.40 GiB 5996.21 GB)
Used Dev Size : 5855671808 (5584.40 GiB 5996.21 GB)
Raid Devices : 1
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Fri Mar 6 11:18:27 2026
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : unknown
Name : 0e34093c:NG-WDRP-6TB-1-0 (local to host 0e34093c)
UUID : 1d40ffff:601db1f8:20e41e54:f5650fa6
Events : 2
Number Major Minor RaidDevice State
0 8 19 0 active sync /dev/sdb3
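Cross-checking the UI volume list against the mdadm output above makes the ghost entries explicit: the blue "RAID unknown" volumes have no backing array of their own. A small sketch (volume names copied from this thread; the trailing "-0" is the RAID-group suffix ReadyNAS appends to array names):

```python
from collections import Counter

# The six volumes shown on the Volumes page (four green, two blue "RAID unknown").
ui_volumes = [
    "NG-8TB-Seagate", "NG-WDRED-3TB-2", "NG-WDRP-6TB-1", "NG-WDRED-2TB-1",
    "NG-WDRED-3TB-1", "NG-8TB-Seagate",
]

# Data-array names from the mdadm --detail output above.
md_arrays = ["NG-8TB-Seagate-0", "NG-WDRED-2TB-1-0", "NG-WDRED-3TB-2-0", "NG-WDRP-6TB-1-0"]
backed = {name[: -len("-0")] for name in md_arrays}

counts = Counter(ui_volumes)
for name, n in counts.items():
    if name not in backed:
        print(f"{name}: no mdadm array at all -> ghost entry")
    elif n > 1:
        print(f"{name}: listed {n}x in the UI but only one array -> duplicate ghost")
```

This reproduces what the thread concludes: NG-WDRED-3TB-1 has no array behind it, and NG-8TB-Seagate appears twice in the UI with only one real array.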
- BrainzUK (Aspirant)
Thanks, just creating the log zip now.
The "phantom" 3TB volume, I'm guessing, is actually my old 3TB Western Digital drive that started to fail. I replaced that with my new 8TB Seagate and copied all the old files over to it (apart from a handful I lost due to the impending failure).
When I was setting up the new 8TB I think I did something wrong as I thought I had created a new volume, then couldn't see it so repeated the process. Not sure what I did, but that may have something to do with it?
I'm not sure what my "primary" drive would be - how could I see that info please?
- StephenB (Guru - Experienced User)
BrainzUK wrote:
I'm not sure what my "primary" drive would be, how could I see that info please?
First, what it is...
The "primary" RAID group (a single drive, since you use JBOD) is the one that hosts your apps and home folders.
In your case, mounts.log shows
/dev/md124 on /home type btrfs (rw,noatime,nodiratime,nospace_cache,subvolid=2962,subvol=/home)
/dev/md124 on /apps type btrfs (rw,noatime,nodiratime,nospace_cache,subvolid=2961,subvol=/.apps)
mdstat.log shows md124 is sdd3 (partition 3 on drive sdd)
md124 : active raid1 sdd3[0]
7809175808 blocks super 1.2 [1/1] [U]
And disk_info.log tells us that this is your newly added 8 TB drive.
Device: sdd
Controller: 0
Channel: 0
Model: ST8000DM004-2U9188
Serial: WSC3064D
One consequence here is that any home folders and apps would have been lost when you replaced this drive.
Note this is an SMR drive, which isn't well suited to RAID. Fortunately you are using JBOD. Still, you will likely see poor performance with sustained writes.
An Ironwolf would have been a better choice.
More later.
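The three lookups above (mounts.log gives the md device, mdstat.log gives its member partition, and the partition name gives the drive) can be condensed into a few lines. A sketch using the exact snippets quoted in this post:

```python
import re

mounts_log = """\
/dev/md124 on /home type btrfs (rw,noatime,nodiratime,nospace_cache,subvolid=2962,subvol=/home)
/dev/md124 on /apps type btrfs (rw,noatime,nodiratime,nospace_cache,subvolid=2961,subvol=/.apps)
"""

mdstat_log = """\
md124 : active raid1 sdd3[0]
      7809175808 blocks super 1.2 [1/1] [U]
"""

# 1) Which md device hosts /home (i.e. the primary RAID group)?
md = re.search(r"/dev/(md\d+) on /home ", mounts_log).group(1)

# 2) Which partition is that single-drive array built from?
member = re.search(rf"{md} : active raid1 (\w+)\[", mdstat_log).group(1)

# 3) Drop the partition number to get the drive name.
drive = re.match(r"[a-z]+", member).group(0)

print(md, member, drive)  # md124 sdd3 sdd
```

On an OS 6 NAS the same information comes from `mount` and `/proc/mdstat` over SSH; here the log snippets stand in for those.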
- StephenB (Guru - Experienced User)
BrainzUK wrote:
sdd: 8 TB (NG-8TB-Seagate) – recently replaced a failed 3 TB
What process did you use?
I suggest downloading the full log zip from the logs page, and looking at the details. If you need help sorting out the logs, you can put the zip in cloud storage and send me a private message (PM) with a link. Make sure the permissions are set so anyone with the link can download. You can send a PM using the envelope link in the upper right hand corner of forum.
- BrainzUK (Aspirant)
These are the errors that keep constantly showing:
Mar 06, 2026 11:23:08 AM Volume: Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
Mar 06, 2026 11:23:07 AM Volume: Volume NG-8TB-Seagate health changed from Inactive to Unprotected.
Mar 06, 2026 11:22:56 AM Volume: Volume NG-8TB-Seagate health changed from Unprotected to Inactive.
Mar 06, 2026 11:22:54 AM Volume: Volume NG-8TB-Seagate health changed from Inactive to Unprotected.