
Forum Discussion

shaneswartz
Aspirant
Nov 28, 2025

ReadyNAS Ultra 6 Plus No volume after replacing failed drive

I have a ReadyNAS Ultra 6 Plus (RNDP600U) with 6 x 3TB drives.  It is running RAIDiator version 4.2.31.

 

I thought that I had created a backup of the media and backup volumes, but either I did not complete that task or I cannot find the USB drive I was using for that.  That is my fault.

 

Recently, I found that drive #2 had failed.  I replaced it with a new drive and the system started a rebuild.  The rebuild failed.  Now when I try to access the media or backup volumes I am told there is no volume.

 

The ReadyNAS boots OK, but when I first attempt to access the WebUI for the admin area, a popup window says:

 

The paths for the shares listed below could not be found.  Typically, this occurs when the ReadyNAS is unable to access the data volume.  media backup

 

When I access the WebUI for the shares, it states, "No shares currently accessible."  I also tried reinstalling RAIDiator version 4.2.31.

 

Additionally, /dev/c/c does not exist, nor does /dev/c.

 

RAIDar shows the status as "Healthy" and all six drives as operational and OK.

 

The logs for the ReadyNAS show:

Date                          Message
Fri Nov 28 20:55:55 MST 2025  System is up.
Fri Nov 28 20:55:54 MST 2025  The paths for the shares listed below could not be found. Typically, this occurs when the ReadyNAS is unable to access the data volume. media backup
Fri Nov 28 20:55:37 MST 2025  Volume scan failed to run properly.
Fri Nov 28 13:43:25 MST 2025  The default backup button job copies the contents of the [backup] share to the USB hard drive directly attached to the front of the NAS. Please attach a USB hard drive directly to the front USB port before pressing the backup button.
Fri Nov 28 13:41:30 MST 2025  System is up.
Fri Nov 28 13:41:29 MST 2025  The paths for the shares listed below could not be found. Typically, this occurs when the ReadyNAS is unable to access the data volume. media backup
Fri Nov 28 13:41:12 MST 2025  Volume scan failed to run properly.
Fri Nov 28 12:45:27 MST 2025  Powering off device
Fri Nov 28 12:45:27 MST 2025  Please close this browser session and use RAIDar to reconnect to the device after it is powered back on. System powering off...
Tue Nov 25 22:57:39 MST 2025  System is up.
Tue Nov 25 22:57:38 MST 2025  The paths for the shares listed below could not be found. Typically, this occurs when the ReadyNAS is unable to access the data volume. media backup
Tue Nov 25 22:57:21 MST 2025  Successfully enabled root SSH access. The root password is now the same as your admin password.
Tue Nov 25 22:57:19 MST 2025  Volume scan failed to run properly.
Tue Nov 25 22:56:26 MST 2025  Please close this browser session and use RAIDar to reconnect to the device. System rebooting...
Tue Nov 25 21:27:49 MST 2025  System is up.
Tue Nov 25 21:27:48 MST 2025  The paths for the shares listed below could not be found. Typically, this occurs when the ReadyNAS is unable to access the data volume. media backup
Tue Nov 25 21:27:31 MST 2025  Volume scan failed to run properly.
Tue Nov 25 21:26:38 MST 2025  Rebooting device...
Tue Nov 25 21:26:38 MST 2025  Please close this browser session and use RAIDar to reconnect to the device. System rebooting...
Mon Nov 24 21:44:17 MST 2025  Successfully started FTP service.
Mon Nov 17 22:09:41 MST 2025  System is up.
Mon Nov 17 22:09:41 MST 2025  The paths for the shares listed below could not be found. Typically, this occurs when the ReadyNAS is unable to access the data volume. media backup
Mon Nov 17 22:09:28 MST 2025  Your ReadyNAS device has been updated with a new firmware image. (RAIDiator-x86 4.2.31)
Mon Nov 17 22:09:26 MST 2025  Volume scan failed to run properly.
Mon Nov 17 22:07:29 MST 2025  Please close this browser session and use RAIDar to reconnect to the device. System rebooting...
Mon Nov 17 21:45:32 MST 2025  Fan will be recalibrated over the next few minutes.
Mon Nov 17 21:41:07 MST 2025  HTTP service restarted.
Mon Nov 17 21:38:18 MST 2025  System is up.
Mon Nov 17 21:38:17 MST 2025  The paths for the shares listed below could not be found. Typically, this occurs when the ReadyNAS is unable to access the data volume. media backup
Mon Nov 17 21:38:00 MST 2025  Volume scan failed to run properly.
Mon Nov 17 21:36:58 MST 2025  Please close this browser session and use RAIDar to reconnect to the device. System rebooting...
Mon Nov 17 21:32:56 MST 2025  ReadyDLNA media file rescan completed.
Mon Nov 17 21:32:56 MST 2025  ReadyDLNA media file rescan started.
Mon Nov 17 21:31:52 MST 2025  Alert settings saved.
Mon Nov 17 21:30:08 MST 2025  Fan will be recalibrated over the next few minutes.
Mon Nov 17 21:16:53 MST 2025  New disk detected. If multiple disks have been added, they will be processed one at a time. Please do not remove any added disk(s) during this time. [Disk 1]
Mon Nov 17 21:16:36 MST 2025  A disk was removed from the ReadyNAS.
Mon Nov 17 21:16:36 MST 2025  Disk removal detected. [Disk 1]
Mon Nov 17 21:04:20 MST 2025  System is up.
Mon Nov 17 21:04:19 MST 2025  The paths for the shares listed below could not be found. Typically, this occurs when the ReadyNAS is unable to access the data volume. media backup
Mon Nov 17 21:04:02 MST 2025  Volume scan failed to run properly.
Wed Nov 5 08:51:12 MST 2025   System is up.
Wed Nov 5 08:51:12 MST 2025   The paths for the shares listed below could not be found. Typically, this occurs when the ReadyNAS is unable to access the data volume. media backup
Wed Nov 5 08:51:05 MST 2025   Volume scan failed to run properly.
Tue Sep 30 21:39:42 MDT 2025  System is up.
Tue Sep 30 21:39:41 MDT 2025  The paths for the shares listed below could not be found. Typically, this occurs when the ReadyNAS is unable to access the data volume. media backup
Tue Sep 30 21:38:07 MDT 2025  System is up.
Tue Sep 30 21:24:22 MDT 2025  Volume scan failed to run properly.
Tue Sep 30 19:32:32 MDT 2025  RAID sync started on volume C.
Tue Sep 30 19:32:07 MDT 2025  Data volume will be rebuilt with disk 2.
Tue Sep 30 19:30:00 MDT 2025  New disk detected. If multiple disks have been added, they will be processed one at a time. Please do not remove any added disk(s) during this time. [Disk 2]
Tue Sep 30 19:25:56 MDT 2025  A disk was removed from the ReadyNAS. One or more RAID volumes are currently unprotected, and an additional disk failure or removal may result in data loss. Please add a replacement disk as soon as possible.
Tue Sep 30 19:25:56 MDT 2025  Disk removal detected. [Disk 2]
Tue Sep 30 01:25:32 MDT 2025  Data volume will be rebuilt with disk 2.
Tue Sep 30 01:20:04 MDT 2025  System is up.
Mon Sep 29 22:05:58 MDT 2025  Fan will be recalibrated over the next few minutes.
Mon Sep 29 00:06:29 MDT 2025  The on-line filesystem consistency check completed without errors for Volume C.
Mon Sep 29 00:00:01 MDT 2025  The on-line filesystem consistency check has started for Volume C.

 

I installed the add-on that enables root SSH access to the ReadyNAS.  When I log in via SSH I see the following:

 

# ls -l /dev/md*

brw-rw---- 1 root disk 9, 0 2025-11-28 21:05 /dev/md0
brw-rw---- 1 root disk 9, 1 2025-11-28 21:05 /dev/md1

 

/dev/md:
total 0
lrwxrwxrwx 1 root root    6 2025-11-28 21:05 0 -> ../md0
lrwxrwxrwx 1 root root    6 2025-11-28 21:05 1 -> ../md1
brw------- 1 root root 9, 2 2025-11-28 21:05 2


# mdadm --detail /dev/md2
mdadm: cannot open /dev/md2: No such file or directory


# mdadm --detail /dev/md/2
mdadm: md device /dev/md/2 does not appear to be active.

 

# vgscan
  Reading all physical volumes.  This may take a while...
  No volume groups found


# vgchange -ay
  No volume groups found

 

# lvm pvs

(no output)

 

# lvm vgs

No volume groups found

 

# lvm lvs

No volume groups found

 

# mdadm /dev/md/2
/dev/md/2: is an md device which is not active

 

I was able to boot the ReadyNAS into Tech Support mode and access the command line.

 

While logged in under Tech Support mode, I see the following on the command line.

 

# ls -l /dev/md*
lrwxrwxrwx    1         4 Nov 29 03:04 /dev/md0 -> md/0
lrwxrwxrwx    1         4 Nov 29 03:04 /dev/md1 -> md/1
lrwxrwxrwx    1         4 Nov 29 03:04 /dev/md2 -> md/2

/dev/md:
brw-------    1    9,   0 Nov 29 03:04 0
brw-------    1    9,   1 Nov 29 03:04 1
brw-------    1    9,   2 Nov 29 03:04 2

 

# /bin/start_raid.sh
mdadm: /dev/md/0 has been started with 6 drives.
mdadm: /dev/md/1 has been started with 6 drives.
mdadm: failed to RUN_ARRAY /dev/md/2: Input/output error
mdadm: failed to RUN_ARRAY /dev/md/2: Input/output error

 

# vgscan -v
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Reading all physical volumes.  This may take a while...
    Finding all volume groups
  No volume groups found

 

# vgchange -ay
  No volume groups found

 

# lvm pvs

(no output)

 

# lvm vgs

No volume groups found

 

# lvm lvs

No volume groups found

 

# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.02
  Creation Time : Sun Jan 20 05:15:26 2013
     Raid Level : raid5
  Used Dev Size : -1
   Raid Devices : 6
  Total Devices : 6
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Wed Oct  1 03:38:34 2025
          State : active, degraded, Not Started
 Active Devices : 5
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           Name : 001F33EAFC49:2
           UUID : 9e31c8bc:f2765365:abe63c40:f5037492
         Events : 42539

    Number   Major   Minor   RaidDevice State
       6       8        3        0      active sync   /dev/sda3
       8       8       19        1      spare rebuilding   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       3       8       51        3      active sync   /dev/sdd3
       4       8       67        4      active sync   /dev/sde3
       7       8       83        5      active sync   /dev/sdf3

 

Why would sdb3 still be rebuilding when the ReadyNAS believes it is done and operational according to RAIDar?  The logs state that the sync of drive 2 was started and the system was up a little later, but then it could not see the volume.  Pulling drive #2 (/dev/sdb) out and putting it back in did not restart the sync.  The drive still shows as operational and working in RAIDar.
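
For reference, a standard way to check whether md is actually resyncing is to read /proc/mdstat; while a rebuild is running, it prints a recovery progress line for the array.  I am assuming RAIDiator's kernel behaves like stock Linux md here:

# cat /proc/mdstat
(while a rebuild is running, the md2 section should show a progress line such as "recovery =  8.5% (...) finish=123min")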

 

Running "mdadm --detail /dev/md0" and "mdadm - -detail /dev/md1" do not show any issues and all drive partitions are active.

 

Note: The second time I logged in under Tech Support mode, running "mdadm --detail /dev/md2" returned "mdadm: md device /dev/md2 does not appear to be active."  I cannot reproduce the earlier state in which md2 was active enough to show /dev/sdb3 rebuilding.

 

Does anyone have an idea how I can fix this and recover access to the data?  Or do I need to acquire an enclosure for the drives and attach it to a Windows system to attempt data recovery with COTS software?

 

Any help is greatly appreciated.

5 Replies


    • StephenB
      Guru - Experienced User
      shaneswartz wrote:

      Why would sdb3 still be rebuilding when the ReadyNAS believes it is done and operational according to RAIDar?  The logs state that the sync of drive 2 was started and the system was up a little later, but then it could not see the volume.  Pulling drive #2 (/dev/sdb) out and putting it back in did not restart the sync.  The drive still shows as operational and working in RAIDar.

      Hard to say what happened, but I think I would have let the array rebuild in tech support mode, and not pull and reinsert the drive.

       

      Have you tried to mount md2?
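
      Something along these lines should be safe to try; this is just a sketch, mounting read-only so nothing gets written, and /mnt/md2 is simply an empty directory to mount on:

      # mkdir -p /mnt/md2
      # mount -o ro /dev/md2 /mnt/md2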

  • I had not tried mounting md2.  I just logged back in under Tech Support mode to attempt that.  I had to run start_raid.sh first to create /dev/md[0-2].  Attempts to mount md2 and md/2 all failed with the output shown below.

    # mount /dev/md2 /mnt/md2  
    mount: mounting /dev/md2 on /mnt/md2 failed: Input/output error  
                                
    # mount /dev/md/2 /mnt/md2                                        
    mount: mounting /dev/md/2 on /mnt/md2 failed: Input/output error  
                                                                   
    # mount -t ext4 /dev/md2 /mnt/md2                              
    mount: mounting /dev/md2 on /mnt/md2 failed: Invalid argument  
     
    # mount -t ext4 /dev/md/2 /mnt/md2      
    mount: mounting /dev/md/2 on /mnt/md2 failed: Invalid argument   

     

    Do you have any experience with, or knowledge of, the 6-bay SATA enclosure made by Cenmate?  Can you recommend any other 6-bay SATA enclosures?  Also, do you have any experience with the COTS software "Stellar Data Recovery Technician" or "Hetman RAID Recovery"?

    • StephenB
      Guru - Experienced User
      shaneswartz wrote:

      # mount /dev/md2 /mnt/md2  
      mount: mounting /dev/md2 on /mnt/md2 failed: Input/output error  

      The usual sequence is 

      start_raid.sh
      mount /dev/md0 /sysroot
      mount --bind /proc /sysroot/proc
      mount --bind /dev /sysroot/dev
      mount --bind /dev/pts /sysroot/dev/pts
      mount --bind /sys /sysroot/sys
      chroot /sysroot /bin/bash

       

      You can then attempt to mount the data volume (read-only) using

      vgscan
      vgchange -a y
      mount -o ro /dev/c/c /c

  • Thanks for the updates on those commands and your assistance so far.  I was able to go through those commands, but /dev/md2 is still not active.

    Output from running "start_raid.sh":

    # start_raid.sh
    mdadm: /dev/md/0 has been started with 6 drives.
    mdadm: /dev/md/1 has been started with 6 drives.
    mdadm: failed to RUN_ARRAY /dev/md/2: Input/output error
    mdadm: failed to RUN_ARRAY /dev/md/2: Input/output error

     

    After running the mount and chroot commands, the vgscan and vgchange output is:

    # vgscan
      Reading all physical volumes.  This may take a while...
      No volume groups found

     

    # vgchange -a y
      No volume groups found

     

    The output of the last mount command using /dev/c/c is:

    # mount -o ro /dev/c/c /c
    mount: special device /dev/c/c does not exist

     

    Some interesting side notes:

     

     Running "mdadm --detail --scan" lists information for /dev/md/0 and /dev/md/1.  Output is below.

     

    # mdadm --detail --scan
    ARRAY /dev/md/0 metadata=1.2 name=001F33EAFC49:0 UUID=cb2a9985:1d9f0cff:5775d31f:ef046e9e
    ARRAY /dev/md/1 metadata=1.2 name=001F33EAFC49:1 UUID=22ca41b6:f04989b8:5e85faa3:cc9ad2e4

     

    Then running "mdadm --assemble --scan" gives similar information as I noted earlier that showed that md2 was rebuilding.  See output below.

     

    # mdadm --assemble --scan
    mdadm: /dev/md/001F33EAFC49:2 assembled from 5 drives and  1 rebuilding - not enough to start the array while not clean - consider --force.
    mdadm: No arrays found in config file or automatically

     

    Running "mdadm --examine on /dev/sd[a-f], which should be the devices for /dev/md2, shows that each one's state is "Active" and the array state is "AAAAAA".

     

    Short of using the "--force" option. which concerns me, with the command "mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3" or the command "mdadm --assemble --scan", is there any other way to try to get the rebuild to complete?  Is there any problem with using the "--force" option?

     

    Based on my experience, I believe that the volume group name for md2 is "c" and that the logical volume name is also "c".  Am I correct?
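
    If that is right, then once md2 is active, these standard LVM queries should confirm the names before any mount attempt (untested here, since md2 will not start):

    # lvm pvs   (should list /dev/md2 as a physical volume in volume group "c")
    # lvm vgs   (should show volume group "c")
    # lvm lvs   (should show logical volume "c", i.e. /dev/c/c)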

     

    If I am unable to get the rebuild to complete, I'm thinking about acquiring a Terramaster D6-320 SATA enclosure and probably the Stellar Data Recovery Technician software to try to recover the data.

     

    Thanks again for your assistance.

    • StephenB
      Guru - Experienced User
      shaneswartz wrote:


      Short of using the "--force" option. which concerns me,

      You'll need to use --force (or perhaps even --really-force).

       

      Or use RAID recovery software, which requires an enclosure, the software, and storage to offload the data.

       

      R-Studio would be less expensive than Stellar Data Recovery Technician (at the moment, you can get the R-Studio license you'd need for about $60 USD).  You can download it prior to payment, so you can see what it would recover before purchasing.
