
Forum Discussion

Pro_s
Aspirant
Dec 03, 2017
Solved

ReadyNAS Pro 6 - ERR: Used disks Check RAIDar

I have a ReadyNAS Pro Pioneer edition with six 2TB disks running OS6 that now displays "ERR: Used disks Check RAIDar" when booting.  

 

The problem appeared recently when I switched it on after a standard shutdown (not a power outage).

Is it possible to do something without factory reset?

I am not sure whether an OS reinstall is possible on legacy models running OS6, but if it could help, I am willing to try.

Thanks!

 

  • If it were my system, I'd test the disks with vendor tools in a Windows PC (SeaTools for Seagate, Data Lifeguard for Western Digital).  Replace any that fail (or that have more than 50 reallocated+pending sectors).

     

    Then do a factory reset with all disks in place.  Reconfigure the NAS and restore the data from the backup.
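On a Linux box the same counters can be read with smartmontools instead of the Windows vendor tools. A minimal sketch, assuming `smartctl` is installed; the `should_replace` helper and its use of the 50-sector threshold are just this thread's rule of thumb, not a NETGEAR tool:

```shell
#!/bin/sh
# Hypothetical helper: sum the raw values of the Reallocated_Sector_Ct and
# Current_Pending_Sector SMART attributes from `smartctl -A` output and
# apply the 50-sector threshold suggested above.
should_replace() {
    total=$(printf '%s\n' "$1" | awk '
        /Reallocated_Sector_Ct|Current_Pending_Sector/ { sum += $NF }
        END { print sum + 0 }')
    if [ "$total" -gt 50 ]; then echo replace; else echo keep; fi
}

# Example: should_replace "$(smartctl -A /dev/sda)"
```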

     

9 Replies

  • StephenB
    Guru - Experienced User

    You can use the OS reinstall from the boot menu.  I don't know if that will fix it, but it is worth a try.

    • Pro_s
      Aspirant

      Thank you for the answer.

      I tried the OS reinstall; the device rebooted and got stuck at "Booting... Checking root FS" (more than 1 hour).

       

      • bedlam1
        Prodigy

        I had a similar experience yesterday with my Pro 4.

        Powering down the NAS and then running Reinstall OS from the Boot Menu again worked successfully.

  • Telnet session in Tech Support mode:

    login: root
    Password:                                                                       
    # start_raids -v                                                                
    Found new array data-0 [c4b4e8c3:c42179a2:fed11c6a:c2a4af1]                     
    Found new array 1 [6830d0fa:a6ede353:6e4678e:be7d83]                            
    Found new array 0 [f2fc4dc:133c8eb9:4f907524:fdab7081]                          
    Checking array 0                                                                
            uuid: f2fc4dc:133c8eb9:4f907524:fdab7081                                
            hostid: 33ea412f                                                        
            raid_disks: 6                                                           
            ctime: 1497865502                                                       
            layout: 0                                                               
            size: 8380416                                                           
            chunk_size: 4247053                                                     
    Checking array 1                                                                
            uuid: 6830d0fa:a6ede353:6e4678e:be7d83                                  
            hostid: 33ea412f                                                        
            raid_disks: 6                                                           
            ctime: 1497865503                                                       
            layout: 2                                                               
            size: 1047424                                                           
            chunk_size: 4247605                                                                       
    Checking array data-0                                                           
            uuid: c4b4e8c3:c42179a2:fed11c6a:c2a4af1                                
            hostid: 33ea412f                                                        
            raid_disks: 6                                                           
            ctime: 1497865529                                                       
            layout: 2                                                               
            size: 3897329664                                                        
            chunk_size: 4247677                                                     
    Setting array data-0 to run                                                     
    Run: /sbin/mdadm -A -R -f --auto=md /dev/md/0 /dev/sdf1 /dev/sde1 /dev/sdd1 /dev/sdc1 /dev/sdb1 /dev/sda1
    mdadm: /dev/md/0 has been started with 6 drives.                                
    Run: /sbin/mdadm -A -R -f --auto=md /dev/md/1 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2 /dev/sdb2 /dev/sda2
    mdadm: /dev/md/1 has been started with 6 drives.                                
    Run: /sbin/mdadm -A -R -f --auto=md /dev/md/data-0 /dev/sdf3 /dev/sde3 /dev/sdd3 /dev/sdc3 /dev/sdb3 /dev/sda3
    mdadm: /dev/md/data-0 has been started with 6 drives.                           
    Found 0 fstab name matches, and 0 fstab mountpt matches                         
    mount: mounting LABEL=33ea412f:data on /data failed: No such file or directory  
    /bin/sh: /sbin/btrfs: not found                                                 
    # btrfs check --repair /dev/md0                                                 
    enabling repair mode                                                            
    repair mode will force to clear out log tree, Are you sure? [y/N]: y    
    Checking filesystem on /dev/md0                                                 
    UUID: ba959f04-d0d7-4cf6-9c3b-56b945f4599c                                      
    checking extents                                                                
    ref mismatch on [1003638784 8192] extent item 1, found 0                        
    attempting to repair backref discrepency for bytenr 1003638784                  
    Ref doesn't match the record start and is compressed, please take a btrfs-image of this file system and send it to a btrfs developer so they can complete this functionality for bytenr 434857640905023360
    failed to repair damaged filesystem, aborting                                   
    # btrfs filesystem show /dev/md0                                                
    Label: '33ea412f:root'  uuid: ba959f04-d0d7-4cf6-9c3b-56b945f4599c              
            Total devices 1 FS bytes used 944.80MiB                                 
            devid    1 size 4.00GiB used 1.64GiB path /dev/md0                      
    # mount /dev/md0 /sysroot
    segmentation fault
    # btrfs filesystem show /dev/md127                                              
    Label: '33ea412f:data'  uuid: 436f1a9d-4eff-4db1-8b1a-c4da4f4230e6              
            Total devices 1 FS bytes used 4.80TiB                                   
            devid    1 size 9.07TiB used 5.46TiB path /dev/md127                    
    # mount /dev/md127 /mnt/                                                                                                                                        
    # ls -la /mnt                                                                   
    drwxrwxrwx    1       222 Oct  4 17:38 .                                        
    drwxr-xr-x   20       460 Dec  4 14:09 ..                                       
    drwxrwxrwx    1       158 Oct  4 17:38 ._share                                  
    drwxrwxr-x    1       230 Dec  4 12:57 .apps                                    
    drwxrwxrwx    1         0 Jul  9 13:06 .purge                                   
    drwxr-xr-x    1         0 Jun 19 09:56 .timemachine                             
    drwxrwxrwx    1         0 Jun 19 09:45 .vault                                   
    drwxrwxrwx    1      1006 Jul 10 19:46 camfc                                    
    drwxrwxrwx    1      1006 Jul 10 19:41 camfr                                    
    drwxr-xr-x    1        12 Jun 19 09:45 home                                     
    drwxrwxrwx    1       106 Jul 17 14:20 rsyslog                                  
    drwxrwxrwx    1      9434 Nov 28 20:24 ??????????                               
    drwxrwxrwx    1        78 Oct  2 10:27 ???????????                              
    drwxrwxrwx    1       660 Jul 20 19:20 ??????                                                                                                                  

    As far as I understand, the 4GB root FS internal flash is broken (my NAS has about 6 years of uptime). Is that true? How can I solve this problem? Is it possible to boot from a flash drive in the USB port and use it for the root FS?

     

    By the way, sometimes my NAS starts OK but freezes within 1-2 minutes, so it is impossible to do anything even when it boots successfully.

     

    Thanks for the answers!

    • StephenB
      Guru - Experienced User

      Pro_s wrote:

      As far as I understand, the 4GB root FS internal flash is broken (my NAS has about 6 years of uptime). Is that true?


      I don't think your flash is damaged - if it were, you'd likely have failed to boot into tech support mode.  It's more likely that the OS partition on the disks is damaged (along with the data volume's file system).  /dev/md0 is the OS partition on the disks.

       

      You can test this by installing a scratch disk in the NAS by itself and attempting a factory reset from the boot menu.  If that works, then the problem is with the disks, not the flash.
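The array layout StephenB describes (md0 = OS partition, md1 = swap, md127/data-0 = data volume) can also be confirmed from Tech Support mode by reading /proc/mdstat. A minimal sketch; the `list_arrays` parsing helper is an illustration, not part of the NAS firmware:

```shell
#!/bin/sh
# List each md array and its member partitions from /proc/mdstat text
# (assumes the standard mdstat line format: "mdN : active LEVEL members...").
list_arrays() {
    printf '%s\n' "$1" | awk '
        /^md/ { printf "%s:", $1; for (i = 5; i <= NF; i++) printf " %s", $i; print "" }'
}

# Example: list_arrays "$(cat /proc/mdstat)"
```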

      • Pro_s
        Aspirant
        StephenB wrote:

          /dev/md0 is the OS partition on the disks.

        Exactly! You are right.

        What should I do next?

        I will make a backup, of course. And then what? My disks have 4-6 years of uptime... do I have to buy new disks?

        And why is the corruption so severe? The volume was RAID5: upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.
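The parity mechanics the post refers to can be sketched as a toy example, with single bytes standing in for whole stripe blocks (illustration only, not the md driver's actual code):

```shell
#!/bin/sh
# RAID5 parity is the XOR of the data blocks in a stripe, so any one
# missing block can be recomputed from the survivors. Note this protects
# against a single *disk* failure, not filesystem-level corruption.
d1=170 d2=85 d3=204          # "data blocks" on three disks
parity=$(( d1 ^ d2 ^ d3 ))   # parity block written to the fourth disk
lost=$(( parity ^ d2 ^ d3 )) # rebuild d1 after its disk fails
echo "$parity $lost"         # prints: 51 170
```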
