hot swap
3 Topics

ReadyNAS104 no data after hotswap
Hi, I had the alert "Volume: Volume data health changed from Degraded to Dead." I hot-swapped the disk in bay 1 today with an 8TB drive, and now the other 3 drives are showing red and the new drive is black. (The other drives are: bay 2 - 2TB, bays 3 & 4 - 4TB; bay 1 was 4TB.) It is as if all the drives are now showing as new; I cannot see any data, and the warning is now "Remove inactive volumes #2,3,4." I really hope this doesn't mean all the data is lost. If anyone can help it would be appreciated.

Regards,
Craig
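Before accepting the "Remove inactive volumes" prompt (which is destructive), it is usually worth checking over SSH whether the underlying RAID array can still be assembled. Below is a minimal read-only diagnostic sketch, assuming the usual ReadyNAS OS 6 layout on a 4-bay unit (data partitions on sdX3, data array commonly named md127, btrfs on top); the device names are placeholders, not taken from this post.

    # List the md arrays the kernel currently sees and their member disks
    cat /proc/mdstat

    # Inspect the RAID superblock on each data partition (sdX3 is the usual
    # data partition on OS 6; adjust to match the disks actually present)
    mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

    # If a data array is assembled, check its state and which members it is missing
    # (md127 is only an example name; use whatever /proc/mdstat reports)
    mdadm --detail /dev/md127

    # Check whether the btrfs data volume is still visible to the system
    btrfs filesystem show

All of these commands are read-only; the --examine output in particular shows each member's event counter and role, which is usually enough to tell whether the array is recoverable before taking any destructive step.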
ReadyNAS Ultra 6 hot swap resync error

Hello, I am running a ReadyNAS Ultra 6 with RAIDiator 4.2.31. I have been running with all 6 disk bays full for years, and I had started to approach the capacity of the RAID array (91%). I had a new-ish spare 3TB WD Red hard drive kicking around, so I decided to hot swap it in place of an old 2TB WD Green hard drive in bay 6.

When I did so, the NAS began to restripe but quit after reporting that the HDD in bay 1 was dead. I found this suspicious, because HDD 1 had been operating fine until I removed HDD 6. Anyway, I'm pretty sure that I next tried removing and re-installing HDD 6 a second time, and I received the same error. Then I tried installing the original 2TB HDD back into bay 6, but I received an error saying "Disk attempted to add is too small." So I purchased a new 4TB drive and installed it in bay 6 tonight, but I again received an error after some amount of resyncing saying that HDD 1 was dead.

Here is the full log (note that the last resync failure with the 4TB drive in bay 6 should be listed at the top but isn't):

---
Thu Oct 12 23:46:10 PDT 2017 System is up.
Thu Oct 12 19:54:53 PDT 2017 RAID sync started on volume C.
Thu Oct 12 19:53:50 PDT 2017 Data volume will be rebuilt with disk 6.
Thu Oct 12 19:51:40 PDT 2017 New disk detected. If multiple disks have been added, they will be processed one at a time. Please do not remove any added disk(s) during this time. [Disk 6]
Thu Oct 12 19:49:08 PDT 2017 A disk was removed from the ReadyNAS. One or more RAID volumes are currently unprotected, and an additional disk failure or removal may result in data loss. Please add a replacement disk as soon as possible.
Thu Oct 12 19:49:08 PDT 2017 Disk removal detected. [Disk 6]
Thu Oct 12 10:09:38 PDT 2017 Disk attempted to add is too small. Please check the size of disk 6.
Thu Oct 12 09:18:47 PDT 2017 System is up.
Thu Oct 12 08:31:48 PDT 2017 If the failed disk is used in a RAID level 1, 5, or X-RAID volume, please note that volume is now unprotected, and an additional disk failure may render that volume dead. If this disk is a part of a RAID 6 volume, your volume is still protected if this is your first failure. A 2nd disk failure will make your volume unprotected. If this disk is a part of a RAID 10 volume, a failure of this disk's mirror partner will render the volume dead. It is recommended that you replace the failed disk as soon as possible to maintain optimal protection of your volume.
Thu Oct 12 08:31:47 PDT 2017 Disk failure detected.
Thu Oct 12 00:56:16 PDT 2017 RAID sync finished on volume C. The array is still in degraded mode, however. This can be caused by a disk sync failure or failed disks in a multi-parity disk array.
Thu Oct 12 00:53:11 PDT 2017 System is up.
Thu Oct 12 00:50:19 PDT 2017 Please close this browser session and use RAIDar to reconnect to the device. System rebooting...
Thu Oct 12 00:43:50 PDT 2017 RAID sync finished on volume C. The array is still in degraded mode, however. This can be caused by a disk sync failure or failed disks in a multi-parity disk array.
Thu Oct 12 00:39:47 PDT 2017 RAID sync started on volume C.
Thu Oct 12 00:39:24 PDT 2017 Data volume will be rebuilt with disk 6.
Thu Oct 12 00:37:13 PDT 2017 New disk detected. If multiple disks have been added, they will be processed one at a time. Please do not remove any added disk(s) during this time. [Disk 6]
Thu Oct 12 00:34:33 PDT 2017 A disk was removed from the ReadyNAS. One or more RAID volumes are currently unprotected, and an additional disk failure or removal may result in data loss. Please add a replacement disk as soon as possible.
Thu Oct 12 00:34:33 PDT 2017 Disk removal detected. [Disk 6]
---

What's strange is that HDD 1 seems to perform fine when I restart with no drive in bay 6. However, I do see that HDD 1 has a non-zero Raw Read Error Rate (30) and a non-zero Current Pending Sector count, whereas all of my other drives report zero for both. I don't know what to do, because in my current configuration I'm not redundant. But if I try to replace HDD 1 without HDD 6 present, won't I lose everything? I'm attaching a screenshot of my Status > Health page. Any advice is greatly appreciated!
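A non-zero Current Pending Sector count on HDD 1 is a plausible explanation for the pattern described above: rebuilding bay 6 forces a full read of every remaining disk, and an unreadable sector on HDD 1 can be enough for the firmware to mark that disk dead mid-sync. With root SSH access (e.g. via the EnableRootSSH add-on) this can be checked directly; the sketch below is read-only, and the array and device names (md2 for the data volume, sda for bay 1) are typical for RAIDiator 4.2 but are assumptions that should be confirmed against what /proc/mdstat actually reports.

    # Which md arrays exist and which members are active, failed, or rebuilding
    cat /proc/mdstat

    # Detail for the data volume (md2 is the usual data array on RAIDiator 4.2;
    # substitute whatever name /proc/mdstat shows)
    mdadm --detail /dev/md2

    # SMART attributes and error counters for the suspect disk in bay 1,
    # if smartctl is available on the unit
    # (sda is a placeholder; map bays to device names from the mdadm output)
    smartctl -a /dev/sda

If smartctl confirms pending or reallocated sectors on HDD 1, the safest order of operations is usually to back up whatever is still readable before attempting any further rebuilds.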
ReadyNas 3200: was running 4.2.21, then installed 4.2.27 - (now can't add a new disk to volume)

I am running RAID 6 with 6 disks (1TB each). The system was running 4.2.21, then we replaced (hot swapped) one disk at a time with 3TB disks. The hot swap and rebuild went smoothly, one disk at a time, each taking 3 hours. At the end, we tried to expand the volume size, but since snapshots were on, we had to delete the snapshots first. Once that was done, the volume expansion still would not work. Then we upgraded to 4.2.27. This still did not allow the volume to expand.

Next we decided to see if we could re-do the hot swap again. We had extra 3TB drives, so we started with one of the 6 drives. This did NOT work. The system said it would rebuild the volume with new disk 6 (the one we pulled out), but no progress ever showed in Frontview, aside from the email that did say it would rebuild (as had normally happened when we ran 4.2.21). We waited 1 day, no progress. So we rebooted, and still nothing. Disk 6 shows as dead. We tried to replace disk 6 again with another new drive, and the email was sent again saying it would rebuild the volume with disk 6, but there was no progress in Frontview.

The bottom line: no matter what we install in bay 6 to replace disk 6, it will not rebuild into the volume. This leaves the RAID 6 volume running with only one disk of redundancy, since disk 6 will not work. I suspect that disks 1-5 were formatted with 4.2.21, and when we pulled disk 6 out (i.e., when we were trying to expand the volume size), disk 6 would not join the volume since by this time we were running 4.2.27.

Even if we can't expand the volume to a larger size, my worry is that we are running RAID 6 with one drive already lost. Technically, we could lose another disk. But if I tried to rebuild the volume again, one disk at a time, my worry is that the same thing that happened to disk 6 would happen again, and then the system would be running RAID 6 with 2 drives down. My thinking is that option one would be to roll back to 4.2.21 and see if we can get disk 6 to join the volume, regardless of whether we can expand the size or not. Can someone with more expert knowledge provide feedback? Thanks.
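Before rolling back the firmware, it may help to find out why the bay-6 disk is never accepted: on 4.2.x the rebuild is plain Linux mdadm underneath, and the kernel log usually records why a hot-added disk was rejected (partitioning problems, size mismatch, read errors). Below is a read-only diagnostic sketch, assuming root SSH access and that mdadm and parted are present on the firmware image; the md2 array name is only the typical one on 4.2.x, not confirmed from this post.

    # Is any resync/recovery actually running, and which arrays exist?
    cat /proc/mdstat

    # State of the data volume (md2 is typical on 4.2.x; use the name mdstat reports)
    mdadm --detail /dev/md2

    # Recent kernel messages often show why a hot-added disk was not used
    # (partitioning problems, size mismatch, read errors)
    dmesg | tail -n 50

    # Partition layout of every disk, using a GPT-aware tool
    # (3TB members must be GPT-partitioned, which MBR cannot represent)
    parted -l

If dmesg or mdadm shows the new disk being rejected for a partitioning or size reason rather than a media error, that points to a firmware-level problem with the expansion, which would support trying the rollback to 4.2.21.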