Forum Discussion
pgjscottieuk
Mar 13, 2017Guide
Missing Storage Readynas 516
I am running my 516 NAS with 6 x 4TB hard drives = 24TB, in XRAID (RAID 5). My NAS tells me I have 10.11TB free space, used data 4.33TB. I thought 24TB in RAID 5 would give me 20TB of space. I started off w...
- Mar 15, 2017
After a resync, the volume has now expanded and shows the correct size of 18.07TB instead of 14.43TB.
Retired_Member
Mar 13, 2017
After downloading the logfiles you might want to check btrfs.log to see how the partitions are truly laid out; you will see something like this:
Label: 'xxx:data' uuid: yyy
Total devices 2 FS bytes used 7.59TiB
devid 1 size 5.44TiB used 3.70TiB path /dev/md127
devid 2 size 10.92TiB used 3.91TiB path /dev/md126
I have a 4x6TB RN204, which in RAID5 results in a volume of roughly 16.5TiB. That is exactly what one would expect in this case.
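For reference, that expected figure can be sanity-checked with a little arithmetic. A minimal sketch, assuming disk vendors quote decimal TB (10^12 bytes) while the NAS reports binary TiB (2^40 bytes):

```python
# Sketch: expected usable space for N disks in RAID5 (single parity).
# One disk's worth of capacity is consumed by parity; the rest is data.

def raid5_usable_tib(num_disks, disk_tb):
    usable_bytes = (num_disks - 1) * disk_tb * 10**12  # decimal TB as sold
    return usable_bytes / 2**40                        # binary TiB as reported

print(round(raid5_usable_tib(4, 6), 2))  # -> 16.37, close to the ~16.5TiB RN204 volume
print(round(raid5_usable_tib(6, 4), 2))  # -> 18.19, the raw figure for 6x4TB before overhead
```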
- pgjscottieukMar 13, 2017Guide
How do I download the logfiles?
- StephenBMar 13, 2017Guru - Experienced User
If you purchased new between 1 June 2014 and 31 May 2016, then you have lifetime (free) chat support. If you have it, then you should use it - see my.netgear.com
pgjscottieuk wrote:
How do i download the logfiles
There's a download control on the log page in the web UI. That will download the full log zipfile.
Your volume appears to be 4 TB short. There are three possibilities.
- You could be running dual redundancy
- The btrfs volume might not have expanded when the last disk was added (even if all disks are in the RAID array).
- One disk might not be part of the active RAID array (despite the status shown in your screenshot).
In the last case, one of the disks might be marked as a "spare" in the logs. Try looking through btrfs.log, mdstat.log and volume.log.
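A spare member is easy to spot in the mdstat output: /proc/mdstat marks it with an "(S)" suffix after its slot number, e.g. "sdc3[3](S)". A small sketch (the sample line is illustrative) that scans a log dump for that marker:

```python
# Sketch: scan mdstat.log text for RAID members flagged as spares.
# In /proc/mdstat output, a spare is marked "(S)" after its slot, e.g. "sdc3[3](S)".
import re

def find_spares(mdstat_text):
    return re.findall(r'(\w+)\[\d+\]\(S\)', mdstat_text)

sample = "md127 : active raid5 sde3[0] sdc3[3](S) sda3[5] sdb3[4] sdd3[2] sdf3[1]"
print(find_spares(sample))  # -> ['sdc3']
```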
- pgjscottieukMar 14, 2017Guide
I have looked through the files you mentioned and can't find anything suggesting a spare. I have attached them, thanks again for the help.
There did seem to be a spare at the end of mdstat.log, number 3?
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md127 : active raid5 sde3[0] sdc3[3](S) sda3[5] sdb3[4] sdd3[2] sdf3[1]
      15608666624 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md1 : active raid6 sdc2[0] sda2[5] sdb2[4] sdf2[3] sde2[2] sdd2[1]
      2093056 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]

md0 : active raid1 sde1[0] sda1[5] sdb1[4] sdc1[3] sdd1[2] sdf1[1]
      4192192 blocks super 1.2 [6/6] [UUUUUU]

unused devices: <none>

/dev/md/0:
        Version : 1.2
  Creation Time : Thu Feb 19 15:00:27 2015
     Raid Level : raid1
     Array Size : 4192192 (4.00 GiB 4.29 GB)
  Used Dev Size : 4192192 (4.00 GiB 4.29 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Mon Mar 13 21:05:10 2017
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0
           Name : 7c6e3734:0  (local to host 7c6e3734)
           UUID : c1aecbdd:0f2a5130:71ba0480:07a324f5
         Events : 1139

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       8       81        1      active sync   /dev/sdf1
       2       8       49        2      active sync   /dev/sdd1
       3       8       33        3      active sync   /dev/sdc1
       4       8       17        4      active sync   /dev/sdb1
       5       8        1        5      active sync   /dev/sda1

/dev/md/1:
        Version : 1.2
  Creation Time : Sat Sep 24 11:40:47 2016
     Raid Level : raid6
     Array Size : 2093056 (2044.00 MiB 2143.29 MB)
  Used Dev Size : 523264 (511.00 MiB 535.82 MB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Thu Mar 9 17:35:17 2017
          State : clean
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : 7c6e3734:1  (local to host 7c6e3734)
           UUID : 39ce8693:bda6cd12:2a906266:c669f6f5
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sdc2
       1       8       50        1      active sync   /dev/sdd2
       2       8       66        2      active sync   /dev/sde2
       3       8       82        3      active sync   /dev/sdf2
       4       8       18        4      active sync   /dev/sdb2
       5       8        2        5      active sync   /dev/sda2

/dev/md/data-0:
        Version : 1.2
  Creation Time : Thu Feb 19 15:00:27 2015
     Raid Level : raid5
     Array Size : 15608666624 (14885.58 GiB 15983.27 GB)
  Used Dev Size : 3902166656 (3721.40 GiB 3995.82 GB)
   Raid Devices : 5
  Total Devices : 6
    Persistence : Superblock is persistent
    Update Time : Mon Mar 13 21:04:44 2017
          State : clean
 Active Devices : 5
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
           Name : 7c6e3734:data-0  (local to host 7c6e3734)
           UUID : 2fb715d8:4a24e72b:57e364dc:cda58209
         Events : 22927

    Number   Major   Minor   RaidDevice State
       0       8       67        0      active sync   /dev/sde3
       1       8       83        1      active sync   /dev/sdf3
       2       8       51        2      active sync   /dev/sdd3
       4       8       19        3      active sync   /dev/sdb3
       5       8        3        4      active sync   /dev/sda3

       3       8       35        -      spare   /dev/sdc3
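The figures in that mdadm output are internally consistent with a 5-member RAID5 (sdc3 sitting idle as a spare): the Array Size equals four data members times the per-device size, and absorbing the spare as a sixth member adds one more data member's worth. A quick cross-check in Python, using the block counts (1 KiB units) from the output above:

```python
# Sketch: cross-check the mdadm figures (sizes in 1 KiB blocks).
used_dev_size_kib = 3902166656   # "Used Dev Size" per member
array_size_kib = 15608666624     # "Array Size" of the 5-member array

# RAID5 with 5 members has 4 data members:
assert array_size_kib == 4 * used_dev_size_kib

# With the spare promoted (6 members, 5 data), the array should grow to:
expected_kib = 5 * used_dev_size_kib
print(round(expected_kib / 2**30, 2))  # -> 18.17 TiB, matching the size seen after resync
```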
- Retired_MemberMar 13, 2017
On the admin web page, in the logfile tab, you can download a zipfile containing 60+ logfiles, amongst them btrfs.log.
- jak0lantashMar 13, 2017Mentor
Retired_Member
There is also an OS partition (4GB) and a swap partition (500MB I think) on every drive.
So, by order of importance:
1. RAID redundancy.
2. HDD manufacturers' "fantasy" factor of 1000, i.e. the difference between TB and TiB.
3. 4GB OS + 500MB Swap per HDD.
4. FS overhead.
I think you're right about the spare.
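Taken together, those factors roughly reproduce the numbers in this thread. A minimal sketch, assuming the 4GB OS and ~0.5GB swap partitions per disk mentioned above (filesystem overhead left out):

```python
# Sketch: usable space for 6 x 4TB in RAID5, applying the factors listed above.
DISK_BYTES = 4 * 10**12              # vendor "4TB" in decimal bytes
OS_SWAP = int(4.5 * 2**30)           # ~4.5 GiB per disk for OS + swap partitions

per_disk_data = DISK_BYTES - OS_SWAP
usable = 5 * per_disk_data           # RAID5: one of six disks' worth goes to parity
print(round(usable / 2**40, 2))      # -> 18.17 TiB, close to the 18.07TB reported
```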