Volume expansion question
2012-10-13 03:10 AM
We are planning to upgrade 6 of our 12 x 1TB disks to 2TB. We're running our ReadyNAS 4200 in X-RAID2 mode.
What net disk capacity are we going to obtain, and are we going to experience some sort of problem with the 16 TB volume limitation?
Thanks
Ilker
Message 1 of 10
2012-10-13 03:36 AM
Re: Volume expansion question
You have either 10 or 11 TB now, depending on whether you are running single or dual redundancy.
After the upgrade you'd have either 14 or 15 TB (again depending on the redundancy you've chosen).
So you shouldn't hit the 16 TB limit.
However, there is a second limit: you can't expand more than 8 TB from your starting point. To know whether that limit applies, you'd need to know the number of disks in your initial install (or at the most recent factory default).
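For readers wondering where these numbers come from: X-RAID2 with mixed disk sizes stacks one RAID layer per disk-size step, and each layer gives up one disk's worth of space per level of redundancy. A minimal sketch in Python, assuming decimal TB and ignoring the small OS partitions (`xraid_capacity_tb` is a hypothetical helper for this post, not a NETGEAR tool):

```python
def xraid_capacity_tb(disks_tb, redundancy=2):
    """Approximate X-RAID2 usable capacity in decimal TB.

    X-RAID2 builds one RAID layer per distinct disk-size step: the base
    layer spans every disk, and each additional layer spans only the
    disks big enough to contribute.  Each layer sacrifices `redundancy`
    disks' worth of space for parity.
    """
    capacity, prev = 0.0, 0.0
    for size in sorted(set(disks_tb)):
        step = size - prev                           # extra space per disk in this layer
        members = sum(1 for d in disks_tb if d >= size)
        if members > redundancy:                     # layer must outnumber its parity disks
            capacity += step * (members - redundancy)
        prev = size
    return capacity

print(xraid_capacity_tb([1] * 12, redundancy=2))           # 10.0 TB today, dual redundancy
print(xraid_capacity_tb([2] * 6 + [1] * 6, redundancy=2))  # 14.0 TB after the upgrade
print(xraid_capacity_tb([2] * 6 + [1] * 6, redundancy=1))  # 16.0 TB with single redundancy
```

The single-redundancy result lands exactly on the 16 TB figure given in the correction later in the thread.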
Message 2 of 10
2012-10-13 03:38 AM
Re: Volume expansion question
Take a look at the XRAID Volume Size Calculator (see link in my sig).
Most likely your volume would be configured using X-RAID2 dual-redundancy (uses RAID-6). Even if not (using single-redundancy certainly wouldn't be recommended with this config) you still wouldn't hit that 16TB limit.
Also note there's an 8TB limit for online volume expansion over the life of the volume. Did you start out with all 12x1TB disks installed? If so you wouldn't hit that limit either.
Edit: beaten.
Message 3 of 10
2012-10-13 04:13 AM
Re: Volume expansion question
StephenB wrote: After the upgrade you'd have either 14 or 15 TB (again depending on the redundancy you've chosen).
Oops, the correct answer for single redundancy is 16 TB. This doesn't change the overall response though: you still won't exceed the 16 TB threshold, but might exceed the 8 TB growth limit.
Note that these figures are approximate, and in decimal (1000-based) units, not the binary (1024-based) units used by Frontview and most PCs. The calculator will be more accurate.
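To make the decimal-versus-binary distinction concrete, here's a one-liner (the function name is made up for this example):

```python
def tb_to_tib(tb):
    """Convert decimal terabytes (10**12 bytes) to binary tebibytes (2**40 bytes)."""
    return tb * 10**12 / 2**40

# The 16 TB volume limit, expressed the way a binary-units display would show it:
print(round(tb_to_tib(16), 2))  # 14.55
```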
Message 4 of 10
2012-12-10 07:20 PM
Re: Volume expansion question
Is there any way to tell what the original volume size was? I am not certain which drives I put in originally. I'm concerned that if I replace too many more drives with larger ones I'll end up over the 8TB expansion limit.
Message 5 of 10
2013-04-10 08:35 PM
Re: Volume expansion question
I've been waiting 4 months for a reply to my question. Can anyone help?
Message 6 of 10
2013-04-11 04:38 PM
Re: Volume expansion question
There is nothing in the logs that restates the original size of the volume when you first did a factory default. I usually keep track of the sizes in a notepad or something and save them for later reference. There are rotated logs, but they don't go back far enough, as the OS only has 4 GB of space so as to maximize the size of the disks. Anyhow, if you did start with 12 disks of 1 TB each, you'd be under the limit. Over ssh (be careful not to delete anything), go into /var/log and /var/log/frontview and check out the files, especially the dated ones. I doubt they go back as far as your factory default date; that date is recorded in initrd.log in /var/log/frontview.
Message 7 of 10
2013-04-11 04:45 PM
Re: Volume expansion question
I poked around on this myself on my Pro-6, and didn't see anything obvious. Somehow the system knows (otherwise expansion wouldn't fail when the limit was reached), but I didn't see any clear indications when I looked at the file system details.
Message 8 of 10
2013-04-11 05:18 PM
Re: Volume expansion question
StephenB wrote: I poked around on this myself on my Pro-6, and didn't see anything obvious. Somehow the system knows (otherwise expansion wouldn't fail when the limit was reached), but I didn't see any clear indications when I looked at the file system details.
Yes, this was my thought too. There has to be something other than the logs to record the original size. It can't just fail from "The Great Mysterious Curse of Expansion", an insidious rite performed upon initialisation, one that only the Gods are privy to!
Message 9 of 10
2013-07-09 07:21 AM
Re: Volume expansion question
I found the logs as part of the download logs, 'expansion.log'. It showed I had 4 x 1.5TB drives installed on initialisation so, in theory, I should be able to expand to 12.5TB. However, I have just replaced the 2nd to last of the 1.5TB drives so that I have 5 x 3TB and I'm now hitting a "Volume Expansion Failed" error.
It is showing an error in the expansion.log
[2013/07/09 20:56:01 2114] Boot, handle mutiple expand_md
[2013/07/09 20:56:02 2115] STAGE_CHECK: saved my_pid 2115 in check mode.
[2013/07/09 20:56:02 2115] RAID MODE: 1, sn=
[2013/07/09 20:56:03 2115] LINE 4591: exec command: lvs > /var/log/frontview/.V_E.snapshotstat
[2013/07/09 20:56:03 2115] Current file system status: ext4
[2013/07/09 20:56:03 2115] LINE 5229: exec command: rm -fr /var/log/frontview/.V_E.*
[2013/07/09 20:56:05 2115] X_level: 5
[2013/07/09 20:56:05 2115] /usr/sbin/expand_md
[2013/07/09 20:56:05 2115] MD degraded 0, bg_job 0, boot 1
[2013/07/09 20:56:05 2115] Read 1957072 byte from configuration
[2013/07/09 20:56:05 2115] Disk configuration matching with online configuration.
[2013/07/09 20:56:05 2115] Should do expand now...
[2013/07/09 20:56:05 2115] ===== Partition Entry (MD not used) =====
[2013/07/09 20:56:05 2115] ===== Partition Entry (Used by MD) =====
[2013/07/09 20:56:05 2115] 000 /dev/sdd,9VS3FFFW,8, 48, 1465138584 1465136504 1465136504 MD_FULL 0
[2013/07/09 20:56:05 2115] partitions: 3 property: 4K
[2013/07/09 20:56:05 2115] 000 /dev/sdd1, 8, 49, 4194304 MD_FULL md=/dev/md0
[2013/07/09 20:56:05 2115] 001 /dev/sdd2, 8, 50, 524288 MD_FULL md=/dev/md1
[2013/07/09 20:56:05 2115] 002 /dev/sdd3, 8, 51, 1460417912 MD_FULL md=/dev/md2
[2013/07/09 20:56:05 2115] 001 /dev/sdb,WD-WMC1T2385134,8, 16, 2930266584 2930264483 2930264483 MD_FULL 0
[2013/07/09 20:56:05 2115] partitions: 4 property: 4K
[2013/07/09 20:56:05 2115] 000 /dev/sdb1, 8, 17, 4194304 MD_FULL md=/dev/md0
[2013/07/09 20:56:05 2115] 001 /dev/sdb2, 8, 18, 524288 MD_FULL md=/dev/md1
[2013/07/09 20:56:05 2115] 002 /dev/sdb3, 8, 19, 1460417912 MD_FULL md=/dev/md2
[2013/07/09 20:56:05 2115] 003 /dev/sdb4, 8, 20, 1465127979 MD_FULL md=/dev/md3
[2013/07/09 20:56:05 2115] 002 /dev/sdc,WD-WCC1T0723872,8, 32, 2930266584 2930264483 2930264483 MD_FULL 0
[2013/07/09 20:56:05 2115] partitions: 4 property: 4K
[2013/07/09 20:56:05 2115] 000 /dev/sdc1, 8, 33, 4194304 MD_FULL md=/dev/md0
[2013/07/09 20:56:05 2115] 001 /dev/sdc2, 8, 34, 524288 MD_FULL md=/dev/md1
[2013/07/09 20:56:05 2115] 002 /dev/sdc3, 8, 35, 1460417912 MD_FULL md=/dev/md2
[2013/07/09 20:56:05 2115] 003 /dev/sdc4, 8, 36, 1465127979 MD_FULL md=/dev/md3
[2013/07/09 20:56:05 2115] 003 /dev/sda,WD-WMC1T0095260,8, 0, 2930266584 2930264483 2930264483 MD_FULL 0
[2013/07/09 20:56:05 2115] partitions: 4 property: 4K
[2013/07/09 20:56:05 2115] 000 /dev/sda1, 8, 1, 4194304 MD_FULL md=/dev/md0
[2013/07/09 20:56:05 2115] 001 /dev/sda2, 8, 2, 524288 MD_FULL md=/dev/md1
[2013/07/09 20:56:05 2115] 002 /dev/sda3, 8, 3, 1460417912 MD_FULL md=/dev/md2
[2013/07/09 20:56:05 2115] 003 /dev/sda4, 8, 4, 1465127979 MD_FULL md=/dev/md3
[2013/07/09 20:56:05 2115] 004 /dev/sde,MK0331YHGTL99A,8, 64, 2930266584 2930264483 2930264483 MD_FULL 0
[2013/07/09 20:56:05 2115] partitions: 4 property: 4K
[2013/07/09 20:56:05 2115] 000 /dev/sde1, 8, 65, 4194304 MD_FULL md=/dev/md0
[2013/07/09 20:56:05 2115] 001 /dev/sde2, 8, 66, 524288 MD_FULL md=/dev/md1
[2013/07/09 20:56:05 2115] 002 /dev/sde3, 8, 67, 1460417912 MD_FULL md=/dev/md2
[2013/07/09 20:56:05 2115] 003 /dev/sde4, 8, 68, 1465127979 MD_FULL md=/dev/md3
[2013/07/09 20:56:05 2115] 005 /dev/sdf,MK0331YHGXPSGA,8, 80, 2930266584 2930264483 2930264483 MD_FULL 0
[2013/07/09 20:56:05 2115] partitions: 4 property: 4K
[2013/07/09 20:56:05 2115] 000 /dev/sdf1, 8, 81, 4194304 MD_FULL md=/dev/md0
[2013/07/09 20:56:05 2115] 001 /dev/sdf2, 8, 82, 524288 MD_FULL md=/dev/md1
[2013/07/09 20:56:05 2115] 002 /dev/sdf3, 8, 83, 1460417912 MD_FULL md=/dev/md2
[2013/07/09 20:56:05 2115] 003 /dev/sdf4, 8, 84, 1465127979 MD_FULL md=/dev/md3
[2013/07/09 20:56:05 2115] Found leftover expansion actions DOING_F_RESIZE
, continue...
sh: -c: line 5: unexpected EOF while looking for matching `"'
sh: -c: line 7: syntax error: unexpected end of file
[2013/07/09 20:56:05 2115] LINE 448: exec command: echo "DOING_F_RESIZE" > /.os_V_E_continue
[2013/07/09 20:56:07 2115] LINE 4249: exec command: resize2fs -pf /dev/c/c
resize2fs 1.42.7 (21-Jan-2013)
resize2fs: Not enough reserved gdt blocks for resizing
Filesystem at /dev/c/c is mounted on /c; on-line resizing required
old_desc_blocks = 1395, new_desc_blocks = 1570
[2013/07/09 20:56:08 2115] Expand second phase error 10: resize2fs, err=0x100
sh: -c: line 5: unexpected EOF while looking for matching `"'
sh: -c: line 7: syntax error: unexpected end of file
[2013/07/09 20:56:08 2115] File system expand failed
[2013/07/09 20:56:08 2115] LINE 4852: exec command: /usr/sbin/expand_md -a super >> /var/log/frontview/expand_md.log 2>&1 &
[2013/07/09 20:56:08 2547] Boot, handle mutiple expand_md
[2013/07/09 20:56:08 2115] LINE 4855: exec command: /frontview/bin/volumescan &
[2013/07/09 20:56:08 2115] STAGE_WIPE: Clean my_pid 2115
[2013/07/09 20:56:09 2549] STAGE_CHECK: saved my_pid 2549 in check mode.
[2013/07/09 20:56:09 2549] RAID MODE: 1, sn=
[2013/07/09 20:56:09 2549] LINE 4591: exec command: lvs > /var/log/frontview/.V_E.snapshotstat
[2013/07/09 20:56:10 2549] Current file system status: ext4
[2013/07/09 20:56:10 2549] LINE 5229: exec command: rm -fr /var/log/frontview/.V_E.*
[2013/07/09 20:56:10 2549] X_level: 5
[2013/07/09 20:56:10 2549] /usr/sbin/expand_md -a super
[2013/07/09 20:56:10 2549] MD degraded 0, bg_job 0, boot 1
[2013/07/09 20:56:10 2549] ++++Write 1957072 byte to configuration++++
[2013/07/09 20:56:10 2549] /var/log/frontview/.known_cfgdir/9VS3FFFW
[2013/07/09 20:56:10 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:10 2549] get disk /dev/sdd format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:10 2549] LINE 7839: exec command: sgdisk -p /dev/sdd | grep '[0-9] ' > /var/log/frontview/.known_cfgdir/9VS3FFFW
[2013/07/09 20:56:11 2549] /var/log/frontview/.known_cfgdir/WD-WMC1T2385134
[2013/07/09 20:56:11 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:11 2549] get disk /dev/sdb format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:11 2549] LINE 7839: exec command: sgdisk -p /dev/sdb | grep '[0-9] ' > /var/log/frontview/.known_cfgdir/WD-WMC1T2385134
[2013/07/09 20:56:11 2549] /var/log/frontview/.known_cfgdir/WD-WCC1T0723872
[2013/07/09 20:56:11 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:11 2549] get disk /dev/sdc format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:11 2549] LINE 7839: exec command: sgdisk -p /dev/sdc | grep '[0-9] ' > /var/log/frontview/.known_cfgdir/WD-WCC1T0723872
[2013/07/09 20:56:12 2549] /var/log/frontview/.known_cfgdir/WD-WMC1T0095260
[2013/07/09 20:56:12 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:12 2549] get disk /dev/sda format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:12 2549] LINE 7839: exec command: sgdisk -p /dev/sda | grep '[0-9] ' > /var/log/frontview/.known_cfgdir/WD-WMC1T0095260
[2013/07/09 20:56:12 2549] /var/log/frontview/.known_cfgdir/MK0331YHGTL99A
[2013/07/09 20:56:12 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:12 2549] get disk /dev/sde format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:12 2549] LINE 7839: exec command: sgdisk -p /dev/sde | grep '[0-9] ' > /var/log/frontview/.known_cfgdir/MK0331YHGTL99A
[2013/07/09 20:56:13 2549] /var/log/frontview/.known_cfgdir/MK0331YHGXPSGA
[2013/07/09 20:56:13 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:13 2549] get disk /dev/sdf format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:13 2549] LINE 7839: exec command: sgdisk -p /dev/sdf | grep '[0-9] ' > /var/log/frontview/.known_cfgdir/MK0331YHGXPSGA
[2013/07/09 20:56:13 2549] LINE 4939: exec command: ps -ef | grep -v expand_md > /var/log/frontview/.V_E.snapshotstat
[2013/07/09 20:56:14 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:14 2549] get disk /dev/sdd format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:14 2549] /dev/sdd used/total/partitioned = 1465136504/1465138584/1465136504, 60000000
[2013/07/09 20:56:14 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:14 2549] get disk /dev/sdb format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:14 2549] /dev/sdb used/total/partitioned = 2930264483/2930266584/2930264483, 60000000
[2013/07/09 20:56:14 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:14 2549] get disk /dev/sdc format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:14 2549] /dev/sdc used/total/partitioned = 2930264483/2930266584/2930264483, 60000000
[2013/07/09 20:56:14 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:14 2549] get disk /dev/sda format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:14 2549] /dev/sda used/total/partitioned = 2930264483/2930266584/2930264483, 60000000
[2013/07/09 20:56:14 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:14 2549] get disk /dev/sde format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:14 2549] /dev/sde used/total/partitioned = 2930264483/2930266584/2930264483, 60000000
[2013/07/09 20:56:14 2549] gpt sig:0,mbr sig:0,fake type:0
[2013/07/09 20:56:14 2549] get disk /dev/sdf format is (GPT=2,MBR=1,MX=3,MISC=-1): 2
[2013/07/09 20:56:14 2549] /dev/sdf used/total/partitioned = 2930264483/2930266584/2930264483, 60000000
[2013/07/09 20:56:14 2549] Not enough disk for a new array, drives with free space: 0, X_level:5
[2013/07/09 20:56:14 2549] STAGE_WIPE: Clean my_pid 2549
Frontview is also showing something weird: the capacity says "9000 GB (81%) of 10 TB used" on the volume settings section. 9000 GB is 90% of 10 TB, not 81%.
Any suggestions of where to start?
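For what it's worth, the decisive line in that log is `resize2fs: Not enough reserved gdt blocks for resizing`. An ext filesystem can only be grown online up to a ceiling fixed by the reserved GDT blocks set aside when it was created; once those run out, further online expansion fails exactly like this. The count is visible in the superblock via `dumpe2fs -h` (a real e2fsprogs tool); the parser below is only an illustrative sketch around its output:

```python
import re

def reserved_gdt_blocks(dumpe2fs_header):
    """Extract the 'Reserved GDT blocks' count from `dumpe2fs -h` output."""
    match = re.search(r"Reserved GDT blocks:\s+(\d+)", dumpe2fs_header)
    return int(match.group(1)) if match else 0

# On the NAS you would feed it real output, e.g. from: dumpe2fs -h /dev/c/c
sample = "Block count:              366104478\nReserved GDT blocks:      1024\n"
print(reserved_gdt_blocks(sample))  # 1024
```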
Message 10 of 10