
Forum Discussion

midtskogen
Aspirant
Sep 27, 2012

Web interface doesn't see second drive, Linux does

I just bought a Duo V2 with two WD Green 3 TB drives. However, when I boot up, the web interface claims that there is only one disk, 2.7 TB. I'm running RAIDiator 5.3.6.

To investigate, I enabled ssh access and installed parted. Linux does recognise both disks, and the root filesystem runs RAID1 across both of them. Any ideas? What can I try next? Below is the output of parted, df and mdstat.

(parted) print all
Model: WDC WD30EZRX-00DC0B0 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 32.8kB 4295MB 4295MB raid
2 4295MB 4832MB 537MB raid
3 4832MB 3001GB 2996GB raid


Model: WDC WD30EZRX-00DC0B0 (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 32.8kB 4295MB 4295MB raid
2 4295MB 4832MB 537MB raid
3 4832MB 3001GB 2996GB raid


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/c-c: 2985GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number Start End Size File system Flags
1 0.00B 2985GB 2985GB ext4


Error: /dev/mapper/d-d: unrecognised disk label

Error: /dev/mtdblock0: unrecognised disk label

Error: /dev/mtdblock1: unrecognised disk label

Error: /dev/mtdblock2: unrecognised disk label

Error: /dev/mtdblock3: unrecognised disk label

Error: /dev/mtdblock4: unrecognised disk label

Model: Linux Software RAID Array (md)
Disk /dev/md0: 4294MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number Start End Size File system Flags
1 0.00B 4294MB 4294MB ext3


Model: Linux Software RAID Array (md)
Disk /dev/md1: 537MB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number Start End Size File system Flags
1 0.00B 537MB 537MB linux-swap(v1)


Error: /dev/md2: unrecognised disk label

Error: /dev/md3: unrecognised disk label

# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/md0 4184756 633840 3341256 16% /
tmpfs 16 0 16 0% /USB
/dev/c/c 2903461080 206516 2903254564 1% /c

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid0 sdb3[0]
2925544880 blocks super 1.2 16k chunks

md2 : active raid0 sda3[0]
2925544832 blocks super 1.2 64k chunks

md1 : active raid1 sda2[0] sdb2[2]
524276 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[2]
4193268 blocks super 1.2 [2/2] [UU]

unused devices: <none>
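The clue is in the mdstat output itself: md2 and md3 are each single-disk RAID0 arrays rather than one array spanning both drives, so the data volume can only ever use one disk. As a hedged illustration (not part of the original thread, and assuming the simple "mdX : active LEVEL dev[n] ..." line format shown above), a minimal Python sketch that flags arrays built from a single member device:

```python
# Minimal sketch: flag md arrays that list only one member device.
# MDSTAT below reproduces the array lines from the /proc/mdstat dump above.
import re

MDSTAT = """\
md3 : active raid0 sdb3[0]
md2 : active raid0 sda3[0]
md1 : active raid1 sda2[0] sdb2[2]
md0 : active raid1 sda1[0] sdb1[2]
"""

def single_member_arrays(text):
    """Return {array: (level, device)} for arrays with exactly one member."""
    singles = {}
    for line in text.splitlines():
        m = re.match(r'(md\d+) : active (\S+) (.+)', line)
        if not m:
            continue
        name, level, members = m.groups()
        devs = re.findall(r'(\w+)\[\d+\]', members)
        if len(devs) == 1:
            singles[name] = (level, devs[0])
    return singles

print(single_member_arrays(MDSTAT))
# md2 and md3 each contain a single partition (sda3 and sdb3 respectively),
# so the data volume cannot span both drives.
```

A healthy two-disk configuration would instead show both sdX3 partitions inside one array, as md0 and md1 do.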

9 Replies

  • Midtskogen - When you refer to the web interface, are you talking about RAIDar, RAIDiator or the volume seen by your PC?
  • I'm referring to what I get when I point my browser at the IP address of the NAS. Anyway, I tried the factory reset and that worked. I think the reason I got into trouble is that I bought a used demo unit. When I ran RAIDar for the first time, I was never asked what kind of RAID to use; it was probably already set up as single-disk RAID0. I still had to do the first-time setup in the web interface, though, which made me believe it had already been fully factory reset.

    Thanks for the help!
  • Those are brand new disks, and the LCC readings are around 6400 so far. I don't know what their rating is.
  • My two WD Green Caviar 2TB disks passed the 300,000 LCC (head parking) mark in less than a year.
    Some say the rated load/unload cycle count is around 300,000 for these disks... :(

    Green Caviar disks may not be ideal for 24x365 NAS use, since they park their heads after just 8 seconds of idle.

    I recently got both disks replaced under warranty, and since setting the IDLE3 timer to 254 (>1 hr) on the replacement units, the LCC counters no longer climb by thousands every day. I've configured my NAS to spin down the disks after 1 hour of idling.

    If I were you, I'd definitely do something to prevent the disks from hitting their rated cycle count prematurely.


    /f
  • Thanks for sharing this. The power-on hours were just 44, and I hadn't yet written anything useful to the disks apart from formatting and letting the ReadyNAS set up its things, yet both disks were already at 2% of their expected lifetime, if you're right. I downloaded the idle3ctl tool from SourceForge; the setting was 80, which I changed to 254. But what will happen if I disable the idle3 timer?
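    For reference, the idle3ctl invocations being discussed look roughly like this (command names as documented by the idle3-tools project on SourceForge; /dev/sda is an example device path, and drives typically need a power cycle before a changed value takes effect):

    ```shell
    # Read the current idle3 (head-parking) timer value of a drive.
    idle3ctl -g /dev/sda

    # Set the timer to 254, the longest non-disabled interval (> 1 hour).
    idle3ctl -s 254 /dev/sda

    # Or disable the idle3 timer entirely (reported by some users to cause
    # odd behaviour on certain firmware, hence the 254 compromise).
    idle3ctl -d /dev/sda
    ```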

    Apart from the fact that the WD Green drives are cheap (I prefer cheap disks, since expensive ones also fail randomly), these drives seemed to fit my intended use, since I expect the disks to be idle much of the time. But I may have been fooled.
  • My disks were (past tense) also at 80 = 8 seconds before I set the value to 254.

    And yeah, before learning about the WD Green's parking behavior, all sounded nice: low amps, low temp, etc.

    The WD Red (http://www.wd.com/Red) were not available when I bought my disks a year ago, and the store where I just claimed warranty replacement didn't have them in their stock.

    I was close to cutting a deal with a colleague who was also buying 2 TB disks (but not WD) to swap one drive with each other (I'd get one of his non-WDs and he'd get one of my WDs) to mitigate a possible "bad stock" problem. But I chose not to follow through.

    It's hard to tell at what LCC a disk will fail, but I'd certainly prefer a low value over a high one (as long as keeping the heads loaded doesn't hurt some other wear factor), and I can afford the extra milliamps spent by not parking the heads (and not powering down the associated circuits) before the disks spin down an hour later.

    I spent some days looking around for a good delay setting to use with the idle3 tool, and since some people wrote (here and there) that disabling the timer (setting the value to 0) may have caused bad behaviour or problems, I decided to go for 254, which I learnt from a few others. Since 254 corresponds to a period longer than one hour, my NAS's idle-disk spindown timeout will fire first, parking the heads and winding down the disks instead of just parking the heads.
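    The arithmetic behind those raw values (using the encoding documented by the idle3-tools project, which is worth verifying against your own drive's firmware: values 1-128 count in 0.1 s steps, values 129-255 in 30 s steps) can be sketched in a few lines of Python:

    ```python
    def idle3_seconds(value):
        """Convert a raw WD idle3 timer value to seconds.

        Per idle3-tools: 1-128 count in 0.1 s steps, 129-255 in 30 s steps,
        and 0 disables the timer entirely.
        """
        if value == 0:
            return None  # timer disabled
        if value <= 128:
            return value * 0.1
        return (value - 128) * 30

    print(idle3_seconds(80))   # the factory default seen here: 8.0 seconds
    print(idle3_seconds(254))  # 3780 seconds, i.e. 63 minutes
    ```

    This is why 80 means an 8-second parking timeout and 254 comfortably exceeds a one-hour NAS spindown timer.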

    It's hard to tell what the effects of totally disabling parking would be - perhaps a few extra amp-hours over the lifetime of the disks?
    Some people get annoyed by the clicking sound you can hear when the heads move to the parking ramp; that sound would probably also be gone if parking were disabled.

    /f
