AMRivlin
Mar 20, 2013Apprentice
OS6 now works on x86 Legacy WARNING: NO NTGR SUPPORT!
Update: It is now unofficially possible, using NTGR images, to update legacy hardware to OS 6.x.
See Post #3 for directions to install 6.2.1 on x86 Ultra and Pro models. (ARM is NOT SUPPORTED by this OS.)
Be forewarned: this requires a SYSTEM WIPE and likely voids any warranty support from NTGR.
Supported so far: pro 2/4/6, ultra 2/4/6, old pro / Pioneer Pro, 2100v2
Not Supported: NVX and 2100v1
Thanks go out to "HomeBrew Anonymous" for making this possible.
Update 2: A firmware image to downgrade back to 4.2.26 is now available. See this thread. While this downgrade should get you a working system again on the supported firmware, be forewarned that it also requires a SYSTEM WIPE, and Netgear does not provide support for this downgrade either. If you have issues, seek help on these forums.
Original Post/Gripes
I have been reading these forums since Monday's announcement and there has been a resounding "ooof" regarding the fact the Ultras and Pros are unsupported for future OS improvements.
To clear the air: it would appear Netgear will never support os6 on past hardware. I have almost come to grips with this, and at least they have been open and honest with their forward direction and aren't stringing us along. viewtopic.php?f=138&t=70131
The upside is that our devices still work and are mostly stable, and eventually we can upgrade to a new shell that has OS6 support; in the meantime, though, our $500-1000 investment can't take advantage of the modern features we all desire.
I don't think I can add a poll here at RN forums, but I would like to garner support for a 100% unsupported home brew of the os6 on Pro6 units.
If we get enough support perhaps a talented member(s) here would help release a homebrew of sorts.
The 3 main caveats are:
1. Netgear will never be held responsible/your warranty is void
2. A format is required (new FS and OS)
3. Data loss is highly possible
If you are still interested please post a reply to this thread.
mdgm and I have decided that it's time to lock this thread. So please post any new OS6-on-legacy issues in their own threads.
1,274 Replies
Replies have been turned off for this discussion
- mangrove (Apprentice): xsnrg, thanks, you seem to have nailed it down -- this also fits perfectly with other problems we have seen.
There is only one solution, and Netgear should make it a priority to implement it: give users a choice of file systems for new volumes.
Edit: have put this as a feature request: viewtopic.php?f=18&t=73845
- MueR (Aspirant): A few other performance tweaks I made to the FS.
In /etc/sysctl.conf, I added the following:
vm.dirty_ratio = 3
vm.dirty_background_ratio = 2
vm.swappiness = 1
This reduces the amount of data that can accumulate before the FS starts committing it to disk, reducing load spikes.
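As a sketch of how to try these values safely, first read the current settings (the /proc/sys/vm paths are standard Linux; the specific numbers are MueR's tweaks, not an official recommendation):

```shell
# Show the current VM writeback/swap settings before changing anything (read-only).
for key in dirty_ratio dirty_background_ratio swappiness; do
  printf 'vm.%s = %s\n' "$key" "$(cat /proc/sys/vm/$key)"
done

# To try the tweaked values temporarily (reverts on reboot), run as root:
#   sysctl -w vm.dirty_ratio=3 vm.dirty_background_ratio=2 vm.swappiness=1
# To persist them, add the lines to /etc/sysctl.conf and run: sysctl -p
```

Trying values with `sysctl -w` before editing /etc/sysctl.conf means a bad setting disappears on reboot instead of surviving it.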
WARNING:
Messing with these types of settings could seriously damage your system.
- xsnrg (Aspirant): A few comments. First, it would be nice to be able to select the filesystem, but even with btrfs there are other options that could be explored. There is a defrag mount option, which is probably not ideal for a NAS, but there is also the option, when a new subvolume (a share in ReadyNAS speak) is created, of setting the +C attribute on it. This disables CoW operations and would change some important things about how that subvolume operates. Netgear would have to call out what the difference is and how it affects things like snapshots, but it is about the only option I see for using btrfs for VM images. The attribute change/option would allow the rest of the NAS, if desired, to still use all the functions of btrfs.
Netgear should NOT default a new share to have snapshots on. This is a bad idea.
Finally, messing with the vm areas of the kernel should only be done if you are absolutely sure you know what you are doing. This is not to say you don't; it's just a warning for others reading this. An example: vm.swappiness controls how readily the system swaps memory to disk relative to how much memory is in use -- effectively the starting point for swapping. My NAS has it set to 0, which I think is a good setting. This causes the machine to always prefer memory over disk swap until memory is completely exhausted. Setting it higher may give you more disk cache, but as soon as you start swapping things out of system memory, you pay the price when they are needed again. In an OS made to be as light as possible, which I hope is the case, most of the things in memory are important, even if only periodically, to the operation of the system.
- MueR (Aspirant): You're right, I've added a warning. The default for swappiness is 50, by the way, so it was a significant improvement.
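For anyone wanting to experiment with the +C idea above, a minimal sketch (the /tmp/vmimages path is just an example; on a ReadyNAS the share would live under /data, and +C needs a CoW filesystem such as btrfs):

```shell
# Sketch: disable CoW for a directory meant to hold VM images.
# NOTE: +C only affects files created AFTER the attribute is set on the
# directory; existing files keep their old behavior.
mkdir -p /tmp/vmimages
chattr +C /tmp/vmimages 2>/dev/null || echo "filesystem does not support +C"
touch /tmp/vmimages/disk0.img
lsattr /tmp/vmimages/disk0.img   # on btrfs, a 'C' appears in the attribute field
```

Setting the attribute on the directory rather than per file means every image created in it inherits No_COW automatically.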
- xsnrg (Aspirant):
MueR wrote: You're right, I've added a warning. The default for swappiness is 50 by the way, so it was a significant improvement.
Running 6.1.4? Maybe they are setting it dynamically based on the amount of memory. That would make sense. My 4G machine is set to 0. I have not made any changes to its config that I cannot make through Frontview, for warranty reasons.
- mangrove (Apprentice):
xsnrg wrote: A few comments. First, it would be nice to be able to select the filesystem, but even with btrfs there are other options that could be explored. There is a defrag mount option, which is probably not ideal for a NAS, but there is also the option, when a new subvolume (a share in ReadyNAS speak) is created, of setting the +C attribute on it. This disables CoW operations and would change some important things about how that subvolume operates.
The problem is that I'm seeing the same behaviors on my iSCSI containers -- and those files have +C set. Or is CoW still operating on a volume level even if the file has CoW disabled? In that case, a bug in BTRFS?
- xsnrg (Aspirant): You have verified the containers are +C? Then you still have the problem of the filesystem mount not being noatime, I would guess. What do the extents look like on one of the containers in use?
In my case, for the VM image it has nothing set:
# lsattr dwin5.img
---------------- dwin5.img
so I still have issues, given the file is changing all the time. The filesystem is also mounted with relatime, not noatime.
Another interesting thing: the whole filesystem is mounted under /data (the /c of yore) with its own mount options. Then sub-mounts of the same metadevice are created for the shares, with their own mount options.
/dev/md127 on /data type btrfs (rw,relatime,space_cache)
/dev/md127 on /run/nfs4/data/vm2 type btrfs (rw,relatime,space_cache)
Given that the options go with the mount point, changing the latter mount for the share should be doable without affecting the rest of the system. I don't use iSCSI though. How does yours differ?
- mangrove (Apprentice): Hm, I can't check right now; I went back to OS4 to verify that performance can be good, and I'm thinking of trying my hand at a full Debian install instead if things with OS6 don't improve very fast. But yeah, I verified +C on the iSCSI container file.
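The per-mount-point options discussed above can be inspected on any Linux system; a quick sketch (the remount command is illustrative and reuses the share path quoted earlier):

```shell
# List each mount point together with its atime-related option, if any.
# Field 2 of /proc/mounts is the mount point, field 4 the option string.
awk '{
  n = split($4, opts, ",")
  for (i = 1; i <= n; i++)
    if (opts[i] ~ /atime/) print $2, opts[i]
}' /proc/mounts

# To switch just one share's mount to noatime without touching the rest
# (illustrative; run as root):
#   mount -o remount,noatime /run/nfs4/data/vm2
```

Because options belong to the mount point, a remount of the share's sub-mount leaves the /data mount and every other share unchanged.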
Also I'm looking for an easily understandable description of how BTRFS works while writing data. I'm wondering if writing many small blocks means multiple read/write operations across the array, triggering multiple seek/rotational latencies. If that is the case, there should be a huge difference between RAID1 and RAID5. But that would be a more general problem with MDRAID and BTRFS in many scenarios...
- StephenB (Guru - Experienced User): I'm not sure if this is really a question about BTRFS or RAID.
If you are updating a sector on a RAID-1 array, the general case is that you need to read the sector, update the part that's changing, and then write it - you need to do that on both drives.
Of course the read/write operations are cached - but if the I/O is random, that will not help.
If you are updating a sector on a RAID-5 array, the I/O is the same in the general case. The data sector is read/updated/rewritten. The parity sector (on a different drive) is read, and updated, and written. The parity sector is updated by XORing with the original data, then XORing again with the changed data.
So in the general case of making a small update (less than one sector), the I/O for RAID-1 and RAID-5 are the same.
But there is a special case: when you are writing the entire sector at the application layer.
For that, RAID-1 simply writes the data to both drives - no reads are required.
RAID-5 still requires the general case - the data sector still needs to be read, because the parity sector needs to be XORed with it. Likewise, the parity sector needs to be read.
The impact on speed depends on the performance of the disk caching.
- mangrove (Apprentice): Certainly, but for spinning disks there are seek and rotational latency penalties. Smart controllers tend to hide those, and in days of yore there was spindle sync to minimize the impact, but it can't be completely hidden. I guess my real question is whether BTRFS does several reads/writes, for example for metadata updates.
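The read-modify-write parity update StephenB describes can be sketched with single bytes standing in for sectors (the values are arbitrary examples):

```shell
# RAID-5 small-write parity update, one byte standing in for a sector.
old_data=$((0xA5)); new_data=$((0x3C)); other_data=$((0x0F))

# Parity as it sits on disk before the write (XOR of all data "sectors").
old_parity=$((old_data ^ other_data))

# Read-modify-write: XOR the old data out of the parity, XOR the new data in.
new_parity=$((old_parity ^ old_data ^ new_data))

# Sanity check: recomputing parity from scratch must give the same byte.
printf 'updated=0x%02X from-scratch=0x%02X\n' "$new_parity" "$((new_data ^ other_data))"
# prints: updated=0x33 from-scratch=0x33
```

Both values come out identical, which is why the array only has to read the old data and old parity rather than every drive in the stripe.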