GlusterFS on the ReadyNAS HOWTO
2014-02-15 12:38 PM
The following document describes how to install Gluster on a Netgear ReadyNAS RN314. It mostly applies to the RN104 as well, though there may be some differences. In my case, I have an RN314 as a master and an RN104 as a slave. Both have four 2TB HDDs, running in the Gluster equivalent of a JBOD array.
In order to do backups, I decided to RSYNC from the RN314 to the RN104 periodically. This suits my data storage requirements. Periodically means probably once a day. I have not yet managed to get Geo Replication working, so RSYNC is how I am doing things. I intend to improve my RSYNC script so that it wakes the RN104 via Wake-on-LAN, copies any files, and then shuts it down.
This document assumes a few things:
- The Gluster instance is called homevol
- 2TB HDDs
- Gluster is being run without redundancy
If these assumptions are not right, take even more care!
The first step is to install a HDD in the RN. Do this in accordance with the manuals. This is important since the operating system is actually installed onto the HDD.
Once this is complete, log into the RN and perform the following steps:
- Under Overview|Device, set the system time
- Under System|Settings, enable SSH
- Under System|Settings, configure Alerts
- Configure your network. You may wish to set an MTU of 9000 for improved performance
- Change the Admin password
From now on, things are going to get strange! The next step is to SSH into the machine. The account you need to SSH into is actually root, and uses the Admin password:
ssh root@<IP-ADDRESS>
We need to install some software now
apt-get install joe # My preferred editor
apt-get install parted # fdisk replacement
apt-get install gluster??? # Gluster itself
Create some mount points for the HDDs. These are my preferred locations
mkdir -p /mnt/disk001
mkdir -p /mnt/disk002
mkdir -p /mnt/disk003
mkdir -p /mnt/disk004
Also do a df to see what is mounted. You should see several instances of md127 mounted.
df -h
Then backup the md127 raid array
tar -cf /data.tar /data
Now, we need to unmount md127.
umount /dev/md127
umount /dev/md127
umount /dev/md127
It is needed three times since it is mounted in three locations. You may need to force the unmount, in the form of
umount -f /dev/md127
You can then check that it is unmounted
df -h
Now we need to run the parted disk partition software. I must admit that I am not too familiar with this tool
parted /dev/sda
Once we are in parted, we need to do the following. The numbers come from my 2TB drives, so check your own layout with the print command first (this is my best reconstruction of the commands, so verify them before running):
rm 3 # remove partition 3
mkpart primary 4833MB 9999MB # recreate partition 3, for md127
set 3 raid on # indicate that this is a RAID member
mkpart primary ext4 10000MB 2000GB # partition 4: our master data
Format the storage partition
mkfs.ext4 /dev/sda4
Now, we need to create the md127 partition again
mdadm --create /dev/md127 /dev/sda3 --level=mirror --raid-devices=1 --force
And format it
mkfs.ext4 /dev/md127
Now, we need to tell the RN what to do with this. So the following lines need to be added to /etc/fstab
/dev/md127 /data ext4 defaults 0 0
/data/.apps /apps ext4 bind 0 0
/data/home /home ext4 bind 0 0
Now mount them
mount -a
And check to see if they have been mounted
df -h
Now, we need to recreate the /data partition, and get Apps working again
cd /
tar -xf /data.tar
It might be a good idea to reboot at this point, just so that Apache does not get itself confused.
Once this is done, add the following line to /etc/fstab
/dev/sda4 /mnt/disk001 ext4 defaults 0 0
Of course, change the mount point to whatever you think is logical.
This can then be mounted with the following command:
mount -a
It is now time to get Gluster up and running.
gluster volume create homevol 192.168.1.123:/mnt/disk001
The argument is the IP of the machine followed by the mount point of the disk. We then need to start Gluster
gluster volume start homevol
We can find the status with the command
gluster volume info
We need to now mount this file system. We can do it by hand, or via fstab. Firstly, by hand
mount.glusterfs 192.168.1.123:/homevol /mnt/homevol
Note that the /homevol next to the IP is the name of the array, and not an actual filesystem location.
We can also mount it by adding it to the /etc/fstab
192.168.1.42:/homevol /mnt/homevol glusterfs defaults,_netdev 0 2
I am not 100% sure of this, but it works. This can be mounted with the following command
mount -a
This creates an array with only a single disk.
You will probably also want to ensure that rpc.statd is started, and a few other things.
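A quick way to check this from the shell. The init script names below are assumptions from stock Debian, so verify them on your unit with ls /etc/init.d/ first:

```shell
# Check whether rpc.statd is running; if not, start the RPC services.
# Script names are from stock Debian -- verify on your ReadyNAS.
if ! pgrep -x rpc.statd > /dev/null; then
    /etc/init.d/rpcbind start
    /etc/init.d/nfs-common start   # on Debian this starts rpc.statd
fi
```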
The following is black magic.
Adding another disk
This next part works, but is horrible! Try it, but do not blame me if it does not work!
Insert the drive into the machine. You can find the name by typing the following command
dmesg
Once you know the drive name, type the following. NOTE: This assumes that the existing drive is /dev/sda and the new one is /dev/sdb. Do this wrong, and you will need to read up on how to initialize your drive array from scratch!
dd if=/dev/sda of=/dev/sdb bs=1M count=11000
This basically copies the first 11 GB from the first to the second HDD. This includes the partition table, and the RAID file system for apps and operating system.
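Before going further, it is worth checking that the clone actually took. A sketch using standard parted commands, with the same device names as above:

```shell
# The two partition tables should now be identical
parted /dev/sda print
parted /dev/sdb print
# Make the kernel re-read the new partition table on sdb
partprobe /dev/sdb
```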
Then format the big partition
mkfs.ext4 /dev/sdb4
Once this is complete, add an entry to /etc/fstab
/dev/sdb4 /mnt/disk002 ext4 defaults 0 0
Then remove the drive, wait a few seconds, and reinsert it. Then issue the following command:
mount -a
Then we need to fix the RAID. CHECK THESE COMMANDS. In my case, I have set the md127 array up with two drives in the array, and two spares, with the two active drives being mirrored.
mdadm --add /dev/md127 /dev/sdb3
mdadm --grow /dev/md127 --raid-devices=2
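You can watch the mirror rebuild to make sure the new drive really joined the array:

```shell
cat /proc/mdstat            # shows resync progress as a percentage
mdadm --detail /dev/md127   # lists which devices are active or spare
```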
I am not sure if the ReadyNAS wants you to do a similar thing for /dev/md0.
You can then add the new disk to Gluster
gluster volume add-brick homevol 192.168.1.123:/mnt/disk002
You can see that this has taken with the following commands
gluster volume info
df -h
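One extra step that I believe is needed (check the Gluster documentation before relying on this): existing files stay on the old brick until you rebalance the volume across the new disk:

```shell
gluster volume rebalance homevol start
gluster volume rebalance homevol status   # check progress
```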
/etc/fstab
My ultimate /etc/fstab on my RN314 is
/dev/md127 /data ext4 defaults 0 0
/data/.apps /apps ext4 bind 0 0
/data/home /home ext4 bind 0 0
/dev/sda4 /mnt/disk006 ext4 defaults 0 0
/dev/sdb4 /mnt/disk010 ext4 defaults 0 0
/dev/sdc4 /mnt/disk017 ext4 defaults 0 0
/dev/sdd4 /mnt/disk018 ext4 defaults 0 0
192.168.1.42:/homevol /mnt/homevol glusterfs defaults,_netdev 0 2
Exports...
There are four options for getting the file system off the RN. They are
- GlusterFS
- NFS
- SMB
- AFP
GlusterFS works really well, and is independent of the RN software. Many operating systems have clients which work well.
NFS works when you have decent computing power available, but I found the performance on the RN314 to be occasionally problematic on my Apple Mac. This is also independent of the RN portal, so make sure you DO NOT turn NFS on in the portal. You also need to make sure that rpc.statd is running, and possibly portmap.
The GlusterFS options I am using are below. You will need to read up on Gluster to see how to add them.
nfs.ports-insecure: on
nfs.rpc-auth-allow: 192.168.1.92,192.168.1.*
cluster.min-free-disk: 1
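For reference, options like these are applied one at a time with gluster volume set, along these lines:

```shell
gluster volume set homevol nfs.ports-insecure on
gluster volume set homevol nfs.rpc-auth-allow 192.168.1.92,192.168.1.*
gluster volume set homevol cluster.min-free-disk 1
```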
Note that this exports subdirectories of the root automatically, so a command such as the following could work on the Mac. I am not sure that this is the best command line, but it seems to mostly work.
mkdir /Volumes/Media; sudo mount -t nfs -vv -o intr,tcp,nolock,vers=3 192.168.1.123:homevol/media/ /Volumes/Media/
You can also export the directory using SMB. This needs to be done by hand. EXPAND
What I am doing on the Mac is actually to use AFP, and export the share that way. This seems to be the most reliable for me. I am still working out file permissions and the like.
Mounting my master Gluster on my slave Gluster
To clone the HDD from the RN314 to the RN104, I am mounting both Gluster File Systems on the backup RN, and then using RSYNC.
I have mounted the master as Read Only on the Backup. The following two entries are from the /etc/fstab
192.168.1.124:/homevol /mnt/homevol glusterfs defaults 0 2
192.168.1.123:/homevol /mnt/newhomevol glusterfs ro 0 2
Then the command I am using to back up is
rsync -aPvhpu --modify-window 1 /mnt/newhomevol /mnt/homevol
Later, when I am sure everything is working, I will add a 'delete' option, but I might have this as being initiated by me.
Copying off like this, I am generally getting about 7.5 MBytes/sec. Copying onto the RN314, I am normally getting about 30-35 MBytes/sec, even if I am copying off the RN314 at the same time.
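For what it is worth, the wake-copy-shutdown script I mentioned earlier might look something like this. It is only a sketch: the MAC address and IP are placeholders, wakeonlan is an extra package to install, and I have not tested the timings.

```shell
#!/bin/sh
# Runs on the RN314. Replace the MAC and IP with your RN104's values.
SLAVE_MAC="00:11:22:33:44:55"   # placeholder
SLAVE_IP="192.168.1.124"        # placeholder

wakeonlan "$SLAVE_MAC"          # from the Debian wakeonlan package
sleep 120                       # give the RN104 time to boot and mount Gluster

# Run the backup on the slave, where both volumes are mounted
ssh root@"$SLAVE_IP" "rsync -aPvhpu --modify-window 1 /mnt/newhomevol /mnt/homevol"

# Shut the slave down again
ssh root@"$SLAVE_IP" poweroff
```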
Message 1 of 2
2014-05-06 01:33 AM
Re: GlusterFS on the ReadyNAS HOWTO
For various reasons I needed to upgrade my ReadyNAS machine. One of the reasons was upgrading the operating system to the latest version. To do this, I removed all my drives, put a blank one in, and installed the OS onto the new disk. Then I grabbed a copy of the /data directory, cloned /dev/sda1 using the dd command, and used it to overwrite the root partition on one of my Gluster HDDs when I inserted it (dd if=/dev/sda1 of=/dev/sdb1). Then I removed the new HDD and booted on just the old drive with the new OS.
This led me to a couple of changes that needed to be made:
Edit the /etc/init.d/glusterfs-server file so that the last line of do_start() is
/etc/init.d/rpcbind start
Also, edit /etc/init.d/mountall.sh so that glusterfs is added to the list of file types with gfs2, ceph and others
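I have not recorded the exact line, but the change amounts to extending the list of network filesystem types. Something along these lines should work, though the pattern differs between versions, so inspect your mountall.sh before editing:

```shell
# Back up first, then add glusterfs next to gfs2 in the filesystem list
cp /etc/init.d/mountall.sh /etc/init.d/mountall.sh.bak
sed -i 's/\bgfs2\b/gfs2|glusterfs/' /etc/init.d/mountall.sh
```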
I actually had these on my system before, but unfortunately I neglected to document them.
Darryl
Message 2 of 2