success in porting s3fs (Amazon S3 filesystem)
2009-06-12 06:29 PM
I looked around the forums but didn't see anyone else having done so, so I spent a little time porting s3fs, the Google Code-hosted FUSE filesystem for Amazon's S3 storage service, to my ReadyNAS Pro. Things seem to work, and I wanted to share the results with the community.
Let me know if it works out.
Instructions (the patchfile is at the end):
1. Get the necessary packages to build
apt-get update
apt-get install build-essential
apt-get install libfuse-dev
apt-get install libssl-dev
apt-get install libcurl3-openssl-dev
apt-get install libxml2-dev
2. Download and unpack the s3fs source
% wget http://s3fs.googlecode.com/files/s3fs-r ... rce.tar.gz
% tar xzvf s3fs-r177-source.tar.gz
% cd s3fs
3. Edit the source 's3fs.cpp' using the patchfile enclosed with this set of
instructions
% patch s3fs.cpp <patchfile
4. Make the executable
% make
This should give you three compiler warnings (that you can ignore):
s3fs.cpp:440: warning: 'size_t readCallback(void*, size_t, size_t, void*)' defined but not used
s3fs.cpp:1496: warning: 'void* s3fs_init(fuse_conn_info*)' defined but not used
s3fs.cpp:1523: warning: 'int s3fs_utimens(const char*, const timespec*)' defined but not used
5. Get the fuse runtime package:
% apt-get install fuse-utils
% apt-get install libfuse2
% apt-get install libcurl3-gnutls
6. Mount your file system:
% mkdir /s3
% ./s3fs <bucket> -o accessKeyId=<accessKeyId> -o secretAccessKey=<secretAccessKey> /s3
(replace '<bucket>', '<accessKeyId>' and '<secretAccessKey>' with your values)
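To avoid retyping credentials, the mount invocation in step 6 can be wrapped in a small helper. This is just a sketch under the option names shown in the post (accessKeyId/secretAccessKey as this s3fs revision expects them); the bucket, keys, and mountpoint below are placeholder values, not real ones.

```shell
# Hypothetical helper: assemble the s3fs mount command from arguments.
# It only prints the command so you can inspect it before running it as root.
build_s3fs_cmd() {
  local bucket="$1" key="$2" secret="$3" mountpoint="$4"
  printf './s3fs %s -o accessKeyId=%s -o secretAccessKey=%s %s\n' \
    "$bucket" "$key" "$secret" "$mountpoint"
}

build_s3fs_cmd mybucket AKIAEXAMPLE s3cr3t /s3
```

Unmount later with `fusermount -u /s3` when you are done.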
Copy everything below this line into a file named 'patchfile' to use in step 3 above.
294a295
> curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0);
635c636
< cout << "downloading[path=" << path << "][fd=" << fd << "]" << endl;
---
> //cout << "downloading[path=" << path << "][fd=" << fd << "]" << endl;
698c699
< cout << "copying[path=" << path << "]" << endl;
---
> //cout << "copying[path=" << path << "]" << endl;
760c761
< cout << "uploading[path=" << path << "][fd=" << fd << "][size="<<st.st_size <<"]" << endl;
---
> //cout << "uploading[path=" << path << "][fd=" << fd << "][size="<<st.st_size <<"]" << endl;
769c770
< cout << "getattr[path=" << path << "]" << endl;
---
> //cout << "getattr[path=" << path << "]" << endl;
844c845
< cout << "readlink[path=" << path << "]" << endl;
---
> //cout << "readlink[path=" << path << "]" << endl;
895c896
< cout << "mknod[path="<< path << "][mode=" << mode << "]" << endl;
---
> //cout << "mknod[path="<< path << "][mode=" << mode << "]" << endl;
928c929
< cout << "mkdir[path=" << path << "][mode=" << mode << "]" << endl;
---
> //cout << "mkdir[path=" << path << "][mode=" << mode << "]" << endl;
961c962
< cout << "unlink[path=" << path << "]" << endl;
---
> //cout << "unlink[path=" << path << "]" << endl;
985c986
< cout << "unlink[path=" << path << "]" << endl;
---
> //cout << "unlink[path=" << path << "]" << endl;
1009c1010
< cout << "symlink[from=" << from << "][to=" << to << "]" << endl;
---
> //cout << "symlink[from=" << from << "][to=" << to << "]" << endl;
1027c1028
< cout << "rename[from=" << from << "][to=" << to << "]" << endl;
---
> //cout << "rename[from=" << from << "][to=" << to << "]" << endl;
1045c1046
< cout << "link[from=" << from << "][to=" << to << "]" << endl;
---
> //cout << "link[from=" << from << "][to=" << to << "]" << endl;
1051c1052
< cout << "chmod[path=" << path << "][mode=" << mode << "]" << endl;
---
> //cout << "chmod[path=" << path << "][mode=" << mode << "]" << endl;
1065c1066
< cout << "chown[path=" << path << "]" << endl;
---
> //cout << "chown[path=" << path << "]" << endl;
1087c1088
< cout << "truncate[path=" << path << "][size=" << size << "]" << endl;
---
> //cout << "truncate[path=" << path << "][size=" << size << "]" << endl;
1106c1107
< cout << "open[path=" << path << "][flags=" << fi->flags << "]" << endl;
---
> //cout << "open[path=" << path << "][flags=" << fi->flags << "]" << endl;
1157c1158
< cout << "flush[path=" << path << "][fd=" << fd << "]" << endl;
---
> //cout << "flush[path=" << path << "][fd=" << fd << "]" << endl;
1172c1173
< cout << "release[path=" << path << "][fd=" << fd << "]" << endl;
---
> //cout << "release[path=" << path << "][fd=" << fd << "]" << endl;
1464c1465
< static void* s3fs_init(struct fuse_conn_info *conn) {
---
> static void* s3fs_init() {
1495a1497,1500
> static void* s3fs_init(struct fuse_conn_info *conn) {
> return s3fs_init();
> }
>
1520c1525
< cout << "utimens[path=" << path << "][mtime=" << str(ts[1].tv_sec) << "]" << endl;
---
> //cout << "utimens[path=" << path << "][mtime=" << str(ts[1].tv_sec) << "]" << endl;
1529a1535,1545
> s3fs_utime(const char *path, struct utimbuf* times) {
> //cout << "utimens[path=" << path << "][mtime=" << str(times->modtime) << "]" << endl;
> headers_t meta;
> VERIFY(get_headers(path, meta));
> meta["x-amz-meta-mtime"] = str(times->modtime);
> meta["x-amz-copy-source"] = urlEncode("/" + bucket + path);
> meta["x-amz-metadata-directive"] = "REPLACE";
> return put_headers(path, meta);
> }
>
> static int
1646c1662,1663
< s3fs_oper.utimens = s3fs_utimens;
---
> // LT s3fs_oper.utimens = s3fs_utimens;
> s3fs_oper.utime = s3fs_utime;
1648c1665,1666
< return fuse_main(custom_args.argc, custom_args.argv, &s3fs_oper, NULL);
---
> // LT return fuse_main(custom_args.argc, custom_args.argv, &s3fs_oper, NULL);
> return fuse_main(custom_args.argc, custom_args.argv, &s3fs_oper);
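For reference, the patchfile above is in plain "normal" diff format (`NcN` / `<` / `---` / `>` hunks), which `patch` applies directly. Here is a self-contained toy demo (with hypothetical file names, not the real s3fs source) showing the mechanics: a one-line hunk that comments out a debug cout, exactly the way the real patch does.

```shell
# Work in a throwaway directory so nothing real is touched.
workdir=$(mktemp -d)
cd "$workdir"

# A tiny stand-in source file.
printf 'line one\ncout << "debug";\nline three\n' > demo.cpp

# A normal-format diff hunk: replace line 2 with a commented-out copy.
cat > demo.patch <<'EOF'
2c2
< cout << "debug";
---
> //cout << "debug";
EOF

patch demo.cpp < demo.patch
grep '^//cout' demo.cpp
```

Tip: `patch --dry-run` reports whether a patch would apply cleanly without modifying the file.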
Message 1 of 8
2009-06-15 12:47 PM
Re: success in porting s3fs (Amazon S3 filesystem)
Wow!
I've been looking into an S3 solution for a while. 🙂
This looks very promising...
Anyone tried this yet?
Wish
Message 2 of 8
2009-06-15 02:36 PM
Re: success in porting s3fs (Amazon S3 filesystem)
It should also be noted that the original author of s3fs has improved on the original open-source project and now offers it as a commercial, supported product: http://www.subcloud.com/
Message 3 of 8
2010-07-01 04:02 AM
Re: success in porting s3fs (Amazon S3 filesystem)
I'm looking into setting up this kind of S3 solution with our ReadyNAS Duo, and this looks great.
Has anyone else tried this solution, and if so, how has it worked for you?
thrane - does the solution still work (given you posted this a year ago)?
Message 4 of 8
2010-07-01 06:50 AM
Re: success in porting s3fs (Amazon S3 filesystem)
Dunno if this will work for the Duo or NV+; they have SPARC-based CPUs. The OP indicated he was using a Pro, which is x86-based.
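If you're not sure which architecture your unit has, you can check from an SSH session; the output varies per machine (e.g. an x86 unit typically reports something like i686 or x86_64, a SPARC unit reports sparc).

```shell
# Print the machine hardware name for this host.
uname -m
```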
Message 5 of 8
2010-07-20 04:04 PM
Re: success in porting s3fs (Amazon S3 filesystem)
Anyone else make this work yet? Looks very promising.
Message 6 of 8
2011-07-29 06:15 AM
Re: success in porting s3fs (Amazon S3 filesystem)
Any updates here? Using S3 for a backup solution is something I have been looking for, please continue the work!
dan
Message 7 of 8
2011-08-01 12:19 PM
Re: success in porting s3fs (Amazon S3 filesystem)
First, sign up for a server account at JungleDisk.com so you can download the software needed for this functionality. It is currently $5/mo, plus the storage fees Amazon charges.
This tutorial is written for those of you who are not familiar with *nix systems, like me.
1. If you haven’t already, you must enable SSH root access on your ReadyNAS, see the netgear site if you aren’t sure how to do so.
2. Use Putty or your ssh client, and connect to the readynas
3. Username is root and password is the same as your readynas admin password
4. Download the package by typing:
wget https://downloads.jungledisk.com/jungledisk/junglediskserver-316-0.i386.deb --no-check-certificate
5. Install the package by typing:
sudo dpkg -i junglediskserver-316-0.i386.deb
6. Create the license file using VI editor (google VI editor if you are not a nix guru) by typing
vi /etc/jungledisk/junglediskserver-license.xml
The editor will appear next; paste the following text (replacing the XXXs with your actual license key):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<LicenseConfig>
<licenseKey>XXXXXXXXXXXXX</licenseKey>
<proxyServer>
<enabled>0</enabled>
<proxyServer></proxyServer>
<userName></userName>
<password></password>
</proxyServer>
</LicenseConfig>
</configuration>
7. Next, press ESC to get back to command mode, type ":wq" and press Enter; the file should now be saved and you will be back at the root prompt.
8. Now let’s start the service by typing:
/etc/init.d/junglediskserver start
9. Next download the JungleDisk client portion of the software, and you should be good to go from there…
Hope this helps
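If you'd rather skip vi entirely, step 6 can be done with a shell heredoc instead. This is a sketch: it defaults to a local file name so you can try it safely anywhere; on the NAS you would set LICFILE to the real path from the tutorial (/etc/jungledisk/junglediskserver-license.xml) and run it as root. The XXXX license key is a placeholder, as in the post.

```shell
# Write the JungleDisk license file without an editor.
# LICFILE defaults to the current directory for safe testing;
# override it with the real /etc/jungledisk path on the NAS.
licfile="${LICFILE:-./junglediskserver-license.xml}"
cat > "$licfile" <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <LicenseConfig>
    <licenseKey>XXXXXXXXXXXXX</licenseKey>
    <proxyServer>
      <enabled>0</enabled>
      <proxyServer></proxyServer>
      <userName></userName>
      <password></password>
    </proxyServer>
  </LicenseConfig>
</configuration>
EOF
```

The quoted 'EOF' delimiter keeps the shell from expanding anything inside the XML, so it is written verbatim.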
Message 8 of 8