Forum Discussion
rengler
Apr 03, 2018 · Tutor
Excessive bandwidth using S3 backup feature
I have a ReadyNAS 316 (6.9.3) and have been using the built-in S3 backup utility for three of the shares on the NAS. These shares total 3TB of data, and all of that data is being successfully replicated up to S3 (one-way up only). I only change about 5GB each month by adding photos to one of the shares.
I've noticed from my AWS billing that the NAS appears to be transferring about 3TB of data each month, drastically raising the cost of using this service (e.g. storage is $10 or so, but bandwidth alone is $50). I don't think the tool should be uploading everything each month, right?
Am I missing something in how this tool works? I'd expect bandwidth to be used only when I add new files, and only roughly in proportion to the amount of data in those files.
Thanks for any help or advice...
17 Replies
Replies have been turned off for this discussion
- OOM-9 · NETGEAR Expert
The tool runs like a syncing service and performs a check every 30 seconds. Those checks exist to make sure we only back up the difference, and they should not consume as much bandwidth as you're describing.
We would like to take a look at your unit to see if there are any anomalies in your S3 sync. Can you enable Secure Diagnostics Mode and send me a message with the 5-digit number?
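For anyone wondering what that check involves, here is a minimal Python/boto3 sketch of a list-and-diff sync pass (my illustration only, not our actual implementation; the bucket, prefix, and local_index structure are placeholders):

import time
import boto3

s3 = boto3.client("s3")

def list_remote_objects(bucket, prefix):
    # Page through every object under the prefix; each page of up to
    # 1000 keys is a separate ListObjectsV2 response from S3.
    remote = {}
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            remote[obj["Key"]] = obj["Size"]
    return remote

def sync_check(bucket, prefix, local_index):
    # local_index: {key: (size_bytes, local_path)} built from the share.
    # Upload only files that are missing from S3 or differ in size.
    remote = list_remote_objects(bucket, prefix)
    for key, (size, path) in local_index.items():
        if remote.get(key) != size:
            s3.upload_file(path, bucket, key)

# The 30-second loop would then look roughly like:
# while True:
#     sync_check("my-bucket", "share/", build_local_index())
#     time.sleep(30)

Note that even when nothing has changed, each pass still downloads the full bucket listing, so the check itself has a bandwidth cost.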
- olicuk · Guide
Just wondering, did you get any further feedback on this issue?
I've got a ReadyNAS 104 (populated with 3x 4TB drives), which until yesterday was running v6.9.2 firmware, and is now at 6.9.3. I've been using the S3 Cloud feature since last December to back up a part of my NAS file system to S3... basically my photo share. Whilst I'd expect the S3 storage costs to go up over time, what I've seen alongside this is a significant escalation in data transfer "out" costs over the last 2 months.
The following shows the daily costs for data transfer out, which have started being accrued:
[Chart: daily AWS data-transfer-out charges]
The gap over the last couple of days is because the NAS was off, and the gap at the start of the month aligns with the free 15GB allowance, hence no excess charges on those days.
I've pulled the following summary together, showing:
- volume of data stored and file count, pulled from AWS weekly logs. Note the 495GB currently stored matches the volume of data reported by the NAS for the share being backed up, so I'm happy in that regard
- the last 5 columns are from the AWS invoices received (and the pending one for the current month). I've included the Put/Get request counts in case they're of use; they've certainly gone up compared with the previous two months, and the Put (incl. Copy, Post, List) count looks correlated with data transfer out. But it's the 67GB of data out in March, and the 37.1GB so far in April, that concern me, as nothing should be accessing or downloading any of the data from this S3 bucket.
[Table: weekly storage volume/file counts and monthly invoice figures]
The S3 cloud settings are configured as follows:
Sync Direction: Upload local storage changes only
Upload Chunk Size (MB): 64
Storage Class: Standard - Infrequent Access
Upload Speed (KB/s): 200 (though I've just now changed it to 1024)
Download Speed (KB/s): 200
Server-Side Encryption: Yes
On the AWS side, my bucket is configured with default encryption of AES-256.
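For reference, the boto3 equivalent of that default-encryption setting would be roughly this (a sketch; the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="my-bucket-name",  # placeholder for the real bucket
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)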
I enabled some logging in AWS yesterday, and in the couple of log files I've looked at, there is a new GET request being made every 3-4 seconds, of the form:
REST.GET.BUCKET - "GET /?continuation-token=<token-str>&list-type=2&max-keys=1000&prefix=<folder>%2F HTTP/1.1" 200 - 261092 - 61 60 "-" "DORAYAKI/1.0" -
or just:
REST.GET.LOCATION - "GET /<my-bucket-name>?location HTTP/1.1" 200 - 137 - 3 - "-" "DORAYAKI/1.0" -
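For reference, those REST.GET.BUCKET entries map to S3's ListObjectsV2 API (list-type=2, max-keys=1000, paging via continuation-token). A minimal boto3 sketch of the equivalent call sequence, with placeholder bucket and prefix:

import boto3

s3 = boto3.client("s3")

# One full pass over the bucket listing, up to 1000 keys per response.
# The 261092-byte responses in the log are single listing pages, and
# each page counts toward data transfer out.
kwargs = {"Bucket": "my-bucket-name", "Prefix": "folder/", "MaxKeys": 1000}
total_keys = 0
while True:
    resp = s3.list_objects_v2(**kwargs)
    total_keys += resp.get("KeyCount", 0)
    if not resp.get("IsTruncated"):
        break
    kwargs["ContinuationToken"] = resp["NextContinuationToken"]
print(total_keys, "objects listed in this pass")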
Any help would be much appreciated, thank you.
- OOM-9 · NETGEAR Expert
I checked with our Cloud service dev, and he was able to get the information he needed from olicuk's extra details. We ran some additional testing on the calls and intervals in this case.
The listing calls are small for small datasets but grow significantly with larger ones. When a sync check runs every 30 seconds against data of this size, the information returned is in the hundreds of KB per check. We are looking into a configurable option so you can select the check interval in services where bandwidth matters more.
Let me verify a possible workaround for the interval option with the team; there are some cases where that setting gets overwritten.
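As a rough illustration of that scaling (my own model with assumed numbers, using the ~261 KB listing pages visible in olicuk's logs): every additional 1000 objects adds another listing page to each check.

def listing_bytes_per_check(object_count, bytes_per_page=261092, keys_per_page=1000):
    # Assumed model: one ~261 KB ListObjectsV2 page per 1000 keys.
    pages = -(-object_count // keys_per_page)  # ceiling division
    return pages * bytes_per_page

for n in (1_000, 10_000, 100_000):
    mb = listing_bytes_per_check(n) / 1e6
    print(f"{n:>7} objects -> ~{mb:.1f} MB per 30-second check")

So the growth is roughly linear in object count, but it adds up quickly when multiplied by 2880 checks a day.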
- olicuk · Guide
Thank you OOM-9. So the NAS does a sync check every 30 seconds; that's some workload. I guess the 70GB/mo equates to about 2.4GB/day, or roughly 100MB/hour, or 1.65MB/min, so that's potentially two checks per minute returning around 800KB each? For my needs a sync check every 30 minutes, or even every 2 hours, would suffice, and even once per 24h would be OK. That would reduce bandwidth and request costs enough to almost certainly stay within the free tier, unless I ever need to pull down the archived data. Though I also have a concern over what happens if I increase my S3 storage usage from the current 500GB to archive all 3.6TB of data on my NAS: what happens to the calls then, given that they grow with the size of the dataset?
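Sanity-checking that arithmetic in Python (all figures assumed from the numbers above):

GB = 1e9
monthly_out = 70 * GB                      # ~70 GB/month data transfer out
per_check = monthly_out / (30 * 24 * 120)  # 120 thirty-second checks/hour
print(f"~{per_check / 1e3:.0f} KB per check")  # ~810 KB

# Monthly listing traffic at other check intervals (checks per day):
for interval, per_day in [("30s", 2880), ("30min", 48), ("2h", 12), ("24h", 1)]:
    gb_month = per_day * 30 * per_check / GB
    print(f"interval {interval}: ~{gb_month:.2f} GB/month")

At a 30-minute interval the listing traffic would drop to roughly 1.2GB/month, comfortably inside the free allowance.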
This has got me looking far more at the full costs of cloud storage and the various options on the ReadyNAS, about which I'll create a separate thread.