rengler
Apr 03, 2018Tutor
Excessive bandwidth using S3 backup feature
I have a ReadyNAS 316 (6.9.3) and have been using the built-in S3 backup utility for three of the shares on the NAS. These shares total 3TB of data, and all of that data is being successfully replica...
olicuk
Apr 12, 2018Guide
Just wondering, did you get any further feedback on this issue?
I've got a ReadyNAS 104 (populated with 3x 4TB drives), which until yesterday was running v6.9.2 firmware, and is now at 6.9.3. I've been using the S3 Cloud feature since last December to back up a part of my NAS file system to S3... basically my photo share. Whilst I'd expect the S3 storage costs to go up over time, what I've seen alongside this is a significant escalation in data transfer "out" costs over the last 2 months.
The following shows the daily costs for data transfer out which have started being accrued:
The gap in the last couple of days is because the NAS was off, and the start of the month aligns with the free 15GB allowance, hence no excess charges for those days.
I've pulled the following summary together, showing:
- volume of data stored and file count, pulled from AWS weekly logs. Note the 495GB currently stored matches the volume of data reported by the NAS for the share being backed up, so I'm happy in that regard.
- the last 5 columns are from the AWS invoices received (and pending for the current month). I've included the Put/Get request counts in case they're of use; they've certainly gone up compared with the previous two months, and the Put (incl. Copy, Post, List) count looks related to data transfer out. But it's the 67GB of data out in March, and 37.1GB so far in April, that concerns me, as nothing should be accessing or downloading any of the data from this S3 bucket.
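For scale, those transfer-out charges can be sanity-checked with a little arithmetic. This is only a sketch: the $0.09/GB rate is an illustrative figure (actual AWS data-transfer-out pricing varies by region and usage tier), and the 15GB free allowance is the one mentioned above.

```python
# Rough AWS data-transfer-out cost estimate (a sketch; the $0.09/GB rate
# is illustrative only -- actual pricing varies by region and tier).
RATE_PER_GB = 0.09
FREE_TIER_GB = 15  # the free monthly allowance mentioned above


def transfer_out_cost(gb_out, rate=RATE_PER_GB, free_gb=FREE_TIER_GB):
    """Cost of a month's data transfer out, after the free allowance."""
    return max(gb_out - free_gb, 0) * rate


print(f"March (~67 GB out): ${transfer_out_cost(67):.2f}")
print(f"April so far (~37.1 GB out): ${transfer_out_cost(37.1):.2f}")
```

Small in absolute terms, but for a backup target that nothing should be reading from, any transfer-out charge at all is the real red flag.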
The S3 cloud settings are configured as follows:
Sync Direction: Upload local storage changes only
Upload Chunk Size (MB): 64
Storage Class: Standard - Infrequent Access
Upload Speed (KB/s): 200 (though I've just now changed it to 1024)
Download Speed (KB/s): 200
Server-Side Encryption: Yes
On the AWS side, my bucket is configured with default encryption of AES-256.
I've enabled some logging in AWS since yesterday, and in the couple of log files I've looked at, there is a new Get request being made every 3-4 seconds, of the form:
REST.GET.BUCKET - "GET /?continuation-token=<token-str>&list-type=2&max-keys=1000&prefix=<folder>%2F HTTP/1.1" 200 - 261092 - 61 60 "-" "DORAYAKI/1.0" -
or just:
REST.GET.LOCATION - "GET /<my-bucket-name>?location HTTP/1.1" 200 - 137 - 3 - "-" "DORAYAKI/1.0" -
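Log lines like these can be totalled up to see how much of the "data out" each operation accounts for. Below is a sketch that assumes lines trimmed to start at the operation field, exactly as in the excerpts above (a full S3 server access log line has extra leading fields: bucket owner, bucket, time, remote IP, requester, request ID); the sample strings are stand-ins for the real token and bucket name.

```python
import re
from collections import defaultdict

# Matches: operation, key, "request-URI", status, error code, bytes sent.
# Assumes lines trimmed to begin at the operation field, as excerpted above.
LINE_RE = re.compile(
    r'^(?P<op>\S+) \S+ "(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'(?P<bytes_sent>\d+|-)'
)


def bytes_out_per_operation(lines):
    """Sum the 'bytes sent' field of S3 access-log lines per operation."""
    totals = defaultdict(int)
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # skip lines that don't fit the expected shape
        sent = m.group("bytes_sent")
        if sent != "-":
            totals[m.group("op")] += int(sent)
    return dict(totals)


# Sample lines modelled on the excerpts above (token/bucket are stand-ins).
sample = [
    'REST.GET.BUCKET - "GET /?continuation-token=abc&list-type=2'
    '&max-keys=1000&prefix=photos%2F HTTP/1.1" 200 - 261092 - 61 60 '
    '"-" "DORAYAKI/1.0" -',
    'REST.GET.LOCATION - "GET /my-bucket?location HTTP/1.1" 200 - 137 '
    '- 3 - "-" "DORAYAKI/1.0" -',
]
print(bytes_out_per_operation(sample))
```

Run over a full day of logs, a breakdown like this should show whether the REST.GET.BUCKET (list) traffic alone explains the transfer-out totals on the invoice.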
Any help would be much appreciated, thank you.
OOM-9
Apr 16, 2018NETGEAR Expert
I checked with our Cloud service dev, and it looks like he was able to get the information he needed from Olicuk's extra details. We also ran some additional testing to check the calls/intervals in this case.
The list calls are small for small datasets, but significantly larger for large ones. When running a sync check every 30 seconds against a dataset of this size, the returned information is in the hundreds of KB per check. We are looking into a configurable option so you can select the sync-check interval in services that need to be more bandwidth-aware.
Let me verify with the team whether there is a possible workaround for the interval option; in some cases there is a setting that gets overwritten.
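The 30-second sync check described above lines up with the charges if each check re-lists the bucket. A sketch of the arithmetic, under assumptions not confirmed by NETGEAR: each check lists every object, S3's ListObjectsV2 returns at most 1,000 keys per page, and a full page response is roughly the 261,092 bytes seen in Olicuk's access log.

```python
import math


def monthly_list_bytes(object_count, page_bytes=261_092,
                       sync_interval_s=30, days=30):
    """Estimate list-call traffic for a periodic full-bucket sync check.

    Assumptions (a sketch, not confirmed behaviour): every check
    re-lists the whole bucket, ListObjectsV2 returns at most 1,000
    keys per page, and each page response is ~261,092 bytes, as seen
    in the access log above.
    """
    pages_per_sync = math.ceil(object_count / 1000)
    syncs = days * 24 * 3600 // sync_interval_s  # 86,400 checks/month
    return pages_per_sync * page_bytes * syncs


# For illustration: ~3,000 objects -> 3 pages per check -> ~68 GB/month,
# the same order of magnitude as the 67GB charged in March.
print(monthly_list_bytes(3_000) / 1e9, "GB")
```

In other words, with tens of thousands of files the list responses alone can dwarf the actual backup traffic, which is why making the interval configurable (or only listing changed prefixes) matters here.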