Compressing your data in AWS S3.

Why would you want to compress your data? Let's take a look at a few examples.

1. AWS data transfer fees run $0.06-0.10/GB. Say you have a 1 TB training data file that you transfer to your compute layer once a month. With SmashByte, one API call can compress it to 250 GB.

Left uncompressed, that's $45-75/mo in savings you're leaving on the table.

2. AWS data storage fees run $0.03/GB per month. With SmashByte, you can store that same 1 TB file compressed at rest and stream it back out with the streaming API.

Stored uncompressed, you're missing out on another $22.50/mo in savings. (The math behind both numbers is sketched right after this list.)
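If you want to sanity-check those numbers, here's a minimal Python sketch of the arithmetic. The 4:1 compression ratio and the one-transfer-per-month cadence are assumptions taken from the example above; plug in your own file sizes and rates.

```python
# Back-of-the-envelope savings math for the example above.
# Assumptions: 1 TB file, ~4:1 compression, one transfer per month,
# and the quoted AWS rates ($0.06-0.10/GB transfer, $0.03/GB-month storage).

file_size_gb = 1000      # 1 TB training file
compressed_gb = 250      # after compression (~4:1)
saved_gb = file_size_gb - compressed_gb

transfer_low, transfer_high = 0.06, 0.10   # $/GB data transfer out
storage_rate = 0.03                        # $/GB-month at rest

print(f"Transfer savings: ${saved_gb * transfer_low:.2f}-${saved_gb * transfer_high:.2f}/mo")
print(f"Storage savings:  ${saved_gb * storage_rate:.2f}/mo")
```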

Here's how to get started...

Step 1. Identify large file(s) or folder(s) in your S3 interface.

For this walkthrough we'll compress a single file, but the SmashByte API accepts multiple file keys; you can compress up to 1,000 files in one call.
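If you'd rather find candidates programmatically than eyeball the console, here's a minimal boto3 sketch that lists objects over 1 GiB in a bucket. The bucket name is a placeholder for illustration.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-training-data"   # placeholder bucket name

# Walk the bucket and flag anything over 1 GiB as a compression candidate.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        if obj["Size"] > 1024 ** 3:
            print(f'{obj["Key"]}: {obj["Size"] / 1024 ** 3:.1f} GiB')
```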

Step 2. Copy the file key (large-file.json) into the API.

Pretty simple. Just make sure your IAM role is set up with S3 read and write access to the bucket. Email hello@smashbyte.com if you have any questions.
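To give a sense of what that call can look like, here's a hedged sketch using Python's requests library. The endpoint URL, field names, and auth header are placeholders made up for illustration, not the documented SmashByte API; your onboarding email has the real request shape.

```python
import requests

# NOTE: the endpoint, field names, and auth scheme below are illustrative
# placeholders, not the documented SmashByte API.
resp = requests.post(
    "https://api.smashbyte.com/v1/compress",      # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "bucket": "my-training-data",             # your S3 bucket
        "keys": ["large-file.json"],              # up to 1,000 keys per call
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```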

Step 3. See results.

Refresh your S3 interface and, depending on the size of your input files, you should start to see the new compressed files appear. Right now we don't delete your source data, but you can archive it easily in the next step.
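If you'd rather confirm the result from code instead of the console, a quick boto3 head_object comparison works. The compressed object's key name below is a guess; use whatever name actually shows up in your bucket.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-training-data"               # placeholder bucket name
original_key = "large-file.json"
compressed_key = "large-file.json.sz"     # guessed output name; adjust to what you see

# Compare object sizes before and after compression.
original = s3.head_object(Bucket=bucket, Key=original_key)
compressed = s3.head_object(Bucket=bucket, Key=compressed_key)

ratio = original["ContentLength"] / compressed["ContentLength"]
print(f"{original['ContentLength']} -> {compressed['ContentLength']} bytes ({ratio:.1f}x smaller)")
```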

Interested in learning more? We'll shoot you an email with an API key and some ideas on how to start cutting your AWS S3 spend.