Botocore is awful, so I wrote a better Python client for AWS S3

Kyle Mistele - Dec 22 '20 - - Dev Community

If you've ever been unfortunate enough to have had to work with botocore, the low-level Python API for Amazon Web Services, you know that it's awful. There are dozens of ways to accomplish any given task, and the differences between them are unclear at best. I recently found myself working with botocore while building some S3 functionality into CodeLighthouse, and I got really frustrated with it, really quickly.

AWS S3 (Simple Storage Service) is not complicated - it's object storage. You can GET, PUT, DELETE, and COPY objects, plus a handful of other operations. Simple, right? Yet for some reason, if you were to print botocore's documentation for the S3 service, you'd come out to about 525 printed pages.

I chose to use the Object API, which is the highest-level API provided by the S3 resource in botocore, and it was still a headache. For example, the Object API doesn't throw different types of exceptions - it throws a single exception type with numerous properties that you have to programmatically analyze to determine what actually went wrong.
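For context, here's roughly what that pattern looks like when you use boto3/botocore directly - a single catch-all ClientError whose response dict you have to inspect yourself. This is a minimal sketch; the bucket and key names are made up, and the error codes shown are just the common ones:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.resource('s3')

try:
    # fetch an object through the Object API
    response = s3.Object('my-bucket', 'my-key').get()
    body = response['Body'].read()
except ClientError as e:
    # one exception type for everything - you have to dig into the
    # response dict to find out what actually went wrong
    error_code = e.response['Error']['Code']
    if error_code == 'NoSuchKey':
        pass  # the object doesn't exist
    elif error_code == 'NoSuchBucket':
        pass  # the bucket doesn't exist
    else:
        raise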

There are a few open-source packages out there already, but I found that most of them left a lot to be desired - some had you writing XML, and others were just more complicated than they needed to be.

To save myself from going mad trying to decipher the docs, I wrote a custom high-level driver that consumes the low-level botocore API to perform the most common S3 operations.


To save other developers from the same fate I narrowly avoided, I open-sourced the code and published it on PyPI so you can easily use it in all of your projects.

Let's Get Started

Installing my custom AWS S3 Client

Since my client code is hosted on PyPI, it's super easy to install:

pip install s3-bucket

Configuring the S3 Client

To access your S3 buckets, you're going to need an AWS access key ID and an AWS secret access key. I wrote a method that you can pass these to in order to configure the client so that you can use your buckets. I strongly suggest not hard-coding these values in your code, since doing so can create security vulnerabilities and is bad practice. Instead, I recommend storing them in environment variables and using the os module to fetch them:

import s3_bucket as S3
import os

# get your key data from environment variables
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')

# initialize the package
S3.Bucket.prepare(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

Using the S3 Client

I designed the S3 client's API to be logically similar to how AWS structures S3 buckets. Instead of messing around with botocore's Client, Resource, Session, and Object APIs, there is one simple API: the Bucket API.

The Bucket API

The Bucket API is simple and provides most of the basic methods you'd want to use for an S3 bucket. Once you've initialized the S3 client with the keys as described in the previous section, you can initialize a Bucket object by passing it a bucket name:

bucket = S3.Bucket('your bucket name')

# example
bucket = S3.Bucket('my-website-data')

Once you've done that, it's smooth sailing - you can use any of the following methods:

bucket.get(key) - returns a two-tuple containing the bytes of the object and a dict containing the object's metadata.
bucket.put(key, data, metadata=metadata) - uploads data as an object with key as the object's key. data can be either a str or a bytes type. metadata is an optional argument that should be a dict of metadata to store with the object.
bucket.delete(key) - deletes the object in the bucket specified by key.
bucket.upload_file(local_filepath, key) - uploads the file specified by local_filepath to the bucket with key as the object's key.
bucket.download_file(key, local_filepath) - downloads the object specified by key from the bucket and stores it in the local file local_filepath.
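To make those concrete, here's a minimal put/get round trip with metadata, using the bucket object initialized above (the key name and metadata values are made up for illustration):

# store a string along with some optional metadata
bucket.put('users/42/profile', '{"name": "Ada"}', metadata={'content-type': 'application/json'})

# read it back - get() returns (bytes, metadata dict)
data, metadata = bucket.get('users/42/profile')
print(data.decode('utf-8'))
print(metadata)

# remove the object when you're done with it
bucket.delete('users/42/profile')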

Custom Exceptions

As I mentioned earlier, the way botocore raises exceptions is somewhat arcane. Instead of raising different exception types to indicate different problems, it throws one exception type containing properties that you must inspect to determine what went wrong. It's really obtuse, and a bad design pattern.

Instead of relying on your client code to decipher botocore's exceptions, I wrote custom exception classes that you can use to handle most common types of S3 errors.

BucketException - the superclass for all other Bucket exceptions; can be used to generically catch exceptions raised by the API. Properties: bucket, message.
NoSuchBucket - raised if you try to access a bucket that does not exist. Properties: bucket, key, message.
NoSuchKey - raised if you try to access an object that does not exist within an existing bucket. Properties: bucket, key, message.
BucketAccessDenied - raised when AWS denies access to the bucket you tried to access; it may not exist, or you may not have permission to access it. Properties: bucket, message.
UnknownBucketException - raised when botocore throws an exception that this client was not programmed to handle. Properties: bucket, error_code, error_message.

To use these exceptions, you can do the following:

try:
    bucket = S3.Bucket('my-bucket-name') 
    data, metadata = bucket.get('some key')
except S3.Exceptions.NoSuchBucket as e:
    # some error handling here
    pass
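Since BucketException is the superclass of the others, you can also layer your handlers: catch the specific cases you care about first, then fall back to the superclass for everything else. A sketch (the handler bodies are placeholders):

try:
    bucket = S3.Bucket('my-bucket-name')
    data, metadata = bucket.get('some key')
except S3.Exceptions.NoSuchKey as e:
    # the object is missing - e.bucket and e.key identify it
    print(f'no object {e.key} in bucket {e.bucket}')
except S3.Exceptions.BucketException as e:
    # anything else the client raises, including UnknownBucketException
    print(f'S3 error for bucket {e.bucket}: {e.message}')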

Examples

Below are a couple of examples of common use cases for the S3 client.

Uploading and downloading files

This example shows how to upload and download files to and from your S3 bucket.

import s3_bucket as S3
import os

# get your key data from environment variables
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')

# initialize the package
S3.Bucket.prepare(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

# initialize a bucket
my_bucket = S3.Bucket('my-bucket')

# UPLOAD A FILE
my_bucket.upload_file('/tmp/file_to_upload.txt', 'myfile.txt')

# DOWNLOAD A FILE
my_bucket.download_file('myfile.txt', '/tmp/destination_filename.txt')

Storing and retrieving large blobs of text

The reason that I originally built this client was to handle storing and retrieving large blobs of JSON data that were way too big to store in my database. The example below shows how to do that.

import s3_bucket as S3
import os

# get your key data from environment variables
AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')

# initialize the package
S3.Bucket.prepare(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

# initialize a bucket
my_bucket = S3.Bucket('my-bucket')

# some JSON string
my_json_str = '{"a": 1, "b": 2}'  # an example JSON string

my_bucket.put('json_data_1', my_json_str)

data, metadata = my_bucket.get('json_data_1')

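Per the Bucket API table above, get() returns the object's bytes, so turning the retrieved blob back into a Python object is on you. A small follow-on sketch:

import json

# data is bytes - decode it, then parse the JSON back into a dict
parsed = json.loads(data.decode('utf-8'))
print(parsed['a'])  # 1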

Conclusion

I hope that you find this as useful as I did! Let me know what you think in the comments below.

If you're writing code for cloud applications, you need to know when things go wrong. I built CodeLighthouse to send real-time application error notifications straight to developers so that you can find and fix errors faster. Get started for free at codelighthouse.io today!
