In my last article, I talked about what an API gateway is, and some things you should consider when choosing the right gateway solution for your organization. Today, I thought I'd dive a little deeper into one of those solutions, AWS's API Gateway.
Why API Gateway?
I've used API Gateway on a few projects, and it's a great option, especially if you're going to be using other AWS services. It gives you a single point of entry for your API and fine-grained control over your endpoints: you can define rules for which endpoints point to which services and which users have access to which resources. API Gateway can handle authentication using IAM, Cognito user pools, or custom Lambda authorizers.
The Drawbacks
Despite all of its great features, there are a few things about API Gateway that you need to be aware of. One of its biggest drawbacks is also one of its biggest benefits: since it's a fully managed service, you don't have to worry about managing the underlying hardware, but you also don't have the ability to do any customization or performance tuning of that infrastructure.
There are also some quotas and limits that Amazon imposes on API Gateway that you need to be aware of. For example, you're limited to 600 APIs per account and 300 routes per API, and there's a default throttle of 10,000 requests per second. Some of these limits can be raised on request, but others can't. Most of them are fairly high and probably won't be an issue for most organizations, but they're something to keep in mind. Check out the API Gateway docs for the full list of limitations.
Setting Up API Gateway
With that out of the way, let's take a look at how to set up API Gateway. For this example, I've got two Lambdas set up. They're just simple functions that return some static data, but what they do isn't really important for our purposes. We just need something to call from our gateway.
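If you want something concrete to follow along with, here's a minimal sketch of what one of those functions might look like. The function name and the static data are just placeholders; any Lambda that returns a response will do.

```python
import json

def handler(event, context):
    """Hypothetical 'users' Lambda that returns a static list of users."""
    users = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
    # With the HTTP API's Lambda proxy integration (payload format 2.0) you can
    # return a bare JSON-serializable value, but an explicit response object
    # keeps the status code and headers obvious.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(users),
    }
```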
To get started we first have to choose which type of API we're going to create. There are currently four types to choose from: HTTP API, WebSocket API, REST API, and REST API Private.
So, what's the difference? We'll start with the easy ones. The WebSocket API obviously stands out on its own: it's the one to use when you need a persistent, two-way connection to your backend services, for something like a real-time chat app or a dashboard that updates continuously. The only difference between the REST API and the REST API Private is that the latter is only accessible from within a VPC.
That leaves us with the HTTP API and the REST API, which are the two that most organizations will be choosing between. The descriptions on the selection screen aren't very useful, but luckily the docs have a page that breaks down the differences between the two. The TL;DR is that the HTTP API is the newer service. It's optimized for low-latency integrations with AWS services, and it supports OAuth 2.0 and OIDC authorization, built-in CORS configuration, and automatic deployments. The REST API, on the other hand, is the older service, and while it doesn't support some of those newer features, it supports quite a few features the HTTP API doesn't yet, such as API keys, usage plans, and response caching. Take a look at the docs for full details. For this demo, we'll be using the HTTP API.
To create the API, you can choose to import an OpenAPI 3 definition, if you have one, or you can build it from scratch, which is what we'll do.
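We'll do everything through the console here, but the same choice exists if you're scripting the setup. Here's a rough boto3 sketch; the API name is a placeholder, and the import path assumes you have an OpenAPI file on disk.

```python
import boto3

apigw = boto3.client("apigatewayv2")

# Build from scratch: create an empty HTTP API to attach integrations and routes to.
api = apigw.create_api(Name="demo-http-api", ProtocolType="HTTP")
api_id = api["ApiId"]

# Or, if you already have an OpenAPI 3 definition, import it instead:
# with open("openapi.json") as f:
#     api = apigw.import_api(Body=f.read())
```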
Step 1: Add Integrations
In the first step, we'll add the integrations with our Lambda functions. You can also integrate with any HTTP endpoint, so we'll throw in an extra integration with a test API just for fun.
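For reference, here's roughly what those integrations look like via boto3. The Lambda ARNs, the ApiId, and the photos URL are all placeholders for whatever backends you're pointing at.

```python
import boto3

apigw = boto3.client("apigatewayv2")
api_id = "a1b2c3d4"  # placeholder: the ApiId returned by create_api

# Lambda proxy integrations for the two functions (ARNs are placeholders).
users_integration = apigw.create_integration(
    ApiId=api_id,
    IntegrationType="AWS_PROXY",
    IntegrationUri="arn:aws:lambda:us-east-1:123456789012:function:users",
    PayloadFormatVersion="2.0",
)
posts_integration = apigw.create_integration(
    ApiId=api_id,
    IntegrationType="AWS_PROXY",
    IntegrationUri="arn:aws:lambda:us-east-1:123456789012:function:posts",
    PayloadFormatVersion="2.0",
)

# A plain HTTP proxy integration for the extra test endpoint.
photos_integration = apigw.create_integration(
    ApiId=api_id,
    IntegrationType="HTTP_PROXY",
    IntegrationUri="https://example.com/photos",  # placeholder test API URL
    IntegrationMethod="GET",
    PayloadFormatVersion="1.0",
)

# Note: outside the console you also need to grant API Gateway permission to
# invoke the Lambdas (lambda add-permission); the console wizard does this for you.
```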
Step 2: Configure Routes
In step two, we'll add our routes. We'll create four routes in this case: a GET route to hit our users lambda, GET and POST routes to hit our posts lambda, and a GET route to hit the photos HTTP endpoint. You can create as many routes as you need, and each one can hit whichever integration is appropriate. In some cases, you may have one lambda integration for each route, or you may have several routes that all point to the same lambda. You can also specify which methods each endpoint supports. If this was a real application with the normal CRUD functionalities, we might create routes for the GET, POST, PUT and DELETE methods for each endpoint, or we could delegate the responsibility for determining the response to the lambda using an ANY method.
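Here's a sketch of those same routes created through boto3. The ApiId and integration IDs are placeholders standing in for the values returned in the previous step.

```python
import boto3

apigw = boto3.client("apigatewayv2")
api_id = "a1b2c3d4"  # placeholder ApiId

# Placeholder integration IDs returned by create_integration in step 1.
users_integration_id = "int-users"
posts_integration_id = "int-posts"
photos_integration_id = "int-photos"

routes = {
    "GET /users": users_integration_id,
    "GET /posts": posts_integration_id,
    "POST /posts": posts_integration_id,
    "GET /photos": photos_integration_id,
}

for route_key, integration_id in routes.items():
    apigw.create_route(
        ApiId=api_id,
        RouteKey=route_key,                      # e.g. "GET /users" or "ANY /posts"
        Target=f"integrations/{integration_id}",
    )
```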
Step 3: Define Stages
Next we'll define the stages for our API. Stages are individual environments that you can deploy your API configuration changes to. We're going to create two stages: dev and prod. We'll turn on the Auto-deploy option for the dev environment, but leave it off for prod. This gives us the chance to test our changes in dev, then manually deploy them to prod once we're satisfied that everything works properly.
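The equivalent boto3 calls look something like this: auto-deploy is switched on for dev only, and prod gets its changes through an explicit deployment later on. The ApiId is a placeholder.

```python
import boto3

apigw = boto3.client("apigatewayv2")
api_id = "a1b2c3d4"  # placeholder ApiId

# dev picks up every configuration change automatically.
apigw.create_stage(ApiId=api_id, StageName="dev", AutoDeploy=True)

# prod only changes when we deliberately deploy to it.
apigw.create_stage(ApiId=api_id, StageName="prod", AutoDeploy=False)

# Later, once everything checks out in dev, promote the current configuration.
apigw.create_deployment(
    ApiId=api_id,
    StageName="prod",
    Description="Promote tested changes to prod",
)
```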
Step 4: Review and Create
The final step in the build process just gives you a chance to review your configuration before your API is created. If you see anything that needs to be changed, you can click the edit button for that section to make updates. Everything looks good, so we'll go ahead and hit Create.
The Gateway Dashboard
Once our API is created, we're redirected to the Gateway Dashboard.
As you can see, we now have access to a lot more configuration options. Let's take a look at a few of these.
Authorization
You have the option of adding an authorizer for each method of each endpoint. This fine-grained control lets you specify who has access to which of your services. In this example, we may want to allow anyone to get posts, but only authorized users to create them.
Now when we test our endpoints, you can see that the GET /posts method returns the array of posts, while the POST /posts method returns a 403 Forbidden status.
This section provides several options for authorizing requests. You can use the built-in IAM authorizer, or you can create and attach a new authorizer. Authorizers come in two types: JWT authorizers and Lambda authorizers. A JWT authorizer is used in conjunction with OpenID Connect or OAuth 2.0, while a Lambda authorizer allows you to create custom authorization functions. For more details on each type of authorizer, check out the access control section of the API Gateway documentation.
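As a concrete (hypothetical) example, here's how you might attach a Cognito-backed JWT authorizer to just the POST /posts route with boto3. The user pool issuer, app client ID, ApiId, and route ID are all placeholders.

```python
import boto3

apigw = boto3.client("apigatewayv2")
api_id = "a1b2c3d4"  # placeholder ApiId

# JWT authorizer backed by a Cognito user pool (pool and client IDs are placeholders).
authorizer = apigw.create_authorizer(
    ApiId=api_id,
    Name="cognito-jwt",
    AuthorizerType="JWT",
    IdentitySource=["$request.header.Authorization"],
    JwtConfiguration={
        "Issuer": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE",
        "Audience": ["example-app-client-id"],
    },
)

# Require a valid token on POST /posts only; GET /posts stays open.
post_posts_route_id = "r-abc123"  # placeholder; look it up with get_routes()
apigw.update_route(
    ApiId=api_id,
    RouteId=post_posts_route_id,
    AuthorizationType="JWT",
    AuthorizerId=authorizer["AuthorizerId"],
)
```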
CORS Configuration
Another important feature in API Gateway is the ability to easily configure Cross-Origin Resource Sharing (CORS) settings. CORS is a browser security mechanism that blocks web applications from calling resources on a different domain unless that domain explicitly allows it. If your API is on a different domain than the clients accessing it, you'll need to enable CORS. In API Gateway, this is as easy as filling in the fields under the CORS section with your desired settings.
If your API is only used by your own applications, you can specify those origins in the Access-Control-Allow-Origin field. If your API is public, you'll need to use the wild-card (*) in this field. For details on each of the fields in this section, take a look at the CORS section of the docs.
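Outside the console, the same settings can be applied in a single boto3 call. The origin, headers, and ApiId below are placeholders for your own values.

```python
import boto3

apigw = boto3.client("apigatewayv2")
api_id = "a1b2c3d4"  # placeholder ApiId

apigw.update_api(
    ApiId=api_id,
    CorsConfiguration={
        "AllowOrigins": ["https://app.example.com"],  # or ["*"] for a public API
        "AllowMethods": ["GET", "POST", "OPTIONS"],
        "AllowHeaders": ["Content-Type", "Authorization"],
        "MaxAge": 3600,  # seconds the browser may cache the preflight response
    },
)
```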
Throttling
API Gateway allows you to add throttling to your API to prevent things like DDoS attacks or abuse of your resources. There is a built-in 10,000 requests/second limit per region, but you also have the option to add throttling within your API configuration.
From the API Gateway UI, you're able to configure throttling for your entire API for each stage. You can also configure throttling per route, but currently you'll need to do that through the API, the CLI, or an SDK rather than the console.
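For example, here's a rough boto3 sketch that sets a stage-wide default plus a tighter limit on a single route; the numbers and ApiId are arbitrary placeholders.

```python
import boto3

apigw = boto3.client("apigatewayv2")
api_id = "a1b2c3d4"  # placeholder ApiId

apigw.update_stage(
    ApiId=api_id,
    StageName="prod",
    # Default limits applied to every route in the stage.
    DefaultRouteSettings={
        "ThrottlingRateLimit": 100.0,  # steady-state requests per second
        "ThrottlingBurstLimit": 50,    # short-term burst allowance
    },
    # Tighter limits for one route, keyed by its route key.
    RouteSettings={
        "POST /posts": {
            "ThrottlingRateLimit": 10.0,
            "ThrottlingBurstLimit": 5,
        },
    },
)
```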
Monitoring
The final feature we'll look at is monitoring, which covers the Metrics and Logging sections. Both sections contain toggles for turning the corresponding CloudWatch features on or off.
Metrics records the activity on your API and allows you to create charts and graphs to help visualize patterns. You can also set up alarms in CloudWatch to notify someone if your metrics cross a certain threshold. For example, if your API returns more than ten 5xx errors in a 5-minute period, an alarm could trigger an email to your DevOps team.
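An alarm like that might be created along these lines with boto3; the SNS topic ARN and ApiId are placeholders, and the 5xx metric name is the one HTTP APIs publish (REST APIs use 5XXError instead).

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
api_id = "a1b2c3d4"  # placeholder: the HTTP API's ApiId

cloudwatch.put_metric_alarm(
    AlarmName="demo-http-api-5xx",
    Namespace="AWS/ApiGateway",
    MetricName="5xx",                  # HTTP API server-error count
    Dimensions=[{"Name": "ApiId", "Value": api_id}],
    Statistic="Sum",
    Period=300,                        # 5-minute window
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    # Placeholder SNS topic that emails the DevOps team.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:devops-alerts"],
)
```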
Enabling logging will write an entry to your specified log group for every request your API receives.
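Access logging is a stage-level setting, so enabling it programmatically looks roughly like the sketch below, with a placeholder log group ARN and a trimmed-down log format built from $context variables.

```python
import boto3

apigw = boto3.client("apigatewayv2")
api_id = "a1b2c3d4"  # placeholder ApiId

apigw.update_stage(
    ApiId=api_id,
    StageName="prod",
    AccessLogSettings={
        # Placeholder CloudWatch Logs log group ARN.
        "DestinationArn": "arn:aws:logs:us-east-1:123456789012:log-group:/apigw/demo-http-api",
        # $context variables are filled in per request.
        "Format": '{"requestId":"$context.requestId",'
                  '"routeKey":"$context.routeKey",'
                  '"status":"$context.status"}',
    },
)
```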
And that's it! We now have a functioning API Gateway with multiple routes pointing to multiple backend resources. We've enabled authentication on select routes, set up our CORS configuration, and turned on throttling to help protect our resources. And we've enabled monitoring and logging, so we can keep an eye on the health of our API.
As you can see, API Gateway is quick to set up, integrates cleanly with other AWS services, and is a fully managed service, so there's no hardware to maintain or software to keep up to date. It's a great option if your organization is already using AWS, or if you need a full-featured gateway that can be up and running quickly with no infrastructure setup or maintenance.