Serverless functions are one of the ways we build and deploy web services, with AWS Lambda being one of the biggest players in the field. For some solutions, you can deploy functions that work as APIs without having to build a microservice with multiple endpoints. However, if you and your team have your tools and workflow built on containers, the Lambda serverless paradigm might not be so accommodating. In December 2020, AWS announced container image support for Lambda, which lets you package and deploy Lambda functions as container images of up to 10 GB in size. But this still comes with Lambda's execution timeout, and the function isn't portable: you can't run the same image unchanged on other cloud platforms.
There are, however, other ways to run scalable containerized applications on a managed serverless platform: OpenFaaS, OpenWhisk, Knative, and Knative-based offerings such as Google Cloud Run. With those, you can deploy your containerized applications and have them autoscale based on demand, even down to zero.
Having tried OpenFaaS and Knative myself, I realised that they share the concept of running any container image as long as it can receive HTTP traffic. I recently learnt about DigitalOcean App Platform, which has support for running container images. Although it is not marketed as a serverless function product, the fact that it allows me to run any container image means that I can also deploy the functions that I run on OpenFaaS or Knative on it. With this approach, I don't have to worry about vendor lock-in.
In this post, I'm going to show you how to deploy a containerized Node.js FaaS on DigitalOcean App Platform. You can do a similar thing for Python and Go using the approach I’ll show you.
Prerequisites
In order to follow along, I expect you to have the following:
A DigitalOcean account. You can get one now with $100 free credit using my referral link.
Doctl (DigitalOcean CLI) installed and configured. Check the documentation for how to install and configure doctl.
You will also need a DigitalOcean Container Registry, because you can only deploy container images from it. Let's create one using doctl. Open your command-line application and follow the instructions below:
- Run `doctl registry create <my-registry-name>` to create your registry. Replace `<my-registry-name>` with the name you want to give to your registry.
- Authenticate Docker with your registry by running `doctl registry login`.
Create The Function
We will create a function that will be packaged in a container using faas-cli, the CLI for OpenFaaS. Although the CLI is designed to build and deploy functions to OpenFaaS, it has worked for me outside of OpenFaaS as well: I use it with Knative and now with DigitalOcean App Platform. It lets me write a handler file, and the CLI does the rest to create a Docker image and upload it to a registry.
You can install it by running `brew install faas-cli`, or with cURL:

```shell
curl -sSL https://cli.openfaas.com | sudo sh
```

If you're on Windows, install it by running the PowerShell commands below:

```powershell
$version = (Invoke-WebRequest "https://api.github.com/repos/openfaas/faas-cli/releases/latest" | ConvertFrom-Json)[0].tag_name
(New-Object System.Net.WebClient).DownloadFile("https://github.com/openfaas/faas-cli/releases/download/$version/faas-cli.exe", "faas-cli.exe")
```
`faas-cli` works with templates stored in Git repositories to create the files needed for the functions. Currently, there are templates for Node.js, Python, Go, and C#. You can find the standard templates in github.com/openfaas/templates.
Let's generate a new function using the `node12` template. Open your command-line application and run:

```shell
faas-cli new --lang node12 --prefix <registry-url> do-demo
```

Replace `<registry-url>` with your container registry endpoint; you can get it by running `doctl registry get`. The name of the function is `do-demo`.
When the command has executed, it generates a file `do-demo.yml` and a folder `do-demo`. The `do-demo` folder contains `package.json` and `handler.js`; `handler.js` is what we will work with. The content of the file is shown below.
"use strict";
module.exports = async (event, context) => {
const result = {
status: "Received input: " + JSON.stringify(event.body),
};
return context.status(200).succeed(result);
};
This function format is similar to what you'd use if you have worked with AWS Lambda. You can get the request headers, path, params, and body values from the `event` object, and use the `context` object to specify the response data, HTTP status, and response content type.
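To make that contract concrete, here is a minimal handler sketch that echoes back a few request details. It assumes the shapes the node12 template provides (`method`, `path`, `query`, and `body` on `event`; chainable `status` and `succeed` on `context`), so treat the field names as assumptions rather than a spec:

```javascript
"use strict";

// A sketch of a handler reading from `event` and responding through the
// chainable `context` API. Assumed shapes: `event` exposes method, path,
// query, and body; `context` exposes status() and succeed().
const handler = async (event, context) => {
  const summary = {
    method: event.method,
    path: event.path,
    query: event.query,
    body: event.body,
  };

  return context.status(200).succeed(summary);
};

module.exports = handler;
```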
Let's update the result object to the following:

```javascript
const result = {
  body: event.body,
  path: event.path,
};
```
Build and Deploy The Function
We will build and push the container image for this function using faas-cli. Open your command-line application and navigate to the directory containing `do-demo.yml`. Then run `faas-cli build -f do-demo.yml` to build the image. When the build is complete, push the image to the registry with `faas-cli push -f do-demo.yml`.
To deploy the function to App Platform, go to the Apps page and follow the instructions below.
- Click Create App to create a new app. If you have no app deployed, you'll see a Launch Your App button instead of a Create App button.
- Select Container as the source for your deployment, choose the image you would like to use from the Repository drop-down, then click Next.
- On the next page, give your app a name, select a region and then click Next.
- On the Configure your app page, leave the default values and click Next.
- Select a plan and the specification to run your container, then click the Launch button to complete the step.
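If you prefer the CLI to the dashboard, the same app can be described with an App Platform app spec and created by running `doctl apps create --spec spec.yaml`. Below is a minimal sketch; the field names follow the app spec format, but treat the repository name, tag, plan size, and port as assumptions to adjust for your image (the OpenFaaS watchdog listens on port 8080 by default):

```yaml
name: do-demo
services:
  - name: do-demo
    image:
      registry_type: DOCR    # DigitalOcean Container Registry
      repository: do-demo    # image name in your registry
      tag: latest
    http_port: 8080          # the OpenFaaS watchdog's default port
    instance_count: 1
    instance_size_slug: basic-xxs
```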
When the app is ready, you should get a URL that will be used to access it, for example, https://do-demo-dg3xa.ondigitalocean.app/. You can test that the app works by sending a request with cURL.
```shell
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"username":"xyz","password":"xyz"}' \
  https://do-demo-dg3xa.ondigitalocean.app/login
```
You should get a JSON response of `{"body": {"username":"xyz","password":"xyz"}, "path":"/login"}`. There you have it: your function, served as an API 🚀.
You can return any response type, and set the HTTP status code and content type. For example, to return a 201 status code with a PDF document, you can set the context as follows:

```javascript
const pdf = await createDocument();

return context
  .status(201)
  .headers({
    "Content-Type": "application/pdf",
  })
  .succeed(pdf);
```
You can do path-based routing as follows:

```javascript
module.exports = (event, context) => {
  if (event.path == "/users") {
    return users(event, context);
  }

  return context.status(200).succeed("Welcome to the homepage.");
};

function users(event, context) {
  return context.status(200).succeed(["Jean", "Joe", "Jane"]);
}
```
Can I Add npm Packages?
It is possible to add npm packages and do whatever you want in the function. For example, you can add the stripe package and use it for your Stripe webhooks. However, I've been asked whether it's possible to use Express.js with it. I think it is possible, but I haven't tried it and I wouldn't encourage it. You should instead package your microservice built with Express.js (or some other framework) in a container and deploy it to App Platform with a Dockerfile specification. I'd like to point out that you can run `faas-cli new --lang dockerfile ...` to generate a Dockerfile and other files that can be used with faas-cli to build and publish your images.
If you want path-based routing and have a few routes to handle, you can read the path from the request and call the necessary function to process the request. Otherwise, create a Dockerfile and deploy to App Platform.
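If you go the Dockerfile route for a full microservice, a minimal image for a Node.js app might look like the sketch below. The file names and start command are assumptions; adjust them to your project:

```dockerfile
# Illustrative production image for a Node.js microservice.
FROM node:12-alpine
WORKDIR /app

# Install only production dependencies first to benefit from layer caching.
COPY package*.json ./
RUN npm ci --only=production

# Copy the application source.
COPY . .

# The port your server listens on; App Platform routes HTTP traffic to it.
EXPOSE 3000
CMD ["node", "index.js"]
```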
Does It Scale?
The simple answer is: IT DOES. But that depends on the plan you're using and how you want it to scale. You can scale vertically, increasing the CPU and memory allocated to a container instance, or horizontally, adding more containers with requests load-balanced across them.
Why Not AWS Lambda or Google Cloud Run?
There are different platforms where you can run a similar FaaS container image. With AWS Lambda, functions deployed as container images benefit from the same operational simplicity, automatic scaling, high availability, and native integrations with many services. However, you're locked in to their base runtime images and can't easily port the function to another cloud provider. You are also limited to a maximum of 15 minutes of execution time, which makes it unsuitable for data-intensive workloads that may run longer than that.
Google Cloud Run is a good alternative for deploying functions as container images. You can deploy the same container image to other Knative-based offerings like IBM Cloud Code Engine and Red Hat OpenShift Serverless, or to DigitalOcean App Platform as I've shown you in this post. Its execution timeout is configurable rather than capped at Lambda's 15 minutes, and it autoscales to handle demand. You are charged per request and for how long your function runs.
When deciding which platform to run functions in containers on, I'd start out with Cloud Run because of its pay-per-use pricing model. But if a function receives a lot of traffic or runs long tasks, I'd consider moving to DigitalOcean App Platform because of its predictable pricing. I can also decide whether I want horizontal or vertical scaling with DigitalOcean App Platform.
However, if I had a number of these functions and they got high loads at different times during the month, I'd prefer to run my own managed serverless platform using Knative on DigitalOcean Kubernetes. With that, I can run images from any container registry, unlike App Platform, where I'm restricted to images in the DigitalOcean Container Registry.