This glossary provides definitions of terms and phrases used in the context of building serverless-based applications. Before we get started, if you're not sure what the term "serverless" means, read this first.
These are my personal definitions and are influenced by the ecosystems I am most familiar with (e.g. AWS). They may not necessarily represent the generally understood definition.
API Gateway
A fully managed service provided by AWS that allows developers to create, publish, maintain, monitor, and secure APIs that integrate directly with other AWS services such as Lambda and SQS. (Official site).
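For illustration, here's a minimal sketch of a Python Lambda handler sitting behind an API Gateway proxy integration. The route and the "name" query parameter are hypothetical; the point is the shape of the event API Gateway passes in and the response object it expects back.

```python
import json


def handler(event, context):
    """Minimal handler for an API Gateway (Lambda proxy integration) request.

    API Gateway passes the HTTP request details in the `event` dict and
    expects a response with `statusCode`, `headers` and a string `body`.
    """
    # queryStringParameters is None when the request has no query string.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```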
Architect Framework
A development framework for building and deploying serverless applications on AWS using a custom high-level declarative syntax. (Official site).
Azure Functions
Microsoft's FaaS offering. (Official site).
Cloud Native
An approach to building software applications that are entirely composed of services and infrastructure provided by a cloud vendor as opposed to using on-premise resources.
Cloud-based Development
A workflow whereby a developer runs their in-progress code in the cloud as part of their standard development process as opposed to only running it on their local machine before completing a development task. An example of this is a developer deploying and invoking a Lambda function within a cloud AWS account.
This workflow is almost mandatory when building serverless apps but is uncommon with more traditional apps. An advantage is that it forces developers to run their code in an almost identical environment to production at an early stage, reducing the risk of integration bugs. Disadvantages are the time overhead (with current tooling) of deploying a function before a developer can test it, and the fact that developers can't work offline.
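As a rough sketch of what this looks like in practice, here's how a developer might invoke an already deployed function from their own machine using boto3. The function name is hypothetical and assumes AWS credentials are configured locally.

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Synchronously invoke a function that has already been deployed to the
# developer's cloud account (the name here is just a placeholder).
response = lambda_client.invoke(
    FunctionName="my-service-dev-processOrder",
    InvocationType="RequestResponse",
    Payload=json.dumps({"orderId": "123"}),
)

# The Payload in the response is a stream containing the function's result.
print(json.loads(response["Payload"].read()))
```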
Cold Start
A phenomenon which occurs the first time a cloud function is invoked, causing it to take longer to complete than subsequent (warm) invocations. This warm-up time is caused by several factors, including the time the cloud provider needs to provision the underlying container which hosts the function and the time needed to load any function-specific dependencies into memory.
Cold starts are common to all FaaS providers and are a commonly cited reason why FaaS isn't a good fit for workloads that consistently require extremely low latency.
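The sketch below shows the usual pattern this leads to in a Python Lambda function: expensive initialisation lives outside the handler, so it runs once per cold start and is reused by warm invocations. The TABLE_NAME config is just a hypothetical example.

```python
import os
import time

# Module-level code runs once per container, during the cold start.
# Expensive setup (SDK clients, DB connections, config loading) usually
# goes here so that warm invocations can reuse it.
INIT_STARTED_AT = time.time()
CONFIG = {"table_name": os.environ.get("TABLE_NAME", "example-table")}


def handler(event, context):
    # On a warm invocation the container (and CONFIG) is reused, so only
    # the code inside the handler runs again.
    container_age_seconds = time.time() - INIT_STARTED_AT
    return {"containerAgeSeconds": round(container_age_seconds, 2)}
```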
Configuration Management
The practice (by developers and ops engineers) of specifying, versioning and distributing the configuration attributes of the component resources within an application.
Container
"A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another." (source)
In a serverless context, cloud providers use containers under the hood in order to provision and run a function. A container is reused to service multiple function invocations and is usually kept running for a period of a few hours (although there's no guarantee of this), in order to minimise the impact of cold starts.
Deployment Framework
A development framework that makes it easier to configure and deploy cloud resources such as functions, APIs and event triggers.
Related: Why serverless newbies should use a deployment framework.
DynamoDB
A fully-managed NoSQL database service provided by AWS.
Its key benefits are its built-in managed scaling, security, backup/restore and high throughput, which take away a large chunk of the management tasks that come with operating a traditional database.
DynamoDB is proprietary to AWS and therefore cannot be used on other cloud platforms.
(Official site).
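As a minimal sketch, here's what writing and reading an item looks like with boto3. The "Orders" table and its "orderId" partition key are hypothetical and assumed to already exist.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # hypothetical table with partition key "orderId"

# Write an item; no connection pools or capacity tuning are needed in code.
table.put_item(Item={"orderId": "123", "status": "PENDING", "total": 42})

# Read it back by its key.
item = table.get_item(Key={"orderId": "123"}).get("Item")
print(item)
```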
Event-driven Architecture
An architectural pattern whereby a software system reacts to and creates its own events. This pattern almost comes out-of-the-box with serverless systems. AWS Lambda, for example, enables you to configure different types of events (e.g. a file added to an S3 bucket, a scheduled CRON job) which automatically trigger the invocation of a function. Typically the cloud provider manages the hooks for listening for an event and the developer is responsible for wiring up these hooks to actions (e.g. Lambda functions).
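For a concrete (hypothetical) example, here's a sketch of a Python handler wired up to an S3 "object created" trigger; S3 delivers one or more records per invocation describing what changed.

```python
import urllib.parse


def handler(event, context):
    """Handler attached (via an S3 event trigger) to 'object created' events."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New object uploaded: s3://{bucket}/{key}")
```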
FaaS
Function-as-a-Service is a mechanism for running custom code in the cloud without first needing to provision a dedicated server to host the code. A FaaS function is very similar to a function in the more general programming sense in that it takes an input as a parameter (often an event payload) and allows the developer to execute their custom code and send a response back to the caller.
A FaaS function does not require a constantly running server process, which allows cloud providers to scale each function automatically and in isolation.
Implementations of FaaS include AWS Lambda, Google Cloud Functions and Microsoft Azure Functions.
The term "serverless" is often incorrectly conflated with FaaS, but serverless is a superset that also includes fully managed services which don't run custom code.
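To make the "no server to provision" point concrete, here's a rough sketch of deploying custom code as a Lambda function using boto3. The function name and IAM role ARN are placeholders, and in practice a deployment framework would normally handle this packaging for you.

```python
import io
import zipfile

import boto3

# A tiny function body; in a real project this would come from your build output.
HANDLER_SOURCE = (
    "def handler(event, context):\n"
    "    return {'echo': event}\n"
)

# Package the source into an in-memory zip, as the Lambda API expects.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("index.py", HANDLER_SOURCE)

lambda_client = boto3.client("lambda")
lambda_client.create_function(
    FunctionName="echo-function",                        # hypothetical name
    Runtime="python3.12",
    Role="arn:aws:iam::123456789012:role/lambda-basic",  # hypothetical execution role
    Handler="index.handler",
    Code={"ZipFile": buf.getvalue()},
)
```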
Fully Managed Service
A fully managed service (in the serverless cloud context) is a service provided by a cloud vendor offering a specific feature set whose operational concerns such as provisioning, patching and scaling are all managed by the cloud vendor and abstracted away from the developer.
Examples of fully managed services in the AWS ecosystem are S3, DynamoDB and Cognito.
Function Concurrency
When under load, FaaS functions need to be invoked concurrently. Each new concurrent execution results in a cold start, as the cloud provider needs to spin up a new underlying container to service it. AWS imposes a soft limit (1,000 by default, although the figure varies by region) on the number of concurrent executions allowed per account before it throttles subsequent invocations, returning errors to the calling/triggering service.
Related: Lambda Scaling Calculator.
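Here's a small sketch of inspecting the account-wide concurrency limit and reserving a slice of it for one function using boto3; the function name is hypothetical.

```python
import boto3

lambda_client = boto3.client("lambda")

# The regional soft limit on concurrent executions for this account.
settings = lambda_client.get_account_settings()
print(settings["AccountLimit"]["ConcurrentExecutions"])

# Optionally reserve part of that limit for a single function, which also
# caps how far that function alone can scale before being throttled.
lambda_client.put_function_concurrency(
    FunctionName="my-service-dev-processOrder",  # hypothetical name
    ReservedConcurrentExecutions=50,
)
```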
Google Cloud Functions
Google's FaaS offering. (Official site).
Infrastructure as Code
Infrastructure as Code is the discipline of managing all configuration related to a software system's infrastructure (servers, networking, managed services, etc) using code files which can be stored in a version control system such as Git.
Its practice has several benefits, including allowing multiple pipeline stages to be easily set up with a consistent configuration, facilitating disaster recovery and enabling peer review of changesets before they are applied to production environments.
Examples of IaC implementations include AWS CloudFormation and Terraform.
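As a minimal sketch of the idea, the snippet below defines a tiny CloudFormation template in code and creates a stack from it with boto3. The stack name is hypothetical, and in practice the template would live in version control and be deployed via the CLI or a deployment framework rather than an ad-hoc script.

```python
import json

import boto3

# A minimal CloudFormation template defining a single S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "UploadsBucket": {"Type": "AWS::S3::Bucket"},
    },
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="my-app-dev",  # hypothetical stack name
    TemplateBody=json.dumps(template),
)
```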
Lambda
AWS's FaaS offering. (Official site).
Managed Scaling
An attribute of a managed service whereby an engineer does not need to provision a certain level of capacity ahead of time for the service because the cloud provider does the scaling for them. Examples of services with managed scaling are Lambda, API Gateway and DynamoDB.
Some cloud services provide hooks for scaling but don't scale automatically, so engineers still need to do some capacity planning in order to trigger scaling when required (e.g. AWS ElastiCache, Amazon Elasticsearch Service). In this respect, managed scaling can be seen as a spectrum rather than a discrete state.
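One concrete illustration (hypothetical table name) is DynamoDB's on-demand billing mode: with PAY_PER_REQUEST, no read/write capacity needs to be provisioned up front and DynamoDB scales the table's throughput itself.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# With PAY_PER_REQUEST (on-demand) billing there is no ProvisionedThroughput
# to specify; DynamoDB handles the scaling of the table's throughput.
dynamodb.create_table(
    TableName="Orders",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "orderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "orderId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```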
Microservices
Microservices are an approach to software architecture whereby the system is divided into small chunks of functionality that are deployed independently and communicate with each other typically over HTTP.
In a FaaS application you almost get a microservice architecture by default as the unit of deployment (a function) is inherently small and can be deployed on its own.
Related: Serverless Microservice Patterns for AWS.
Monolith
A monolithic software system maintains all functions of the application within a large single unit of deployment.
It is possible to develop monolithic applications using FaaS (e.g. a single general-purpose function which handles all your API/web app requests) but this approach is uncommon and generally considered bad practice. It is the opposite of a microservices architecture.
Multicloud
Multicloud is an architectural approach of using cloud services from different vendors within the same distributed application/workload.
The main benefit of this approach is to reduce reliance on any single cloud provider and mitigate risks around vendor lock-in. A major drawback of this approach is the high costs of implementation caused by the lack of standardisation between similar services on different clouds.
PaaS
Platform-as-a-Service is a category of cloud services which allow developers to deploy their applications to the cloud without managing the infrastructure on which their code runs. This is similar to serverless, but the key difference with PaaS is that, under the surface, the cloud provider still provisions dedicated virtual servers to host these apps, so the scalability of the application is restricted. Another key difference is that PaaS pricing tends to be hourly rather than per usage.
Examples of PaaS services are AWS Elastic Beanstalk and Heroku.
Per Usage Pricing
Per usage pricing is when a cloud customer is charged based on how many times they used a specific service and not based on a fixed time period. For example, AWS Lambda charges based on the count and duration of each function execution.
Related: How to calculate the billing savings of moving an EC2 app to Lambda.
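For a feel of the arithmetic, here's a back-of-the-envelope estimate of a monthly Lambda bill. The unit prices below are illustrative only (check the current AWS pricing page for your region) and the free tier is ignored.

```python
# Illustrative workload figures (all hypothetical).
invocations_per_month = 5_000_000
avg_duration_seconds = 0.12
memory_gb = 512 / 1024  # 512 MB

# Illustrative unit prices - not authoritative.
price_per_request = 0.20 / 1_000_000   # USD per invocation
price_per_gb_second = 0.0000166667     # USD per GB-second

request_cost = invocations_per_month * price_per_request
compute_cost = invocations_per_month * avg_duration_seconds * memory_gb * price_per_gb_second

print(f"Estimated monthly cost: ${request_cost + compute_cost:.2f}")
```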
Paying for Idle
Paying for idle is what cloud customers do when they use a service such as EC2, which bills for the time a resource is running and able to service requests, even if it is not currently receiving any traffic.
Secrets Management
Secrets management is a subset of configuration management concerned with how secrets (e.g. passwords, API keys, DB connection strings) are securely stored and distributed to resources within a system.
Serverless Application Model
SAM is an Infrastructure as Code framework developed by AWS specifically focused on serverless applications.
Serverless Application Repository
A repository provided by AWS which allows developers to discover and deploy reusable serverless applications (or components of applications) into their own cloud accounts.
(Official site).
Serverless Framework
The Serverless Framework is an Infrastructure as Code framework specifically focused on serverless applications. It allows developers to use YAML or JSON to configure functions and other resources and deploy them to AWS, Azure or Google Cloud.
(Official site).
Serverless-first Architectural Approach
"Serverless-first" is an approach to building software applications by using FaaS and cloud services by default unless a limitation arises which would require use of a server-based system to deliver a specific functional or operational requirement.
SSM Parameter Store
This is a service provided by AWS to help with configuration management. It provides a centralised key-value store to allow infrastructure and application configuration data (including secrets) to be stored securely and accessed via both Infrastructure as Code (at deploy time) and application code (at runtime).
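A small sketch of both sides of that, using boto3; the parameter name and value are just placeholders.

```python
import boto3

ssm = boto3.client("ssm")

# Store a secret as an encrypted (SecureString) parameter at deploy time.
ssm.put_parameter(
    Name="/my-app/dev/db-password",  # hypothetical parameter hierarchy
    Value="s3cr3t",
    Type="SecureString",
    Overwrite=True,
)

# Read it back at runtime, asking SSM to decrypt it.
param = ssm.get_parameter(Name="/my-app/dev/db-password", WithDecryption=True)
db_password = param["Parameter"]["Value"]
```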
Stage
A stage is an isolated deployment of an entire software system representing a specific point in the release cycle of the system. The term is often used in the context of an automated continuous integration/continuous delivery pipeline and is sometimes referred to as "environment".
Examples of stages are development, test, staging and production.
Step Functions
Step Functions is a service provided by AWS which offers a way of orchestrating common workflows composed of multiple Lambda functions or cloud service events, without needing to write plumbing code to handle the workflow state and retry logic, etc. (Official site).
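As a sketch of what this looks like, the snippet below defines a two-step workflow in Amazon States Language (one Lambda task followed by another), creates the state machine and starts an execution. The function ARNs and IAM role are hypothetical; Step Functions persists the state passed between the steps.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# A two-step workflow: validate an order, then charge the card.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:validate-order",
            "Next": "ChargeCard",
        },
        "ChargeCard": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:eu-west-1:123456789012:function:charge-card",
            "End": True,
        },
    },
}

state_machine = sfn.create_state_machine(
    name="order-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/step-functions-exec",  # hypothetical role
)

sfn.start_execution(
    stateMachineArn=state_machine["stateMachineArn"],
    input=json.dumps({"orderId": "123"}),
)
```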
Total Cost of Ownership
The TCO of a software application is the sum of all costs required to build and operate it over its lifetime. It includes the cost of paying all engineers (developers and ops) and supporting staff as well as the cloud bill and any other operational expenses.
Costs to be considered should include both the actual realised costs and the potential for future costs arising from specific design decisions.
Related: You are thinking about serverless costs all wrong.
Undifferentiated heavy lifting
Undifferentiated heavy lifting is a term often given to engineering activities that are time-consuming and/or require an expensive skillset but do not differentiate the company from its competitors in key areas of its business. Such heavy lifting is often viewed as a necessary cost of doing business and includes tasks such as managing servers, whether in-house or in the cloud.
Related: Concerns that go away in a serverless world.
Vendor lock-in
Vendor lock-in occurs when a software application is designed such that certain components of its architecture are tied to a specific cloud provider, and moving to another cloud provider would carry a significant cost.
This risk is slightly higher with serverless applications than with traditional server-based apps as they tend to make more use of proprietary cloud services and databases which cannot be exactly replicated on another provider's cloud.
Related: FaaS and vendor lock-in – genuine concern or FUD?
Workload
A software artifact with specific functional and operational requirements that is run on a cloud provider. A workload is somewhat synonymous with the term "application" but more generalised and without the implication of any user-facing component (e.g. UI, API).
I'd love to hear your feedback on this list, in particular if:
- you think I should add an important term/concept to the list
- you have a link to an article that you consider to be a canonical authority on a particular concept in the list that I should link out to.
If so, please just leave me a comment below.
If you enjoyed this article, you can sign up to my weekly newsletter on building serverless apps in AWS.
Originally published at winterwindsoftware.com.