Ah Serverless… it's the golden child of software engineering right now, and the internet is full of Serverless and AWS Lambda success stories. But the golden child of software engineering is harbouring a few secrets…
Yep, that’s right, Serverless isn’t as good as the marketing pages lead you to believe. That’s not to say Serverless is bad technology — I absolutely love Serverless. But on a few occasions whilst working with it, I felt kinda duped.
By the end of this article you'll understand some of the limitations of AWS Lambda and how things like DoS attacks and memory leaks really play out.
Before we jump in I wanted to preface by saying that this article is based mainly on AWS Lambda, and these misconceptions are quite specific to AWS. Similar issues may exist with other cloud providers, but today we'll be focusing on Lambda.
1. Functions Scale Independently
Belief: With Serverless each service scales independently; increased load on one function will not affect others.
It seems logical to think that functions are truly independent. After all, that’s one of the promises of microservice architecture, right? Split your application down and the independent parts can scale… independently.
But due to AWS limits, functions don't scale in a truly independent way. Why? Because AWS accounts come with a default limit of 1,000 concurrent Lambda executions per region. That might seem fair enough, but let's think about how the limit can affect us in ways we might not imagine…
Imagine you have an internal (yet public) API for recording metadata in your company that lives in the same region and AWS account as your production functions… I mean, why wouldn't it? And that API starts to take heavy load. When the concurrency limit is reached, no new Lambda instances will be created and requests get throttled, effectively causing performance issues across all functions in that account and region.
You can request for your limit to be increased in a region, or reserve concurrency for a given Lambda… but there will always be a limit that you're in danger of reaching. The only real solution is to break each service out into its own AWS account using AWS Organizations so that each service has its own limits, but breaking services down like this has an operational overhead.
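If you do want to protect an individual function, reserved concurrency is the usual lever. Here's a rough sketch of setting it with the AWS SDK for JavaScript (v2); the function name, region and numbers are made up for illustration:

```javascript
// Carve out 100 concurrent executions for one function so a noisy neighbour
// can't starve it, and it can't eat the whole account limit either.
// Assumes the AWS SDK for JavaScript v2; the function name is hypothetical.
const AWS = require('aws-sdk');

const lambda = new AWS.Lambda({ region: 'eu-west-1' });

async function reserveConcurrency() {
  await lambda
    .putFunctionConcurrency({
      FunctionName: 'metadata-api',        // hypothetical function name
      ReservedConcurrentExecutions: 100,   // taken out of the account's unreserved pool
    })
    .promise();
}

reserveConcurrency().catch(console.error);
```

Bear in mind that reserved concurrency is a cap as well as a guarantee: the function can never scale beyond what you reserve for it.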
2. Functions Ramp up to Meet Any Load
Belief: Serverless can scale fast enough to meet any workload (without any fancy tweaks).
You may think that since you're using Serverless you don't need to do capacity planning, and that any size of workload will be handled quickly and effectively. But again, in some scenarios that's not the case.
Lambda can only spawn around 500 new instances per minute (after an initial regional burst), which means that for bursty traffic, response times can degrade. Not only are spawning limits applied, but each request that lands on a brand-new container also suffers a Lambda "cold start" (the time it takes to launch the container and initialise your code).
As before, there are some ways around the scaling limits, such as provisioned concurrency. But wasn't the whole selling point of Serverless that we didn't have to think about these things anymore? Oh, and of course provisioned concurrency costs more money.
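To give you a feel for it, here's a minimal sketch of enabling provisioned concurrency with the AWS SDK for JavaScript (v2). The function name and alias are hypothetical; in practice you'd wire this into your deployment tooling rather than run it ad hoc:

```javascript
// Keep 10 instances of a latency-sensitive function initialised and warm.
// Assumes the AWS SDK for JavaScript v2; function name and alias are hypothetical.
const AWS = require('aws-sdk');

const lambda = new AWS.Lambda({ region: 'eu-west-1' });

async function keepWarm() {
  await lambda
    .putProvisionedConcurrencyConfig({
      FunctionName: 'checkout-api',          // hypothetical function name
      Qualifier: 'live',                     // provisioned concurrency targets a version or alias
      ProvisionedConcurrentExecutions: 10,   // warm instances you pay for whether they're used or not
    })
    .promise();
}

keepWarm().catch(console.error);
```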
3. Functions Can't Be DoS'ed
Belief: Running Serverless means DoS (Denial of Service) attacks aren't really a concern; if an attack is initiated, the functions scale to infinity and we're protected.
Because of the points we discussed in parts one and two, Lambda functions aren't immune to DoS attacks. In fact, if an attacker knows that you're using AWS Lambda, they can use these scaling limits against you as part of their attack, keeping their attacks short and sharp rather than prolonged.
There's no real solution to the DoS problem other than having failover to different regions or AWS accounts and enabling the best DoS protection you can get your hands on.
4. Lambdas Don't Have Memory Leaks
Belief: Since Serverless runs on ephemeral functions, you don't have to worry about memory leaks.
Because you're running ephemeral functions, you would be forgiven for thinking that functions can't have long-running problems like memory leaks, or one invocation's state affecting another, but it doesn't quite work like that…
Serverless can have memory leaks, and state stored globally in one invocation can leak into the next and cause issues if you're not careful.
The idea of Serverless having memory leaks can be confusing initially, but when you wrap your head around how the Serverless infrastructure works, it does start to make sense. Let me show you what I mean…
The way Serverless works is by spawning containers. Under certain load conditions a new container is launched and then kept around for as long as it's being used. When load dies down, the containers are shut down again.
For the lifetime of any of these containers, the Serverless function maintains state. Any values stored globally, or any memory that leaks, will carry over into the next invocation handled by that container.
There's no real mitigation for memory leaks; you just need to be aware of them in case function performance starts to degrade. You may also want to monitor the age of the underlying container to help with your debugging.
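To make that concrete, here's a minimal Node.js handler that demonstrates state surviving between invocations and logs the container's age so you can spot long-lived containers when debugging:

```javascript
// Module scope runs once per container, not once per invocation,
// so anything stored here sticks around until the container is recycled.
const containerStartedAt = Date.now();
let invocationCount = 0;
const cache = []; // if this grows unbounded you have a classic "serverless" memory leak

exports.handler = async (event) => {
  invocationCount += 1;
  cache.push(event); // illustrative leak: memory use creeps up as the container is reused

  // Logging container age and reuse count makes leaks much easier to spot.
  console.log({
    containerAgeMs: Date.now() - containerStartedAt,
    invocationCount,
    cachedEvents: cache.length,
  });

  return { statusCode: 200 };
};
```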
5. Lambdas Utilise Single-Threaded Processing
Belief: Serverless functions can make use of asynchronous request processing, such as Node.js's single-threaded async model serving many requests at once.
If you're using something like Node.js you may be familiar with the idea that requests can be processed concurrently whilst waiting for other async tasks to finish. However, AWS Lambda only serves one request per instance at a time, which means that your Lambda functions can sit "idle" waiting on I/O whilst processing a single request.
But every cloud has a silver lining… because each Lambda instance only serves one request at a time, you can do some interesting things. For instance, you can store request information in the Lambda's global context, which can be useful for things like monitoring, where you want to add request context to your logs or events.
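Here's a small sketch of that idea in Node.js: the handler stashes the request context in a module-level variable, and a (hypothetical) logging helper picks it up without it having to be passed around. This is only safe because one instance handles one request at a time.

```javascript
// Because a Lambda instance handles one request at a time, a module-level
// variable can safely hold the "current" request's context.
let currentRequestContext = {};

// Hypothetical logging helper that enriches every log line with request context.
function log(message, extra = {}) {
  console.log(JSON.stringify({ ...currentRequestContext, message, ...extra }));
}

exports.handler = async (event, context) => {
  // Capture the context once at the start of the invocation...
  currentRequestContext = {
    awsRequestId: context.awsRequestId,
    functionName: context.functionName,
  };

  // ...and every log call deep inside your code gets it for free.
  log('processing event', { path: event.path });

  return { statusCode: 200, body: 'ok' };
};
```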
Busted: 5 Serverless Misconceptions
And that's all for today! We've busted five misconceptions that you might have had about Serverless and AWS Lambda functions.
I'm not anti-Serverless, quite the opposite. And these problems are in fact well documented by AWS; I just don't see these areas written or spoken about a lot. But I think the community deserves the full picture, not just the overly positive accounts of how amazing Serverless is.
Regardless — I hope that you found something in this article about Serverless and AWS Lambda that you didn’t know before. And now you’re going into your Serverless adventures better informed.
Speak soon Cloud Native friends!
Are there any beliefs you had about Serverless that turned out not to be true?
Lou is the editor of The Cloud Native Software Engineering Newsletter, a newsletter dedicated to making Cloud Software Engineering more accessible and easier to understand. Every 2 weeks you'll get a digest of the best content for Cloud Native Software Engineers right in your inbox.