
Serverless is TOUGH but GREAT!

Breaking down the tough parts of Serverless when getting started, and also what makes it so great once we overcome the initial learning curve; including descriptions and visuals.

Introduction

This is a relatively short and quick post on my own recent musings about where we are with Serverless technologies in 2022, including the parts that people typically find tough when starting out, and the great parts once we have burst through that early learning curve (the ‘Serverless Dunning-Kruger effect’, as I have often called it: low barrier to entry, steep learning curve to production).

Before we go any further, the following article covers exactly what ‘Serverless’ is; giving basic real-world analogies to describe its many facets:

Serverless is TOUGH!

So let’s first break down why I personally think Serverless as a paradigm is usually tough for people to get their heads around when starting out, along with some of the ongoing areas which, in my opinion, we need to work on as a community to make this easier in the future.

[T] Testing. With Serverless we typically change the way we develop, test and debug our services compared to more traditional Docker-based containers and architectures, i.e. a move to smaller independent services which are cloud offerings (for example Lambda functions, Step Functions, Cognito, S3 etc.). This can make local development more difficult for teams; many now develop, test and debug directly against the cloud provider, rather than taking on the complexity of trying to set up a local version of the cloud provider using tools such as LocalStack, which people picking up Serverless tend to reach for first. When starting out with Serverless this paradigm shift can be tough to understand and work with.

“In my opinion, this is still not intuitive and natural due to the tools we have at present.”

Local development is a minefield and an anti-pattern with Serverless; the move is towards testing and debugging in the cloud with ephemeral environments

“Services such as Serverless Cloud and SAM Accelerate make this experience a lot easier, although I do personally think this is an area that AWS needs to focus on more, i.e. make this baked in, natural and super intuitive”
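
As a rough illustration of what testing directly against the cloud can look like, here is a minimal integration test sketch in TypeScript; the API_BASE_URL variable, routes and payloads are hypothetical, and the idea is simply that the test exercises a real deployed ephemeral stack (kept fresh with something like SAM Accelerate or CDK watch mode) rather than a local emulator:

```typescript
// orders.integration.test.ts - a sketch, not a real project file.
// API_BASE_URL is assumed to be populated from the deployed ephemeral stack's outputs.
const baseUrl = process.env.API_BASE_URL;

describe('orders API (deployed ephemeral stack)', () => {
  it('creates an order and reads it back', async () => {
    // Call the real, deployed endpoint directly - no local emulation involved.
    const createResponse = await fetch(`${baseUrl}/orders`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ productId: 'prod-123', quantity: 1 }),
    });
    expect(createResponse.status).toBe(201);

    const { id } = (await createResponse.json()) as { id: string };

    // Read the same order back through the deployed API.
    const getResponse = await fetch(`${baseUrl}/orders/${id}`);
    expect(getResponse.status).toBe(200);
  });
});
```

Because the test hits real services, behaviours such as IAM permissions, service limits and eventual consistency are exercised for free, which local emulators struggle to reproduce faithfully.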

Keywords: Testing . Debugging . Development

[O] Observability. Observability is typically more complex in Serverless applications due to the distributed nature of our smaller components, functions and services, which are typically also cross-account. Each of the services we use publishes its own logs and traces, and tracing across many services and AWS accounts for a given workload is still not easy to reason about and visualise at scale. This can be tough for newcomers to Serverless to understand and work with.

Observability is more difficult in serverless solutions due to the distributed nature compared to containers and more traditional architectures

“As we start to move into the enterprise space we need to utilise 3rd party observability services more, ensure developers add correlation and causation IDs, enable tracing, and ensure security logs are immutable and aggregated to a single account (CloudTrail, VPC Flow Logs etc). This still isn’t where I would expect it to be in 2022, and it feels like something we should just get out of the box from cloud providers in this day and age, in my opinion”
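
To make the correlation ID point concrete, here is a minimal hand-rolled sketch (the header name, event shape and log fields are illustrative assumptions; in practice something like AWS Lambda Powertools would do this more robustly) of propagating a correlation ID through structured logs:

```typescript
// handler.ts - illustrative sketch of correlation ID propagation.
import { randomUUID } from 'node:crypto';
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Reuse the caller's correlation ID if supplied, otherwise start a new one.
  const correlationId = event.headers?.['x-correlation-id'] ?? randomUUID();

  // Structured JSON logs are far easier to query in CloudWatch Logs Insights
  // and to stitch together across services than free-text log lines.
  console.log(JSON.stringify({
    level: 'INFO',
    message: 'processing request',
    correlationId,
    path: event.path,
  }));

  // Pass the correlation ID on to downstream callers (headers, message attributes etc).
  return {
    statusCode: 200,
    headers: { 'x-correlation-id': correlationId },
    body: JSON.stringify({ ok: true }),
  };
};
```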

Keywords: Observability . Tracing . Logging

[U] Unpredictable. Serverless solutions are typically ‘event-driven’, which makes them harder to predict and work with than more traditional container-based solutions; by their very nature they are more unpredictable. This means that developers now need to think about dead letter queues, replaying events, eventual consistency, limits, saga patterns, circuit breakers, storage-first patterns etc; things which more traditional architectures typically didn’t need to reason about.

Event-driven serverless solutions are less predictable due to eventual consistency, retries, backoffs etc
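
As a small example of the kind of failure handling this pushes onto us, here is a minimal AWS CDK sketch (the construct names, event pattern and asset paths are illustrative) wiring an EventBridge-triggered Lambda function with bounded retries, a maximum event age and a dead letter queue:

```typescript
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sqs from 'aws-cdk-lib/aws-sqs';

export class OrdersEventsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const handler = new lambda.Function(this, 'OrderCreatedHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist/order-created'),
    });

    // Anything that exhausts its retries lands here for inspection and replay.
    const dlq = new sqs.Queue(this, 'OrderCreatedDlq');

    const rule = new events.Rule(this, 'OrderCreatedRule', {
      eventPattern: { source: ['orders'], detailType: ['OrderCreated'] },
    });

    rule.addTarget(new targets.LambdaFunction(handler, {
      retryAttempts: 2,               // bounded retries rather than endless loops
      maxEventAge: Duration.hours(1), // drop stale events instead of processing them late
      deadLetterQueue: dlq,           // failed events are captured, not lost
    }));
  }
}
```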

Keywords: Unpredictable . Event-driven . Distributed

[G] Guardrails. Serverless solutions are a distributed set of smaller services glued together with integrations, which has the downside of increased service boundaries and attack surface for bad actors, and more potential to cause ourselves large usage bills through mistakes in configuration or poor design. Each of the services we compose creates its own security boundary where we can get the security controls wrong, which, when extrapolated across a full solution, can make this complex to get right. This is why we need sufficient guardrails in place to prevent these issues.

Increased attack surface with serverless solutions due to the many threat boundaries

“I still can’t believe to date there is no way of setting a max cost limit on accounts. Yes we have budget alerts, but they can easily be missed (especially if set low and already breached).”
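
One basic guardrail we can at least put in place today is a cost budget with alerts; the CDK sketch below is illustrative (the amount, threshold and email address are placeholders), and, as noted above, it alerts on spend rather than capping it:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as budgets from 'aws-cdk-lib/aws-budgets';

export class GuardrailsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // A monthly cost budget - note this notifies us, it does not stop spend.
    new budgets.CfnBudget(this, 'MonthlyCostBudget', {
      budget: {
        budgetType: 'COST',
        timeUnit: 'MONTHLY',
        budgetLimit: { amount: 50, unit: 'USD' }, // placeholder limit
      },
      notificationsWithSubscribers: [{
        notification: {
          notificationType: 'FORECASTED',    // alert on forecasted spend, before it happens
          comparisonOperator: 'GREATER_THAN',
          threshold: 80,                     // 80% of the limit
          thresholdType: 'PERCENTAGE',
        },
        subscribers: [{ subscriptionType: 'EMAIL', address: 'team@example.com' }],
      }],
    });
  }
}
```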

Keywords: Guardrails . Security . Denial of Wallet . DDoS

[H] Hard. Serverless as a paradigm is hard to grasp for most people when they start learning it, and although we can now use reusable service ‘building blocks’ glued together with events and integrations (S3, DynamoDB, EventBridge..), we now have the cognitive load of understanding each service’s unique properties, limits and specific configurations, and how and when we should integrate them together. We then also need to think about the fairly complex world of IAM on top of this (roles, security groups, policies..). This is a huge change for non-serverless developers, and this is where most people spend their time under high cognitive load.

“Going from a dockerised express app to each controller having its own ephemeral separate Lambda function is a massive mindset change for developers in itself, without coupling this with the security model and further integrations with other AWS services too.”

The complexities and cognitive load of serverless solutions compared to more traditional architectures
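
On the IAM side specifically, tools like the AWS CDK do soften the load; the sketch below (names and asset paths are illustrative) uses a grant helper to generate a least-privilege policy for a single function against a single table, rather than hand-rolling the policy document:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as lambda from 'aws-cdk-lib/aws-lambda';

export class OrdersStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const table = new dynamodb.Table(this, 'OrdersTable', {
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    const getOrder = new lambda.Function(this, 'GetOrderFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('dist/get-order'),
      environment: { TABLE_NAME: table.tableName },
    });

    // Read-only access for the read path - scoped to this one table and nothing more.
    table.grantReadData(getOrder);
  }
}
```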

Keywords: Hard . Complexity . Learning

👇 Before we go any further — please connect with me on LinkedIn for future blog posts and Serverless news https://www.linkedin.com/in/lee-james-gilmore/

Serverless is GREAT!

Now let’s cover why I believe serverless is great, and why the industry has moved to this new way of building applications as ‘the norm’.

[G] Growth. The most talked-about advantage of Serverless is its ability to scale and grow to unreal figures; figures that would suit even the largest companies on the planet. That scale is also typically something we don’t need to manage or operate ourselves, and it happens at insane speed. This lets developers focus more on delivering value to customers, rather than on the challenges of scaling unpredictable and spiky workloads.

Serverless solutions can typically scale up indefinitely, or at least to numbers that would be suitable for some of the biggest companies globally

Keywords: Growth . Scale . Speed

[R] Resilient. Serverless services are event-driven and distributed, as we discussed above, which means they need to be massively resilient to failure. What we do have at our disposal at a service level are things like retries, dead letter queues and event archives (EventBridge) which can help us manage errors. We also have multi-AZ and even multi-region service capabilities to easily and natively help us deal with resilience at scale in our overall solutions (think Global Tables and Global Endpoints etc).

We can deploy our solutions multi-AZ and globally to increase resilience of our Serverless solutions

“One key service which is not multi-region is Amazon Cognito, which I am really surprised by in 2022.”
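
As a small concrete example, the CDK sketch below (regions, names and retention are illustrative) shows two of the primitives mentioned above: a DynamoDB global table replicated to a second region, and an EventBridge archive so that events can be replayed after a downstream failure:

```typescript
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as events from 'aws-cdk-lib/aws-events';

export class ResilienceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Global table: the stack's own region is the primary replica, with a
    // second replica region for regional failover of reads and writes.
    new dynamodb.TableV2(this, 'OrdersGlobalTable', {
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      replicas: [{ region: 'eu-west-1' }],
    });

    // Event archive: events matching the pattern are retained and can be
    // replayed onto the bus if downstream processing fails or is lost.
    const bus = new events.EventBus(this, 'OrdersBus');
    bus.archive('OrdersArchive', {
      eventPattern: { source: ['orders'] },
      retention: Duration.days(30),
    });
  }
}
```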

Keywords: Resilient . Retries . Redundancy

[E] Economical. Another well-documented advantage of Serverless is that it is economical, as it is ‘pay for use’; we don’t have services running 24/7 even when they are not being used, which is what we typically had in the past with more traditional workloads. This also makes Serverless more sustainable for the planet, which is only a good thing (scale to zero).

Serverless technologies scale down when not in use, meaning with most of them you pay only for what you use.

Keywords: Economical . Pay for Use . Sustainable

[A] Attention. Serverless allows our teams to focus their attention on delivering value to their customers at pace, and spend less time maintaining and operating services. This is due to the shared responsibility model, where the cloud provider takes on the onus of maintaining the services, and development teams focus on delivering value.

https://docs.aws.amazon.com/whitepapers/latest/security-overview-aws-lambda/the-shared-responsibility-model.html

“This shared responsibility model can help relieve your operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualisation layer, down to the physical security of the facilities in which the service operates.”

Keywords: Attention . Shared Responsibility . Speed . Agility

[T] Together. Teams can now quickly compose reusable building blocks rather than reinventing the wheel as we did back in the days of more traditional container-based solutions. This reduces cognitive load on development teams, allowing them to glue together the myriad of tried and tested services which cloud providers have to offer (S3 for object storage, Cognito for auth, Step Functions for workflows etc). In an enterprise we can then start to build common, ratified, reusable patterns and services using the AWS CDK and custom L3/L4 constructs, meaning ‘repeatable architectures’ which increase speed and further reduce cognitive load on teams.

We use services like building blocks, and can also publish these patterns as constructs for other development teams

“Cognitive load refers to the amount of working memory people use. There are three types of cognitive load: intrinsic cognitive load is the effort associated with a specific topic; extraneous cognitive load refers to the way information or tasks are presented to a learner; and germane cognitive load refers to the work put into creating a permanent store of knowledge”
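
To illustrate the ‘repeatable architectures’ idea, here is a minimal sketch of a custom construct (the names, props and defaults are illustrative assumptions, not a ratified standard) that bakes an organisation’s agreed Lambda defaults into a single reusable building block:

```typescript
import { Duration } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as nodejs from 'aws-cdk-lib/aws-lambda-nodejs';

export interface StandardFunctionProps {
  readonly entry: string;        // path to the handler source file
  readonly serviceName: string;  // used for tagging/log naming conventions
}

// A tiny 'L3-style' construct wrapping NodejsFunction with sensible team defaults,
// so each team reuses the decision rather than re-making it per function.
export class StandardFunction extends Construct {
  public readonly fn: lambda.Function;

  constructor(scope: Construct, id: string, props: StandardFunctionProps) {
    super(scope, id);

    this.fn = new nodejs.NodejsFunction(this, 'Fn', {
      entry: props.entry,
      runtime: lambda.Runtime.NODEJS_18_X,
      memorySize: 512,
      timeout: Duration.seconds(10),
      tracing: lambda.Tracing.ACTIVE, // X-Ray tracing on by default
      environment: { SERVICE_NAME: props.serviceName },
    });
  }
}

// Usage inside a stack (illustrative):
// new StandardFunction(this, 'GetOrder', { entry: 'src/get-order.ts', serviceName: 'orders' });
```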

Keywords: Building blocks . Together . Reuse . AWS CDK

Summary

OK, OK, the acronyms were a little loose there, and ‘traditional architectures’ doesn’t necessarily mean Docker and containers; but hopefully that has given enough context as to why there is a low barrier to entry with Serverless when getting started (which is awesome), while the knowledge required to do this at a production level means a steep learning curve (which can be tough). Once that learning curve has been tackled, Serverless wins on every front in my opinion, and is now the ‘go-to’ for most developers as it is so great.

Some key areas that I believe cloud providers need to focus more on are:

✔️ Ease of development, testing and debugging of Serverless solutions; with instant developer deployments and feedback when working against the cloud.

✔️ Best-in-class ‘cross-account observability’ for Serverless workloads as a standard part of the ecosystem, without jumping through many hoops, i.e. built in and intuitive for engineering teams.

✔️ The ability to set a maximum spend on your accounts, to protect individuals from racking up huge cloud costs whilst learning new services, or when compromised through poor configuration choices.

Wrapping up 👋

Please go and subscribe to my YouTube channel for similar content!

I would love to connect with you also on any of the following:

https://www.linkedin.com/in/lee-james-gilmore/
https://twitter.com/LeeJamesGilmore

If you enjoyed this post, please follow my profile Lee James Gilmore for further posts/series, and don’t forget to connect and say Hi 👋

Please also use the ‘clap’ feature at the bottom of the post if you enjoyed it! (You can clap more than once!!)

About me

Hi, I’m Lee, an AWS Community Builder, Blogger, AWS certified cloud architect and Global Enterprise Serverless Architect (GESA) based in the UK; currently working for City Electrical Factors (UK) & City Electric Supply (US), having worked primarily in full-stack JavaScript on AWS for the past 6 years.

I consider myself a serverless advocate with a love of all things AWS, innovation, software architecture and technology.

*** The information provided represents my own personal views and I accept no responsibility for the use of this information. ***
