
Serverless Private APIs — Part 1 🚀
How to allow private serverless platform APIs to communicate securely within your organisation without traffic needing to traverse the public internet, including visuals and a code repository written in TypeScript and the AWS CDK.

Introduction
There are many times when architecting serverless solutions that you find you have a secure internal platform which is used by multiple domains, and there is no need for the traffic flowing to it to traverse the public internet. Typically this will be a machine-to-machine flow between domain services. This article discusses an approach using API Gateway, VPC Endpoints and Lambda, deployed using the AWS CDK, with code examples that can be found here.
For more information on ‘Machine to Machine’ flows using a Client Credentials Grant Flow check out this separate article:
Part two of this article can be found here.
Which API layers are we discussing? 💭
Let's have a look at the example below, showing the various layers of our serverless architecture:

The diagram below shows:
- The UI Layer. This is typically mobile and website frontends.
- MFE (Micro-frontends). This is typically MFE components or web apps which are embedded in your UI layer.
- MFE APIs. These APIs are well encapsulated and sit behind the MFE components. This allows them to be dropped into other UIs and work without any further development work (other than passing tokens through).
- Public APIs. These APIs tend to sit behind the UI layer, and are typically secured using JWTs (for example, a user logs into your app through the UI, which authenticates them, returns a JWT, and that token is then used to authenticate the API calls).
- Platform APIs. Platform APIs are typically private APIs which have well defined REST schemas (OpenAPI/Swagger), and allow other public APIs, platform APIs and MFE APIs to use common functionality. They should not be publicly accessible.
- Data Layer. Finally there is a data layer which underpins most public and platform APIs. Each microservice should have its own data store, typically a database.
In this article we will be focusing on the interactions between the sets of APIs shown below in the pink circles, i.e. interactions between the Public APIs, Platform APIs, and MFE APIs (or of course between two or more internal Platform APIs):

What does the diagram above look like with typical serverless solutions? 💭
If we take the example architecture layer diagram above and show what it would look like in a fictitious organisation, it might look like this below:

We can see in the diagram above that we have a set of three APIs which are publicly accessible (Basket, Orders and Customer), and used to facilitate the frontend web/mobile apps. Customers authenticate to use these APIs using JWTs (perhaps through Amazon Cognito).
These public APIs then call through to shared platform APIs, for example the payments and orders platforms. These two shared platforms should not be accessible from the public internet (there is no need for them to be, and they need to be secure), and should not be called directly by anything other than public-facing APIs or internal business systems. All traffic should remain private on the internal AWS network.
💡 Note: In reality not all calls would be synchronous like this, and most architectures should be event-driven first in my opinion. That being said, even in event-driven serverless architectures there is almost always a need for synchronous API calls. For more information on serverless event-driven systems check out the following:
What is a Private API Gateway on AWS?
So what exactly is a Private API gateway? The following YouTube video by the fantastic Eric Johnson covers this very very well in my opinion:
Let’s discuss some Jargon! 🤪
When it comes to discussing Private API Gateways on AWS it is worth understanding some of the following:
VPC Endpoint
A VPC endpoint enables connections between a virtual private cloud (VPC) and supported AWS services, without requiring that you use an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Therefore, your VPC is not exposed to the public internet.
VPC endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components.
API Gateway Resource Policy
Using resource policies, you can allow or deny access to your private API from selected VPCs and VPC endpoints, including across AWS accounts. Each endpoint can be used to access multiple private APIs.
Private DNS
When private DNS is enabled, you're able to access your private API via its private DNS name. If not, you will need to use a Route 53 alias or go through the VPC endpoint itself (with some additional headers).
What are we building? 🏗️
💡 Note: To run the examples you will need two separate AWS Accounts or two VPCs in one account.
We are going to build the following simple example of utilising Private APIs below (taken from our example fictitious serverless architecture diagram above):

This architecture allows users to invoke a public Orders API to create an order, with the following criteria:
- The Orders API gateway is backed by Lambdas in two private subnets which don’t have public internet access (no Internet Gateways, NAT Instances or NAT Gateways).
- The private subnets in the Orders service have VPC Endpoints for the API gateway service.
- The traffic from the orders Lambdas to the private Stock Platform API goes through the VPC Endpoint securely without traversing the public internet (see the CDK sketch after this list).
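As a rough, hedged sketch only (construct names and file paths are illustrative, not the exact repository code), the orders side of this could be wired up in the AWS CDK along these lines:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as nodeLambda from 'aws-cdk-lib/aws-lambda-nodejs';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import { Construct } from 'constructs';

export class OrdersServiceStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // private isolated subnets only - no Internet Gateways, NAT Instances or NAT Gateways
    const vpc = new ec2.Vpc(this, 'OrdersVpc', {
      maxAzs: 2,
      natGateways: 0,
      subnetConfiguration: [
        { name: 'private', subnetType: ec2.SubnetType.PRIVATE_ISOLATED, cidrMask: 24 },
      ],
    });

    // interface VPC endpoint for the API Gateway (execute-api) service
    const apiGwVpcEndpoint = vpc.addInterfaceEndpoint('ApiGwVpcEndpoint', {
      service: ec2.InterfaceVpcEndpointAwsService.APIGATEWAY,
    });

    // the create order Lambda runs in the private subnets, so its calls to the
    // private Stock Platform API route through the VPC endpoint
    const createOrderHandler = new nodeLambda.NodejsFunction(this, 'CreateOrder', {
      entry: 'orders/create-order/create-order.ts',
      handler: 'handler',
      runtime: lambda.Runtime.NODEJS_18_X,
      vpc,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
    });

    // the public Orders API which users invoke to create an order
    const ordersApi = new apigw.RestApi(this, 'OrdersApi');
    ordersApi.root
      .addResource('orders')
      .addMethod('POST', new apigw.LambdaIntegration(createOrderHandler));

    // output the VPC endpoint ID so it can be added to the platform API resource policy
    new cdk.CfnOutput(this, 'VpcEndpointId', { value: apiGwVpcEndpoint.vpcEndpointId });
  }
}
```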
What do teams typically do in my experience? ❌
I quite often see teams build out the following example architecture:

In this example the Stock Platform API is publicly accessible to anyone (i.e. you can hit it from Postman, your mobile device or a cURL command), and is only locked down via an API key. This means that:
- Attackers could try to brute force this API key.
- Attackers could try to DDoS or Denial of Wallet attack the API.
- An internal bad actor could use the API key to access customer data from home without any trace of it.
- API Keys should not be used alone for authentication (see below).
Don’t rely on API keys as your only means of authentication and authorisation for your APIs. If you have multiple APIs in a usage plan, a user with a valid API key for one API in that usage plan can access all APIs in that usage plan. Instead, use an IAM role, a Lambda authorizer, or an Amazon Cognito user pool. — https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
For more information on threats see the following article:
It goes without saying from an architecture layers perspective that the Stock Platform API does not need to be publicly accessible, and should be secure, with all traffic remaining on the internal AWS private network.
What are the limitations of Private APIs? 😔
So let’s discuss some of the limitations of Private API Gateways on AWS before we deep dive further:
Custom Domain Names
Custom domain names are not supported for private APIs. This means you will need to access them via the autogenerated Private DNS API URL, VPC Endpoint (with headers), or Route 53 Alias (see below).
There is a way around this which is massively convoluted, but it is discussed in part 2 of this article.
Private DNS
Private DNS is enabled by default on your VPC Endpoint; however, this means that if you use the VPC Endpoint to connect to your Private API, you won't be able to access any public API Gateways from the same VPC at the same time (see below).


When private DNS is enabled for an API Gateway interface VPC endpoint that’s associated with an Amazon VPC, all requests from the VPC to API Gateway APIs resolve to that interface VPC endpoint. However, it’s not possible to then connect to public APIs using a VPC endpoint at the same time.
Because of the private DNS option enabled on the interface VPC endpoint, DNS queries against *.execute-api.amazonaws.com will be resolved to private IPs of the endpoint. This causes issues when clients in the VPC try to invoke regional or edge-optimized APIs, because those types of APIs must be accessed over the internet. Traffic through interface VPC endpoints is not allowed. The only workaround is to use an edge-optimized custom domain name.
Resources in your VPC that try to connect to your public APIs must have internet connectivity.
An example snippet of code is shown below where DNS is enabled on the VPC Endpoint:
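A minimal sketch of this in the CDK (assuming `vpc` is the orders service VPC defined elsewhere in the stack) might look as follows:

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';

declare const vpc: ec2.Vpc; // the orders service VPC defined elsewhere in the stack

// with private DNS enabled, all *.execute-api.amazonaws.com lookups from this
// VPC resolve to the endpoint's private IPs - private APIs work, but public
// API Gateway URLs no longer resolve from within the VPC
const apiGwVpcEndpoint = vpc.addInterfaceEndpoint('ApiGwVpcEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.APIGATEWAY,
  privateDnsEnabled: true, // this is the default
});
```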
How do we work around this and why is it important?
If you are an organisation with multiple platforms and multiple consumers (like our diagram at the start of this article), you will probably find that those consumers routinely need to call out to both private and public APIs from their VPCs (and will almost certainly also need to call out to third-party APIs, for example fulfilment.acme.com, deliveries.acme.com, etc.).
We always want to make onboarding as simple as possible for consumers of our internal platform APIs, and can't expect them to do a ton of legwork!
There are two main ways to work around this issue without custom domain names which are discussed below:
✅ Resolving this with edge-optimized custom domains
Private DNS settings don't affect the ability to call these public APIs from the VPC if you're using an edge-optimized custom domain name to access the public API. Using an edge-optimized custom domain name to access your public API (while using private DNS to access your private API) is one way to access both public and private APIs from a VPC where the endpoint has been created with private DNS enabled.
Note: This does, however, mean that as a consumer you still have an issue if you need to access private APIs and non-edge-optimized public APIs at the same time, so this is far from ideal.
✅ Private API Route53 Alias
Another way to get around this issue is to access your private API using a Route 53 alias and set Private DNS to false, with the alias created for you when you associate a VPC Endpoint with a Private API Gateway: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-api-test-invoke-url.html (see example below)

You can then use this alias record to invoke your private APIs just as you do your edge-optimized or Regional APIs, without overriding a Host header or passing an x-apigw-api-id header.
The generated Route 53 alias URL to use is in the following format:
https://{rest-api-id}-{vpce-id}.execute-api.{region}.amazonaws.com/{stage}

We use this instead of our private URL, which will no longer work since private DNS is disabled on the VPC Endpoint, i.e. the URL below would not resolve with private DNS disabled:
https://{rest-api-id}.execute-api.eu-west-1.amazonaws.com/{stage}
If we try to hit the Route 53 alias from outside of the VPC, we will not be able to resolve it:

This then allows you to set the 'privateDnsEnabled' property of the VPC Endpoint to false, and adding a route in your private subnet route tables pointing to NAT Gateways in public subnets will now work. Importantly, your private API remains accessible only from within your second VPC, and you can also reach external APIs, both public API Gateways and third-party APIs.
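As a hedged sketch (again assuming an existing `vpc`), the consumer-side endpoint could then be configured along these lines, with calls to the private API going via the Route 53 alias URL shown above:

```typescript
import * as ec2 from 'aws-cdk-lib/aws-ec2';

declare const vpc: ec2.Vpc; // the consuming service's existing VPC

// private DNS disabled: public *.execute-api.amazonaws.com URLs still resolve
// normally from this VPC, and the private API is invoked through the Route 53
// alias URL, i.e. https://{rest-api-id}-{vpce-id}.execute-api.{region}.amazonaws.com/{stage}
const apiGwVpcEndpoint = vpc.addInterfaceEndpoint('ApiGwVpcEndpoint', {
  service: ec2.InterfaceVpcEndpointAwsService.APIGATEWAY,
  privateDnsEnabled: false,
});
```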
Deploying the solution! 👨‍💻
🛑 Note: Running the following commands will incur charges on your AWS accounts.
Let’s deploy the basic code example which you can clone here: https://github.com/leegilmorecode/serverless-private-apis
💡 Note: for my example the deploy NPM scripts use different AWS profiles, as we are deploying to two separate AWS accounts, i.e. that is why they are not two separate CDK stacks in the same app.
- Run the following command in both the order-service and stock-service folders:
npm i
- Change directory into the orders service folder and run:
npm run deploy
- Once deployed, make a note of the VPC Endpoint ID which is output in the terminal.
- In the file serverless-private-apis/stock-service/lib/stock-service-stack.ts, update line 68 to be the correct VPC Endpoint ID.
- Now perform step two again, but in the stock-service folder this time.
- Take the Stock API URL which is output in the deploy outputs, add it to the file orders-service/orders/create-order/create-order.ts on line 7, and redeploy again.
💡 Please note this is the minimal code and architecture to allow us to discuss the key points in the article, so it is not production ready and does not adhere to coding best practices (for example, there is no authentication on the endpoints). I have also tried not to split out the code too much, so the example files below are easy to view with all dependencies in one file.
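As an illustrative, hedged example only (the `/stock` path, environment variable and response shape are assumptions, not the exact repository code), the call from the create-order handler to the private Stock API could look something like this, assuming the Node.js 18+ runtime's global fetch:

```typescript
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

// illustrative only: the repo expects the Stock API URL to be pasted in on
// line 7 of create-order.ts; here it is read from an environment variable
const stockApiUrl = process.env.STOCK_API_URL as string;

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const order = JSON.parse(event.body ?? '{}');

  // call the private Stock Platform API; because the Lambda runs in the
  // private subnets, this request routes through the VPC endpoint and
  // never traverses the public internet (Node.js 18+ global fetch assumed)
  const response = await fetch(`${stockApiUrl}/stock`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: order.items }),
  });

  if (!response.ok) {
    return { statusCode: 502, body: JSON.stringify({ message: 'stock service error' }) };
  }

  return {
    statusCode: 201,
    body: JSON.stringify({ order, stock: await response.json() }),
  };
};
```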
Testing the solution 🎯
Once you have deployed the solution you can use the Postman file in serverless-private-apis/postman/serverless-private-apis.postman_collection.json to try hitting the public orders endpoint, which will in turn call the Stock API privately on the AWS network:

Deeper dive into the architecture
The diagram below shows how you can reuse your VPC Endpoint for calls through to multiple private APIs in your organisation, i.e. you don't need one VPC Endpoint for every Private API you want to use.
The resource policy on the private API itself dictates which VPC Endpoints (or VPCs) can route traffic through to it, so it is fully secure. As you can see from the example below we are stating on the resource policy for the Payments Platform API that we are only going to allow traffic in from VPC endpoint vpce-0d9643ccd883bac3a.
We would do the same for the Stock Platform resource policy, and require only one VPC Endpoint in the Orders private subnets.
Note: you can also add a VPC Endpoint policy to further restrict which traffic can route through your VPC Endpoint, therefore securing both ends.
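As a hedged CDK sketch inside the platform API's stack (construct names and the placeholder method are illustrative), the resource policy shown in the diagram could be expressed roughly like this:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as iam from 'aws-cdk-lib/aws-iam';
import { Construct } from 'constructs';

export class PaymentsPlatformStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // the VPC endpoint ID output by the consuming (orders) stack, e.g. the
    // vpce-0d9643ccd883bac3a value shown in the diagram
    const allowedVpcEndpointId = 'vpce-0d9643ccd883bac3a';

    const paymentsApi = new apigw.RestApi(this, 'PaymentsApi', {
      endpointConfiguration: { types: [apigw.EndpointType.PRIVATE] },
      policy: new iam.PolicyDocument({
        statements: [
          // allow invoking the API...
          new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            principals: [new iam.AnyPrincipal()],
            actions: ['execute-api:Invoke'],
            resources: ['execute-api:/*'],
          }),
          // ...but deny any request that did not come through the allowed VPC endpoint
          new iam.PolicyStatement({
            effect: iam.Effect.DENY,
            principals: [new iam.AnyPrincipal()],
            actions: ['execute-api:Invoke'],
            resources: ['execute-api:/*'],
            conditions: {
              StringNotEquals: { 'aws:SourceVpce': allowedVpcEndpointId },
            },
          }),
        ],
      }),
    });

    // placeholder resource/method (defaults to a mock integration) so the API can deploy
    paymentsApi.root.addResource('payments').addMethod('GET');
  }
}
```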

Summary
I hope you found that useful! In Part 2 of this article I am going to show how you can make it easier for consumers to use your private APIs by using a workaround with custom domain names. Let's face it, it's not great for consumers of your internal APIs to need to use VPC Endpoints and all of these workarounds.
In Part 2 of this article we cover how we can use Custom Domains with internal Private APIs.
Go and subscribe to my Enterprise Serverless Newsletter here for more of the same content:
Wrapping up 👋
I hope you found that useful!
Please go and subscribe on my YouTube channel for similar content!

I would love to connect with you also on any of the following:
https://www.linkedin.com/in/lee-james-gilmore/
https://twitter.com/LeeJamesGilmore
If you found the articles inspiring or useful please feel free to support me with a virtual coffee https://www.buymeacoffee.com/leegilmore and either way let's connect and chat! ☕️
If you enjoyed the posts please follow my profile Lee James Gilmore for further posts/series, and don’t forget to connect and say Hi 👋
Please also use the ‘clap’ feature at the bottom of the post if you enjoyed it! (You can clap more than once!!)
This article is sponsored by Sedai.io

About me
“Hi, I’m Lee, an AWS Community Builder, Blogger, AWS certified cloud architect and Principal Software Engineer based in the UK; currently working as a Technical Cloud Architect and Principal Serverless Developer, having worked primarily in full-stack JavaScript on AWS for the past 5 years.
I consider myself a serverless advocate with a love of all things AWS, innovation, software architecture and technology.”
*** The information provided are my own personal views and I accept no responsibility on the use of the information. ***
You may also be interested in the following: