Monday, 23 March 2020

AWS Serverless Web Application Architecture

Recently, I've been exploring ideas about how to put together different AWS services to achieve a totally serverless architecture for web applications. One of the newer offerings is API Gateway's HTTP API, which simplifies the integration with Lambda.

One general principle I want to follow when designing these architectural models is the separation of four main concerns:

  • Identity service to handle authentication and authorization
  • Static asset traffic segregated from the rest
  • Dynamic page rendering and server-side logic
  • Application configuration kept outside the code

All these components will be publicly accessible via Route 53 DNS record sets pointing to the relevant endpoints.

Other general ideas across all design diagrams below are:

  • Cognito will handle authentication and authorization
  • S3 will store all static assets
  • Lambda will execute server-side logic
  • SSM Parameter Store will hold all configuration settings
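
To make the configuration point concrete, here is a minimal sketch (in Python, using boto3) of a Lambda handler that loads its settings from SSM Parameter Store on each invocation. The /webapp/ parameter prefix and the media-domain setting are illustrative assumptions, not fixed parts of the designs below.

    import boto3

    ssm = boto3.client("ssm")

    def get_config(prefix="/webapp/"):
        """Load every parameter under the (hypothetical) prefix into a dict."""
        config = {}
        paginator = ssm.get_paginator("get_parameters_by_path")
        for page in paginator.paginate(Path=prefix, Recursive=True, WithDecryption=True):
            for param in page["Parameters"]:
                # Strip the prefix so keys read like plain setting names.
                config[param["Name"][len(prefix):]] = param["Value"]
        return config

    def handler(event, context):
        config = get_config()
        # Server-side logic would use the settings here, e.g. the media
        # subdomain used to build URLs for static assets.
        media_host = config.get("media-domain", "media.example.com")
        body = f"<html><body><img src='https://{media_host}/logo.png'></body></html>"
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "text/html"},
            "body": body,
        }

In practice the loaded settings could be cached across invocations so we don't hit Parameter Store on every single request.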

Architecture variant 1 - HTTP API and CDN publicly exposed

In this first approach, we have the S3 bucket behind CloudFront, which is a common pattern when creating CDN-like structures. CloudFront takes care of all the caching behaviours as well as distributing the cached versions across the Edge locations, so subsequent requests are served with reduced latency. In this variant, CloudFront has only one cache behaviour (the default) and one origin, the S3 bucket.

It's also important to note that the CloudFront distribution has an Alternate Domain Name set to the relevant record set, e.g. media.example.com, and let's not forget to reference the ACM SSL certificate so we can use the custom URL and not the random one CloudFront assigns.
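
For reference, these are the pieces of the CloudFront DistributionConfig that the paragraph above is talking about: the S3 origin, the alias and the ACM certificate. The bucket name and certificate ARN are placeholders, and the remaining required fields of the config are omitted.

    # Excerpt of a CloudFront DistributionConfig; only the fields relevant to
    # this variant are shown, with placeholder names and ARNs.
    distribution_config_excerpt = {
        "Aliases": {"Quantity": 1, "Items": ["media.example.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "static-assets-s3",
                "DomainName": "my-static-assets.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "ViewerCertificate": {
            # The ACM certificate must be issued in us-east-1 for CloudFront.
            "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE",
            "SSLSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.2_2019",
        },
    }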

From the HTTP API perspective, there is only one integration, a Lambda integration on the $default route, which means all requests arriving at the HTTP endpoint are directed to the Lambda function in question.
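
A minimal sketch of that setup using boto3's apigatewayv2 client: passing a Lambda ARN as Target on create_api ("quick create") wires the function to the $default route as an AWS_PROXY integration. The API name, region and ARN are placeholders, and the function still needs a resource-based policy allowing API Gateway to invoke it.

    import boto3

    apigw = boto3.client("apigatewayv2")

    # Quick create: a single HTTP API whose $default route proxies to Lambda.
    api = apigw.create_api(
        Name="webapp-http-api",
        ProtocolType="HTTP",
        Target="arn:aws:lambda:eu-west-1:123456789012:function:webapp-handler",
    )
    print(api["ApiEndpoint"])  # the random execute-api URL assigned on creation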

Similar to the CloudFront case, the HTTP API requires a Custom Domain and certificate in order to use a custom URL as opposed to the random one assigned by the service on creation.
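
Something along these lines should do it, assuming the certificate has already been issued in ACM in the same region as the API; the domain name, certificate ARN and API id are placeholders.

    import boto3

    apigw = boto3.client("apigatewayv2")

    # Register the custom domain with its ACM certificate.
    domain = apigw.create_domain_name(
        DomainName="webapp.example.com",
        DomainNameConfigurations=[{
            "CertificateArn": "arn:aws:acm:eu-west-1:123456789012:certificate/EXAMPLE",
            "EndpointType": "REGIONAL",
            "SecurityPolicy": "TLS_1_2",
        }],
    )

    # Map the HTTP API's $default stage onto the custom domain.
    apigw.create_api_mapping(
        ApiId="abc123",
        DomainName="webapp.example.com",
        Stage="$default",
    )

    # Route 53 then needs an alias record pointing webapp.example.com at the
    # regional domain name API Gateway hands back.
    print(domain["DomainNameConfigurations"][0]["ApiGatewayDomainName"])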

Architecture variant 2 - HTTP API behind CloudFront

In this second approach, we still have the S3 bucket behind CloudFront, following the same pattern. However, we've placed the HTTP API behind CloudFront as well.

CloudFront becomes the traffic controller in this case: several cache behaviours can be defined to decide which origin each request should be routed to.
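
As an illustration, the routing part of the DistributionConfig for this variant could look roughly like this: path-pattern behaviours send static asset requests to the S3 origin, and everything else falls through to the default behaviour, whose origin is the HTTP API. Origin ids, domains and path patterns are placeholders, and only the relevant fields are shown.

    # Excerpt: CloudFront as traffic controller, routing by path pattern.
    routing_excerpt = {
        "CacheBehaviors": {
            "Quantity": 2,
            "Items": [
                {"PathPattern": "/css/*", "TargetOriginId": "static-assets-s3",
                 "ViewerProtocolPolicy": "redirect-to-https"},
                {"PathPattern": "/js/*", "TargetOriginId": "static-assets-s3",
                 "ViewerProtocolPolicy": "redirect-to-https"},
            ],
        },
        "DefaultCacheBehavior": {
            # e.g. abc123.execute-api.eu-west-1.amazonaws.com as a custom origin
            "TargetOriginId": "http-api-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
        },
    }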

Both record sets (media and webapp) point to the same CloudFront distribution; it's the application logic's responsibility to request all static assets using the appropriate domain name.

Since the HTTP API is behind a CloudFront distribution, I'd suggest setting it up as a Regional endpoint.

Architecture variant 3 - No CloudFront at all

Continuing to play with this idea, what if we don't use a CloudFront distribution at all? I gave it a go, and it turns out it's possible to achieve similar results.

We can use two HTTP APIs, setting one to forward traffic to S3 for static assets and the other one to Lambda as per the usual pattern, each of them with its own Custom Domain, and that solves the problem.

But I wanted to push it a little bit further. This time I tried with only one HTTP API and several routes, e.g. "/css/*" and "/js/*" integrate with S3 while any other route integrates with Lambda. It's then the application logic's responsibility to request all static assets using the appropriate URL, as sketched below.
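
Here's a rough sketch of that single-API setup with boto3, assuming the static assets bucket is exposed through its public website endpoint (bucket name, function ARN and API id are placeholders): greedy routes for the asset prefixes use an HTTP_PROXY integration to S3, and the $default route falls back to the Lambda integration.

    import boto3

    apigw = boto3.client("apigatewayv2")
    api_id = "abc123"  # placeholder HTTP API id

    # HTTP_PROXY integration forwarding to the S3 website endpoint; {proxy}
    # is filled in from the greedy path variable on the matching routes.
    s3_integration = apigw.create_integration(
        ApiId=api_id,
        IntegrationType="HTTP_PROXY",
        IntegrationMethod="GET",
        IntegrationUri="http://my-static-assets.s3-website-eu-west-1.amazonaws.com/{proxy}",
        PayloadFormatVersion="1.0",
    )

    # AWS_PROXY integration for the Lambda function rendering dynamic pages.
    lambda_integration = apigw.create_integration(
        ApiId=api_id,
        IntegrationType="AWS_PROXY",
        IntegrationUri="arn:aws:lambda:eu-west-1:123456789012:function:webapp-handler",
        PayloadFormatVersion="2.0",
    )

    # Static asset routes go to S3, anything else ($default) goes to Lambda.
    for route_key in ["GET /css/{proxy+}", "GET /js/{proxy+}"]:
        apigw.create_route(ApiId=api_id, RouteKey=route_key,
                           Target=f"integrations/{s3_integration['IntegrationId']}")

    apigw.create_route(ApiId=api_id, RouteKey="$default",
                       Target=f"integrations/{lambda_integration['IntegrationId']}")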

Conclusion

These are some ideas I've been experimenting with. The choice of whether or not to include a CloudFront distribution depends on the concrete use case: whether the source of our requests is local or globally diverse, and whether it is more suitable to have static assets under a subdomain or under a virtual directory on the same host name.

Never underestimate the power and flexibility of API Gateway, especially the new HTTP API, which can front any combination of resources in the back end.
