Saturday, 11 April 2020

AWS HttpApi with Cognito as JWT Authorizer

With the recent release of HttpApi from AWS, I've been playing with it for a bit, and I wanted to see how far I could get using authorization without handling any of that logic in my application.

Creating a base code

Starting with a simple base, let's set up the initial scenario: no authentication at all. The architecture is the typical HttpApi -> Lambda; in this case, the Lambda content is irrelevant, so I've just used inline code to test that it's working.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  Sample SAM Template using HTTP API and Cognito Authorizer

Resources:

  # Dummy Lambda function (resource name assumed; it wasn't
  # preserved in the original listing)
  TestFunction:
    Type: AWS::Serverless::Function
    Properties:
      InlineCode: |
        exports.handler = function(event, context, callback) {
          const response = {
            test: 'Hello HttpApi',
            claims: event.requestContext.authorizer &&
                    event.requestContext.authorizer.jwt.claims
          };
          callback(null, response);
        };
      Handler: index.handler
      Runtime: nodejs12.x
      Timeout: 30
      MemorySize: 256
      Events:
        GetOpen:
          Type: HttpApi
          Properties:
            Path: /test
            Method: GET
            ApiId: !Ref HttpApi
            Auth:
              Authorizer: NONE

  HttpApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      CorsConfiguration:
        AllowOrigins:
          - "*"

Outputs:

  ApiUrl:
    Description: URL of your API endpoint
    Value: !Sub 'https://${HttpApi}.execute-api.${AWS::Region}.${AWS::URLSuffix}/'

  HttpApiId:
    Description: Api id of HttpApi
    Value: !Ref HttpApi

With the outputs of this template, we can hit the endpoint and see that it is, in fact, accessible. So far, nothing crazy here.

$ curl
{"test":"Hello HttpApi"}

Creating a Cognito UserPool and Client

The claims object is not populated because the request wasn't authenticated since no token was provided, as expected.

Let's create the Cognito UserPool with a very simple configuration, assuming lots of default values since they're not relevant for this example.

  ## Add this fragment under Resources:

  # User pool - simple configuration 
  UserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      AdminCreateUserConfig:
        AllowAdminCreateUserOnly: false
      UsernameAttributes:
        - email
      MfaConfiguration: "OFF"
      Schema:
        - AttributeDataType: String
          Mutable: true
          Name: name
          Required: true
        - AttributeDataType: String
          Mutable: true
          Name: email
          Required: true
      AutoVerifiedAttributes:
        - email

  # User Pool client
  UserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: AspNetAppLambdaClient
      GenerateSecret: false
      PreventUserExistenceErrors: ENABLED
      RefreshTokenValidity: 30
      ExplicitAuthFlows:
        - ALLOW_USER_PASSWORD_AUTH
        - ALLOW_USER_SRP_AUTH
        - ALLOW_REFRESH_TOKEN_AUTH
      SupportedIdentityProviders:
        - COGNITO
      UserPoolId: !Ref UserPool

  ## Add this fragment under Outputs:

  UserPoolId:
    Description: UserPool ID
    Value: !Ref UserPool

  UserPoolClientId:
    Description: UserPoolClient ID
    Value: !Ref UserPoolClient

Once we have the Cognito UserPool and a client, we are in a position to start putting things together. But before that, there are a few things to clarify:

  • The username is the email, and only two fields are required to create a user: name and email.
  • The client defines both the ALLOW_USER_PASSWORD_AUTH and ALLOW_USER_SRP_AUTH auth flows, to be used by different client code.
  • No secret is generated for this client; if you intend to use other flows, you'll need to create other clients accordingly.

Adding authorization information

The next step is to add authorization information to the HttpApi.

  ## Replace the HttpApi resource with this one.

  HttpApi:
    Type: AWS::Serverless::HttpApi
    Properties:
      CorsConfiguration:
        AllowOrigins:
          - "*"
      Auth:
        Authorizers:
          OpenIdAuthorizer:
            IdentitySource: $request.header.Authorization
            JwtConfiguration:
              audience:
                - !Ref UserPoolClient
              issuer: !Sub https://cognito-idp.${AWS::Region}.amazonaws.com/${UserPool}
        DefaultAuthorizer: OpenIdAuthorizer

We've added authorization information to the HttpApi: the JWT issuer is the Cognito UserPool previously created, and tokens are intended only for that client.

If we test again, nothing changes, because the event associated with the Lambda function explicitly says "Authorizer: NONE".

To test this, we'll create a new event associated with the same lambda function but this time we'll add some authorization information to it.

        ## Add this fragment at the same level as GetOpen
        ## under Events as part of the function properties

        GetSecure:   # event name assumed
          Type: HttpApi
          Properties:
            ApiId: !Ref HttpApi
            Method: GET
            Path: /secure
            Auth:
              Authorizer: OpenIdAuthorizer

If we test the new endpoint /secure, then we'll see the difference.

$ curl -v

>>>>>> removed for brevity >>>>>>
> GET /secure HTTP/1.1
> Host:
> User-Agent: curl/7.52.1
> Accept: */*
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 401 
< date: Sat, 11 Apr 2020 17:19:50 GMT
< content-length: 26
< www-authenticate: Bearer
< apigw-requestid: K1RYliB1IAMESNA=
* Curl_http_done: called premature == 0
* Connection #0 to host left intact


At this point we have a new endpoint that requires an access token. Now we need a token, but to get a token, we need a user first.

Fortunately Cognito can provide us with all we need in this case. Let's see how.

Creating a Cognito User

The Cognito CLI provides the commands to sign up and verify user accounts.

$ aws cognito-idp sign-up \
  --client-id asdfsdfgsdfgsdfgfghsdf \
  --username \
  --password Test.1234 \
  --user-attributes Name="email",Value="" Name="name",Value="Abel Perez" \
  --profile default \
  --region us-east-1

    "UserConfirmed": false, 
    "UserSub": "aaa30358-3c09-44ad-a2ec-5f7fca7yyy16", 
    "CodeDeliveryDetails": {
        "AttributeName": "email", 
        "Destination": "a***@e***.com", 
        "DeliveryMedium": "EMAIL"

After creating the user, it needs to be verified.

$ aws cognito-idp admin-confirm-sign-up \
  --user-pool-id us-east-qewretry \
  --username \
  --profile default \
  --region us-east-1

This command gives no output. To check that we are good to go, let's use the admin-get-user command:

$ aws cognito-idp admin-get-user \
  --user-pool-id us-east-qewretry \
  --username \
  --profile default \
  --region us-east-1 \
  --query UserStatus

"CONFIRMED"

We have a confirmed user!

Getting a token for the Cognito User

To obtain an Access Token, we use the Cognito initiate-auth command providing the client, username and password.

$ TOKEN=`aws cognito-idp initiate-auth \
  --client-id asdfsdfgsdfgsdfgfghsdf \
  --auth-flow USER_PASSWORD_AUTH \
  --auth-parameters USERNAME=<email>,PASSWORD=Test.1234 \
  --profile default \
  --region us-east-1 \
  --query AuthenticationResult.AccessToken \
  --output text`

$ echo $TOKEN
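Before calling the endpoint, it can be handy to peek inside the token. A JWT is three base64url segments separated by dots, so a small helper like the sketch below can decode the payload locally. This helper is my own addition for debugging; it performs no signature validation whatsoever.

```shell
# Decode a JWT's payload segment for inspection (no signature check).
decode_jwt_payload() {
  payload=$(echo "$1" | cut -d. -f2 | tr '_-' '/+')
  # Restore the padding that base64url encoding strips
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  echo "$payload" | base64 -d
}

# Usage: decode_jwt_payload "$TOKEN"
```

The decoded payload should show the same claims the Lambda echoes back later, including token_use and client_id.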

With the access token in hand, it's time to test the endpoint with it.

$ curl -H "Authorization:Bearer $TOKEN"
# some formatting added here
{
    "test": "Hello HttpApi",
    "claims": {
        "auth_time": "1586627310",
        "client_id": "asdfsdfgsdfgsdfgfghsdf",
        "event_id": "94872b9d-e5cc-42f2-8e8f-1f8ad5c6e1fd",
        "exp": "1586630910",
        "iat": "1586627310",
        "iss": "",
        "jti": "878b2acd-ddbd-4e68-b097-acf834291d09",
        "sub": "cce30358-3c09-44ad-a2ec-5f7fca7dbd16",
        "token_use": "access",
        "username": "cce30358-3c09-44ad-a2ec-5f7fca7dbd16"
    }
}

Voilà! We've accessed the secure endpoint with a valid access token.

What about groups?

I wanted to know more about granular control of authorization, so I went and created two Cognito groups, let's say Group1 and Group2. Then I added my newly created user to both groups and repeated the experiment.

Once the user was added to the groups, I got a new token and issued the request to the secure endpoint.

$ curl -H "Authorization:Bearer $TOKEN"
# some formatting added here
{
    "test": "Hello HttpApi",
    "claims": {
        "auth_time": "1586627951",
        "client_id": "2p9k1pfhtsbr17a2fukr5mqiiq",
        "cognito:groups": "[Group2 Group1]",
        "event_id": "c450ae9e-bd4e-4882-b085-5e44f8b4cefd",
        "exp": "1586631551",
        "iat": "1586627951",
        "iss": "",
        "jti": "51a39fd9-98f9-4359-9214-000ea40b664e",
        "sub": "cce30358-3c09-44ad-a2ec-5f7fca7dbd16",
        "token_use": "access",
        "username": "cce30358-3c09-44ad-a2ec-5f7fca7dbd16"
    }
}

Notice that within the claims object a new claim has come up, "cognito:groups", and the value associated with it is "[Group2 Group1]".

This means we could potentially check this claim's value to make some decisions in our application logic, without having to handle all of the authentication inside the application code base.
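As a rough sketch of that idea, here's a shell fragment that checks the response shown above for Group1 membership. The response is inlined as a literal for illustration; in practice it would come from the curl call, and real code should parse the JSON properly rather than pattern-match on it.

```shell
# Response trimmed to the relevant claim (values copied from the output above)
RESPONSE='{"test":"Hello HttpApi","claims":{"cognito:groups":"[Group2 Group1]"}}'

# Naive string check on the cognito:groups claim
if echo "$RESPONSE" | grep -q '"cognito:groups": *"\[[^]]*Group1[^]]*\]"'; then
  RESULT="member of Group1"
else
  RESULT="not a member of Group1"
fi
echo "$RESULT"
```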

This opens the possibility for more exploration within the AWS ecosystem. I hope this has been helpful; the full source code can be found at

Monday, 23 March 2020

AWS Serverless Web Application Architecture

Recently, I've been exploring ideas about how to put together different AWS services to achieve a totally serverless architecture for web applications. One of the new services is the HTTP API which simplifies the integration with Lambda.

One general principle I want to follow when designing these architectural models is the separation of these main subsystems:

  • Identity service to handle authentication and authorization
  • Static assets traffic being segregated
  • Dynamic page rendering and server side logic
  • Application configuration outside the code

All these components will be publicly accessible via Route 53 DNS record sets pointing to the relevant endpoints.

Other general ideas across all design diagrams below are:

  • Cognito will handle authentication and authorization
  • S3 will store all static assets
  • Lambda will execute server side logic
  • SSM Parameter Store will hold all configuration settings

Architecture variant 1 - HTTP API and CDN publicly exposed

In this first approach, we have the S3 bucket behind CloudFront, which is a common pattern when creating CDN-like structures. CloudFront takes care of all the caching behaviours as well as distributing the cached versions across the Edge locations, so subsequent requests will be dispatched at a reduced latency. In this variant, CloudFront has only one cache behaviour (the default) and one origin (the S3 bucket).

It's also important to notice that the CloudFront distribution has an Alternative Domain Name set to the relevant record set, and let's not forget about referencing the ACM SSL certificate so we can use the custom URL and not the random one from CloudFront.

From the HTTP API perspective, it has only one integration, a Lambda integration on the $default route, which means all requests coming to the HTTP endpoint will be directed to the Lambda function in question.

Similar to the case of CloudFront, the HTTP API requires a Custom Domain and Certificate to be able to use a custom url as opposed to the random one given by the API service on creation.

Architecture variant 2 - HTTP API behind CloudFront

In this second approach, we still have the S3 bucket behind CloudFront, following the same pattern. However, we've now placed the HTTP API behind CloudFront as well.

CloudFront becomes the traffic controller in this case: several cache behaviours can be defined to decide where to route each request.

Both record sets (media and webapp) point to the same CloudFront distribution; it's the application logic's responsibility to request all static assets using the appropriate domain name.

Since the HTTP API is behind a CloudFront distribution, I'd suggest setting it up as a Regional endpoint.

Architecture variant 3 - No CloudFront at all

Continuing to play with this idea: what if we don't use a CloudFront distribution at all? I gave it a go, and it turns out it's possible to achieve similar results.

We can use two HTTP APIs, setting one to forward traffic to S3 for static assets and the other to Lambda as per the usual pattern, each with its own Custom Domain, and that solves the problem.

But I wanted to push it a little bit further. This time I tried with only one HTTP API, setting several routes, e.g. "/css/*" and "/js/*" integrate with S3 while any other route integrates with Lambda. It's then the application logic's responsibility to request all static assets using the appropriate URL.


These are some ideas I've been experimenting with. Whether or not to include a CloudFront distribution depends on the concrete use case: whether the source of our requests is local or globally diverse, and whether it is more suitable to have static assets under a subdomain or a virtual directory under the same host name.

Never underestimate the power and flexibility of an API Gateway, especially the new HTTP API, which can front any combination of resources in the back end.

Saturday, 13 July 2019

Run ASP.NET Core 2.2 on a Raspberry Pi Zero

Raspberry Pi Zero (and Zero W) is a cool and cheap piece of technology that can run software anywhere. Being primarily a .NET developer, I wanted to try running the current version of ASP.NET Core on it, which as of now is 2.2.

However, the first problem is that even though .NET Core supports ARM CPUs, it does not support ARM32v6, only v7 and above. After digging a bit, I found out that Mono does support that CPU and, on top of that, it's binary compatible with .NET Framework 4.7.

In this post, I'll summarise several hours of trial and error to get it working. If developing on Windows, targeting both netcoreapp2.2 and net472 is easier, since chances are we'll have everything installed already. On Linux, however, it's not that easy, and that's where Mono comes in to help: we need its reference assemblies to build the net472 version.

Let’s check the tools we’ll use:

$ dotnet --version
$ docker --version
Docker version 18.09.0, build 4d60db4

Create a new dotnet core MVC web application to get the starting template.

$ dotnet new mvc -o dotnet.mvc

Update TargetFramework to TargetFrameworks if we want to actually target both (we could target only net472 if desired). Also, since the Microsoft.AspNetCore.App and Microsoft.AspNetCore.Razor.Design metapackages won't be available on .NET Framework, reference the NuGet packages directly instead; here is an example with the ones the default template uses.

<Project Sdk="Microsoft.NET.Sdk.Web">


    <PackageReference Include="Microsoft.AspNetCore" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.Hosting.WindowsServices" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.HttpsPolicy" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.CookiePolicy" Version="2.2.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="2.2.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.EventLog" Version="2.2.0" />
    <PackageReference Include="Microsoft.Extensions.Options" Version="2.2.0" />


When targeting net472, we need to use Mono's reference assemblies, since they're not part of dotnet core. To do that, we set the environment variable FrameworkPathOverride to the appropriate path, typically something like /usr/lib/mono/4.7.2-api/.

export FrameworkPathOverride=/usr/lib/mono/4.7.2-api/

Then proceed as usual with dotnet cli

  • dotnet restore
  • dotnet build
  • dotnet run

To make the build more consistent across platforms, we'll create a Dockerfile; this will also let us build locally without having to install Mono on our development environments. This is a multi-stage Dockerfile that builds the application and creates the runtime image.

More information on how to create the docker file for dotnet core applications.

FROM pomma89/dotnet-mono:dotnet-2-mono-5-sdk AS build-env
WORKDIR /app

ENV FrameworkPathOverride /usr/lib/mono/4.7.2-api/

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out -f net472

# Build runtime image for ARM v5 using mono
FROM arm32v5/mono:5.20
WORKDIR /app
ENV ASPNETCORE_URLS http://+:5000
EXPOSE 5000
COPY --from=build-env /app/out .
ENTRYPOINT [ "mono", "dotnet.mvc.exe" ]

Since we are building with Mono, the resulting executable is, in this case, the dotnet.mvc.exe file. The environment variable ASPNETCORE_URLS is required in order to listen on any network interface; by default it only listens on localhost, which is, in fact, the container itself and not our host. Combined with EXPOSE 5000, it's possible to access the application from outside the container.

The secret ingredient to make it work on a raspberry pi zero (ARM32v6) is the line “FROM arm32v5/mono:5.20” which takes a base docker image using a compatible CPU architecture.

To run this application locally we use the traditional dotnet run but since we’ve specified two target frameworks, the parameter -f is required to specify which one we want to use. In this case, chances are we’re using netcoreapp2.2 locally and net472 to build and run on the raspberry pi.

$ dotnet run -f netcoreapp2.2

Once we're happy the application works as expected locally, we can build the docker image locally to check that the whole docker build process works before deploying to the device. To test this way, we have to modify the second FROM line and remove the "arm32v5/" prefix from the image name, taking the PC version of mono instead.

The following sequence shows the docker commands.

Build the image locally

$ docker build -t test .

Either run in interactive mode -i -t or run in detached mode -d

$ docker run --rm -it -p 3000:5000 test
$ docker run --rm -d -p 3000:5000 test

Once it’s running, we can just browse to http://localhost:3000/ and verify it’s up and running.

Now we can go back and modify the second FROM line by putting “arm32v5/” where it was.

# build the image and tag it to my docker hub registry 
$ docker build -t abelperezok/mono-mvc:arm32v5-test .
# don’t forget to log in to docker hub
$ docker login
# push this image to docker hub registry 
$ docker push abelperezok/mono-mvc:arm32v5-test

Once it's uploaded to the registry, we can connect to the raspberry pi and run the following docker run command.

# pull and run the container from this image 
$ docker run --rm -d -p 3000:5000 abelperezok/mono-mvc:arm32v5-test

Helpful links

Saturday, 2 March 2019

How to create a dynamic DNS with AWS Route 53

Have you ever wanted to host some pet project or even a small website on your own domestic network? If so, you must have stumbled across the DNS resolution issue: since we depend on our ISP to get our "real IP address", there's no guarantee that the IP we see today will remain the same tomorrow.

You could update your domain's DNS records every time you detect the IP has changed, but obviously that's tedious and error-prone. That's where dynamic DNS (DDNS) comes in; services like No-IP, Duck DNS, etc. can be helpful.

In this post I'll go through the steps involved when it comes to setting up your own DDNS service when you own a domain that has been registered in AWS Route 53.


Prerequisites

Before running the commands I suggest here, we need to set up a couple of things. Let's start by installing the required packages if you haven't already.

  • sudo apt-get install awscli
  • sudo apt-get install jq

Configure AWS credentials

$ aws configure
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-west-1
Default output format [None]: json

A hosted zone in AWS Route 53

$ aws route53 list-hosted-zones
{
    "HostedZones": [
        {
            "ResourceRecordSetCount": 5, 
            "CallerReference": " ... ", 
            "Config": {
                "Comment": "HostedZone created by Route53 Registrar", 
                "PrivateZone": false
            }, 
            "Id": "/hostedzone/Z2TLEXAMPLEZONE", 
            "Name": ""
        }
    ]
}

What's the plan?

The scenario I'm covering here is probably one of the most common. I want to create a new subdomain that points to my external IP and update that record as the external IP changes. To achieve that, we'll follow the steps:

  • Find out the external IP.
  • Get the desired hosted zone.
  • Create the A record (or update it if it already exists).
  • Set it up to run regularly (cron job).

Script step by step

Let's start like any script, with the shebang line. Then come the input variables; in this case we define the domain name and the public record. Notice the "." at the end of the public record, which is required by Route 53.
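The variable block itself didn't survive the formatting here, but it boils down to something like the following, where the domain and record values are placeholders to substitute with your own:

```shell
#!/bin/bash

# Input variables - placeholder values for illustration
DOMAIN_NAME=example.com
# Trailing "." is required by Route 53
PUBLIC_RECORD=home.example.com.
```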


Find the external IP, there are many ways to obtain this value, but since we're using AWS, let's get it from checkip endpoint. For debugging purposes we echo the IP found.

IP=$(curl -s https://checkip.amazonaws.com)
echo Found IP=$IP

Determine the hosted zone id. This step is optional if you want to use a hard-coded zone id copied from the AWS Route 53 console. In this case I've invoked the list-hosted-zones-by-name command, which gives the hosted zone information for a specific domain; the format is as shown above in the prerequisites section. To extract the exact id, I used a combination of the jq and sed commands.

R53_HOSTED_ZONE=`aws route53 list-hosted-zones-by-name \
--dns-name $DOMAIN_NAME \
--query HostedZones \
| jq -r ".[] | select(.Name == \"$DOMAIN_NAME.\").Id" \
| sed 's/\/hostedzone\///'`
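To see the jq/sed filtering in isolation, we can run it against a canned response shaped like the one in the prerequisites section. The domain and zone id below are made up, and this assumes jq is installed as listed earlier:

```shell
# Canned hosted-zone payload with a fake zone id
SAMPLE='[{"Id":"/hostedzone/Z2TLEXAMPLEZONE","Name":"example.com."}]'
DOMAIN_NAME=example.com

# Same filter as the script: pick the zone by name, strip the prefix
ZONE_ID=$(echo "$SAMPLE" \
  | jq -r ".[] | select(.Name == \"$DOMAIN_NAME.\").Id" \
  | sed 's/\/hostedzone\///')
echo "$ZONE_ID"
```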

Now that we have all the required information, let's prepare the A record JSON input that will be used by the aws route53 change-resource-record-sets command. The action "UPSERT" creates or updates the record accordingly; otherwise we'd need to manually check whether it exists before updating it. Again, for debugging purposes, we echo the final JSON.

read -r -d '' R53_ARECORD_JSON << EOM
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "$PUBLIC_RECORD",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "$IP"
          }
        ]
      }
    }
  ]
}
EOM

echo About to execute change
echo "$R53_ARECORD_JSON" 

With the input ready, we invoke the Route 53 command to create/update the A record. This command returns immediately, but the operation takes a few seconds to complete. If we want to know when it's completed, we need the change id returned by the command; in this case, it's stored in the R53_ARECORD_ID variable. This is optional, as is the next step.

R53_ARECORD_ID=`aws route53 change-resource-record-sets \
--hosted-zone-id $R53_HOSTED_ZONE \
--change-batch "$R53_ARECORD_JSON" \
--query ChangeInfo.Id \
--output text`

echo Waiting for the change to update.

At this point, the request to create/update the A record is in progress, we could finish the script right now. However, I'd like to get a final confirmation that the operation has been completed. To do that, we can use the wait command providing the change id from the previous request.

aws route53 wait resource-record-sets-changed --id $R53_ARECORD_ID

echo Done.

And now it's actually completed. Save this script to a file; it will need execute permission.

$ chmod u+x ./

Set up cron job

In this particular instance, I want this script to run on one of my raspberry pis, so I copied the script file to the pi user's home directory (/home/pi/).

pi@raspberrypi:~ $ ls

Now we'll set up a user cron job. We do that by running the command crontab -u followed by the user the job should run as; this job doesn't require any system-wide privilege, so it can run as the regular user, pi. The -e flag opens the file for editing.

pi@raspberrypi:~ $ crontab -u pi -e

All we need to do is append the following line to the file content you are prompted with. The first two numbers correspond to the minute and hour to run at; for testing purposes, I set it near the current time. The script output is appended to a text file so we can review it afterwards if desired.

23 22 * * * /home/pi/ >> /home/pi/cron-update-dns.txt

See it in action

Once we've saved the crontab file, around the time the cron job is due to run we can watch it happen with the tail -f command.

pi@raspberrypi:~ $ tail -f cron-update-dns.txt

Finally, don't forget to update the port forwarding section on your home router so your open port directs traffic to the right device, in my case, that particular raspberry pi.

Sunday, 17 February 2019

How to Install docker on raspbian stretch

As part of one of my recent experiments I wanted to install docker on my raspberry pi. The procedure nowadays seems significantly better than a couple of years ago when I first tried. However, there are always some little details that can be a bit frustrating when following steps.

Although the process to install docker is well described on their website, it's focused on the Debian distribution. We all know that Raspbian is based on Debian, and therefore most of the instructions apply without much trouble.

That said, I will detail the exact steps I followed to get docker installed on my raspberry pi, this procedure was tested on 3B, Zero and Zero W.

First, update the package list, if you haven't done yet after setting up your raspberry pi.

$ sudo apt-get update

Install packages to allow apt to use a repository over HTTPS.

$ sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common
Add Docker’s official GPG key. Note that here the url points to raspbian directory and not to debian as per the original instructions.

curl -fsSL https://download.docker.com/linux/raspbian/gpg | sudo apt-key add -

The next recommended step didn't work for me.

sudo add-apt-repository \
   "deb [arch=armhf] https://download.docker.com/linux/raspbian \
   $(lsb_release -cs) \
   stable"

It failed with an error similar to this:

Traceback (most recent call last):
File "/usr/bin/add-apt-repository", line 95, in 
  sp = SoftwareProperties(options=options)
File "/usr/lib/python3/dist-packages/softwareproperties/", line 109, in __init__
File "/usr/lib/python3/dist-packages/softwareproperties/", line 599, in reload_sourceslist
File "/usr/lib/python3/dist-packages/aptsources/", line 89, in get_sources (, self.codename))
aptsources.distro.NoDistroTemplateException: Error: could not find a 
distribution template for Raspbian/stretch

Instead, I tried this other way, explicitly setting up the repository in a new docker.list sources file.

$echo "deb [arch=armhf] $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list

With the new source repository in the list, update the package list again.

$ sudo apt-get update

Now we are in a position to install docker-ce from the repository. In this particular case, I ran into an issue with the latest version, documented on GitHub; basically, the suggestion is to go back to 18.06.1, which we can find with the command below, as per the official docker instructions on installing a specific version.

$ apt-cache madison docker-ce
docker-ce | 5:18.09.0~3-0~raspbian-stretch | stretch/stable armhf Packages
docker-ce | 18.06.2~ce~3-0~raspbian | stretch/stable armhf Packages
docker-ce | 18.06.1~ce~3-0~raspbian | stretch/stable armhf Packages
docker-ce | 18.06.0~ce~3-0~raspbian | stretch/stable armhf Packages

As of this writing, the version I tried is 18.06.2~ce~3-0~raspbian, which can be installed using the apt-get command.

$ sudo apt-get install docker-ce=18.06.2~ce~3-0~raspbian

Also notice that with this version, there's no need to install docker-ce-cli.

Test that everything is running as expected.

$ sudo docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 18.06.2-ce
 --- more data --- 

Run the hello world. Usually we'd use the command sudo docker run hello-world, but that image only works on "normal" architectures; in this case we're using ARMv6/7, so to make it compatible with all of them, I ran the image arm32v5/hello-world.

$ sudo docker run arm32v5/hello-world
Unable to find image 'arm32v5/hello-world:latest' locally
latest: Pulling from arm32v5/hello-world
590e13f69e4a: Pull complete 
Digest: sha256:8a6a26a494c03e91381161abe924a39a2ff72de13189edcc2ed1695e6be12a5f
Status: Downloaded newer image for arm32v5/hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:

For more examples and ideas, visit:

And that's all, docker installed up and running on a raspberry pi.


If you'd like to get rid of the sudo command every time you use docker, run the following command, found in the official guide as a post-installation step. You don't need to create the group, as it's already created by the docker installation. It adds your user to the docker group.

$ sudo usermod -aG docker $USER

You need to log out and back in for this change to take effect. After that you'll be able to run all docker commands as a regular user.
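A quick way to check whether the group change is active in your current session, without attempting a docker command, is to look at the current user's groups. This is my own sketch; the output naturally depends on your setup:

```shell
# Check whether the current session already has the docker group
if id -nG | grep -qw docker; then
  MSG="docker group active - sudo no longer needed"
else
  MSG="log out and back in first (or start a new shell with: newgrp docker)"
fi
echo "$MSG"
```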

Thursday, 7 February 2019

Querying Aurora serverless database remotely using Lambda - part 3

This post is part of a series

In the previous part, we set up the Aurora MySQL cluster. At this point we can start creating the client code to allow querying.

The Lambda code

In this example I'll be using .NET Core 2.1 as Lambda runtime and C# as programming language. The code is very simple and should be easy to port to your favourite runtime/language.

Lambda Input

The input to my function consists of two main pieces of information: database connection information and the query to execute.

    public class ConnectionInfo
    {
        public string DbUser { get; set; }
        public string DbPassword { get; set; }
        public string DbName { get; set; }
        public string DbHost { get; set; }
        public int DbPort { get; set; }
    }

    public class LambdaInput
    {
        public ConnectionInfo Connection { get; set; }

        public string QueryText { get; set; }
    }
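For illustration, an input event matching these classes might look like this (all values are made up):

```json
{
  "Connection": {
    "DbUser": "admin",
    "DbPassword": "Secret.1234",
    "DbName": "testdb",
    "DbHost": "mycluster.cluster-xxxx.eu-west-1.rds.amazonaws.com",
    "DbPort": 3306
  },
  "QueryText": "SELECT * FROM mytable"
}
```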

Lambda Code

The function itself returns a list of dictionaries, where each item of the list represents a "record" from the query result. These are in key/value form, where the key is the "field" name and the value is what comes from the query.

    public List<Dictionary<string, object>> RunQueryHandler(LambdaInput input, ILambdaContext context)
    {
        var cxnString = GetCxnString(input.Connection);
        var query = input.QueryText;

        var result = new List<Dictionary<string, object>>();
        using (var conn = new MySql.Data.MySqlClient.MySqlConnection(cxnString))
        {
            conn.Open();
            var cmd = GetCommand(conn, query);
            var reader = cmd.ExecuteReader();

            // Capture the column names once
            var columns = new List<string>();
            for (int i = 0; i < reader.FieldCount; i++)
                columns.Add(reader.GetName(i));

            while (reader.Read())
            {
                var record = new Dictionary<string, object>();
                foreach (var column in columns)
                    record.Add(column, reader[column]);
                result.Add(record);
            }
        }
        return result;
    }

Support methods

Here is the code for the missing methods, GetCxnString and GetCommand; nothing really complicated.

    private static readonly string cxnStringFormat = "server={0};uid={1};pwd={2};database={3};Connection Timeout=60";

    private string GetCxnString(ConnectionInfo cxn)
    {
        return string.Format(cxnStringFormat, cxn.DbHost, cxn.DbUser, cxn.DbPassword, cxn.DbName);
    }

    private static MySqlCommand GetCommand(MySqlConnection conn, string query)
    {
        var cmd = conn.CreateCommand();
        cmd.CommandText = query;
        cmd.CommandType = CommandType.Text;
        return cmd;
    }

Project file

Before compiling and packaging the code, we need a project file. Assuming you don't have one already, this is what it looks like in order to run in the AWS Lambda environment.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <AssemblyName>aurora.lambda</AssemblyName>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Amazon.Lambda.Core" Version="1.0.0" />
    <PackageReference Include="Amazon.Lambda.Serialization.Json" Version="1.3.0" />
    <PackageReference Include="MySql.Data" Version="8.0.13" />
    <PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="2.2.0" />
  </ItemGroup>

</Project>

Preparing Lambda package

Assuming you have both the code and the csproj file in the current directory, we just run the dotnet lambda package command as per below, where -c sets the configuration to release, -f sets the target framework to netcoreapp2.1 and -o sets the output zip file name.

$ dotnet lambda package -c release -f netcoreapp2.1 -o
Amazon Lambda Tools for .NET Core applications (2.2.0)
Project Home:,

Executing publish command
Deleted previous publish folder
... invoking 'dotnet publish', working folder '/home/abel/Downloads/aurora_cluster_sample/bin/release/netcoreapp2.1/publish'

( ... ) --- removed code for brevity ---

... zipping:   adding: aurora.lambda.deps.json (deflated 76%)
Created publish archive (/home/abel/Downloads/aurora_cluster_sample/
Lambda project successfully packaged: /home/abel/Downloads/aurora_cluster_sample/

Next, we upload the resulting zip file to an S3 bucket of our choice. In this example I'm using a bucket named abelperez-temp and uploading the zip file to a folder named aurora-lambda to keep some form of organisation in the bucket.

$ aws s3 cp s3://abelperez-temp/aurora-lambda/
upload: ./ to s3://abelperez-temp/aurora-lambda/

Lambda stack

To create the Lambda function, I've put together a CloudFormation template that includes:

  • AWS::EC2::SecurityGroup contains an outbound traffic rule to allow port 3306
  • AWS::IAM::Role contains an IAM role that allows the Lambda function to write to CloudWatch Logs and manage ENIs
  • AWS::Lambda::Function contains the function definition

Here is the full template. The required parameters are VpcId, SubnetIds and LambdaS3Bucket, which we should get from the previous stacks' outputs. The template outputs the function's full name, which we'll need to be able to invoke it later.

Pay special attention to the Lambda function definition: the property Handler in the .NET runtime takes the form AssemblyName::Namespace.ClassName::MethodName, and the property Code contains the S3 location of the zip file we uploaded earlier.
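The three parts of that handler string convention can be seen by splitting on the :: separator (just an illustration, not part of the deployed code):

```csharp
using System;

class HandlerStringDemo
{
    static void Main()
    {
        // AssemblyName::Namespace.ClassName::MethodName
        var handler = "aurora.lambda::project.lambda.Function::RunQueryHandler";
        var parts = handler.Split(new[] { "::" }, StringSplitOptions.None);
        Console.WriteLine(parts[0]); // assembly: aurora.lambda
        Console.WriteLine(parts[1]); // type:     project.lambda.Function
        Console.WriteLine(parts[2]); // method:   RunQueryHandler
    }
}
```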

Description: Template to create a lambda function

Parameters:
  VpcId:
    Type: String
  DbClusterPort:
    Type: Number
    Default: 3306
  LambdaS3Bucket:
    Type: String
  SubnetIds:
    Type: CommaDelimitedList

Resources:
  LambdaSg:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow outbound traffic to MySQL host
      VpcId:
        Ref: VpcId
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: !Ref DbClusterPort
          ToPort: !Ref DbClusterPort
          CidrIp: 0.0.0.0/0

  AWSLambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: PermitLambda
          PolicyDocument:
            Version: 2012-10-17
            Statement:
            - Effect: Allow
              Action:
              - logs:CreateLogGroup
              - logs:CreateLogStream
              - logs:PutLogEvents
              - ec2:CreateNetworkInterface
              - ec2:DescribeNetworkInterfaces
              - ec2:DeleteNetworkInterface
              Resource:
                - "arn:aws:logs:*:*:*"
                - "*"

  HelloLambda:
    Type: AWS::Lambda::Function
    Properties:
      Handler: aurora.lambda::project.lambda.Function::RunQueryHandler
      Role: !GetAtt AWSLambdaExecutionRole.Arn
      Code:
        S3Bucket: !Ref LambdaS3Bucket
        S3Key: aurora-lambda/
      Runtime: dotnetcore2.1
      Timeout: 30
      VpcConfig:
        SecurityGroupIds:
          - !Ref LambdaSg
        SubnetIds: !Ref SubnetIds

Outputs:
  LambdaFunction:
    Value: !Ref HelloLambda

To deploy this stack we use the following command where we pass the parameters specific to our VPC (VpcId and SubnetIds) as well as the S3 bucket name.

$ aws cloudformation deploy --stack-name fn-stack \
--template-file aurora_lambda_template.yml \
--parameter-overrides VpcId=vpc-0b442e5d98841996c SubnetIds=subnet-013d0bbb3eca284a2,subnet-00c67cfed3ab0a791 LambdaS3Bucket=abelperez-temp \
--capabilities CAPABILITY_IAM

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - fn-stack

Let's get the outputs as we'll need this information later. We have the Lambda function full name.

$ aws cloudformation describe-stacks --stack-name fn-stack --query Stacks[*].Outputs
[
    [
        {
            "OutputKey": "LambdaFunction",
            "OutputValue": "fn-stack-HelloLambda-C32KDMYICP5W"
        }
    ]
]

Invoking Lambda function

Now that we have deployed the function and we know its full name, we can invoke it using the dotnet lambda invoke-function command. Part of the job is to prepare the payload, which is a JSON input corresponding to the LambdaInput type defined above.

    "Connection": {
        "DbUser": "master", 
        "DbPassword": "Aurora.2019", 
        "DbName": "dbtest", 
        "DbHost": "", 
        "DbPort": 3306
    "QueryText":"show databases;"

Here is the command to invoke the Lambda function, including the payload parameter encoded to escape the quotes and all in a single line. There are better ways to do this, but for the sake of this demonstration, it's good enough.

$ dotnet lambda invoke-function \
--function-name fn-stack-HelloLambda-C32KDMYICP5W \
--payload "{ \"Connection\": {\"DbUser\": \"master\", \"DbPassword\": \"Aurora.2019\", \"DbName\": \"dbtest\", \"DbHost\": \"\", \"DbPort\": 3306}, \"QueryText\":\"show databases;\" }" \
--region eu-west-1

Amazon Lambda Tools for .NET Core applications (2.2.0)
Project Home:,


Log Tail:
START RequestId: 595944b5-73bb-4536-be92-a42652125ba8 Version: $LATEST
END RequestId: 595944b5-73bb-4536-be92-a42652125ba8
REPORT RequestId: 595944b5-73bb-4536-be92-a42652125ba8  Duration: 11188.62 ms   Billed Duration: 11200 ms       Memory Size: 128 MB     Max Memory Used: 37 MB

Now we can see the output in the Payload section. And that's how we can remotely query any Aurora Serverless cluster without having to set up an EC2 instance. This could be extended to handle other SQL operations such as CREATE, INSERT, DELETE, etc.

Monday, 4 February 2019

Querying Aurora serverless database remotely using Lambda - part 2

This post is part of a series

In the previous part, we've set up the base layer to deploy our resources. At this point we can create the database cluster.

Aurora DB Cluster

Assuming we have our VPC ready with at least two subnets, to comply with high availability best practices, let's create our cluster. I've put together a CloudFormation template that includes:

  • AWS::EC2::SecurityGroup contains an inbound traffic rule to allow port 3306
  • AWS::RDS::DBSubnetGroup contains a group of subnets to deploy the cluster in
  • AWS::RDS::DBCluster contains all the parameters to create the database cluster

Here is the full template; the only required parameters are VpcId and SubnetIds, but feel free to override any of the database cluster parameters such as the database name, user name, password, etc. The template outputs the values corresponding to the newly created resources: the database cluster DNS endpoint, the port and the security group ID.

Description: Template to create a serverless aurora mysql cluster

Parameters:
  DbClusterDatabaseName:
    Type: String
    Default: dbtest
  DbClusterIdentifier:
    Type: String
    Default: serverless-mysql-aurora
  DbClusterParameterGroup:
    Type: String
    Default: default.aurora5.6
  DbClusterMasterUsername:
    Type: String
    Default: master
  DbClusterMasterPassword:
    Type: String
    Default: Aurora.2019
  DbClusterPort:
    Type: Number
    Default: 3306
  VpcId:
    Type: String
  SubnetIds:
    Type: CommaDelimitedList

Resources:
  DbClusterSg:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow MySQL port to client host
      VpcId:
        Ref: VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: !Ref DbClusterPort
          ToPort: !Ref DbClusterPort
          CidrIp: 0.0.0.0/0

  DbSubnetGroup:
    Type: "AWS::RDS::DBSubnetGroup"
    Properties:
      DBSubnetGroupDescription: "aurora subnets"
      SubnetIds: !Ref SubnetIds

  AuroraMysqlCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      DatabaseName:
        Ref: DbClusterDatabaseName
      DBClusterIdentifier:
        Ref: DbClusterIdentifier
      DBClusterParameterGroupName:
        Ref: DbClusterParameterGroup
      DBSubnetGroupName:
        Ref: DbSubnetGroup
      Engine: aurora
      EngineMode: serverless
      MasterUsername:
        Ref: DbClusterMasterUsername
      MasterUserPassword:
        Ref: DbClusterMasterPassword
      ScalingConfiguration:
        AutoPause: true
        MinCapacity: 2
        MaxCapacity: 4
        SecondsUntilAutoPause: 1800
      VpcSecurityGroupIds:
        - !Ref DbClusterSg

Outputs:
  DbClusterEndpointAddress:
    Value: !GetAtt AuroraMysqlCluster.Endpoint.Address
  DbClusterEndpointPort:
    Value: !GetAtt AuroraMysqlCluster.Endpoint.Port
  DbClusterSgId:
    Value: !Ref DbClusterSg

To deploy this stack we use the following command where we pass the parameters specific to our VPC (VpcId and SubnetIds).

$ aws cloudformation deploy --stack-name db-stack \
--template-file aurora_cluster_template.yml \
--parameter-overrides VpcId=vpc-0b442e5d98841996c SubnetIds=subnet-013d0bbb3eca284a2,subnet-00c67cfed3ab0a791

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - db-stack

Let's get the outputs as we'll need this information later. We have the cluster endpoint DNS name and the port as per our definition.

$ aws cloudformation describe-stacks --stack-name db-stack --query Stacks[*].Outputs
[
    [
        {
            "OutputKey": "DbClusterEndpointAddress",
            "OutputValue": ""
        },
        {
            "OutputKey": "DbClusterSgId",
            "OutputValue": "sg-072bbf2078caa0f46"
        },
        {
            "OutputKey": "DbClusterEndpointPort",
            "OutputValue": "3306"
        }
    ]
]

In the next part, we'll create the Lambda function to query this database remotely.