Saturday 13 July 2019

Run ASP.NET Core 2.2 on a Raspberry Pi Zero

The Raspberry Pi Zero (and Zero W) is a cool and cheap piece of technology that can run software anywhere. Being primarily a .NET developer, I wanted to try running the current version of ASP.NET Core on it, which as of now is 2.2.

However, the first problem is that even though .NET Core supports ARM CPUs, it does not support ARM32v6, only v7 and above. After digging a bit, I found out that Mono does support that CPU and, on top of that, it’s binary compatible with .NET Framework 4.7.

In this post, I’ll summarise several hours of trial and error to get it working. If developing on Windows, targeting both netcoreapp2.2 and net472 is easier since chances are we already have everything installed. On Linux, however, it’s not that easy, and that’s where Mono comes in to help: we need its reference assemblies to build the net472 version.

Let’s check the tools we’ll use:

$ dotnet --version
2.2.202
$ docker --version
Docker version 18.09.0, build 4d60db4

Create a new dotnet core MVC web application to get the starting template.

$ dotnet new mvc -o dotnet.mvc

Change TargetFramework to TargetFrameworks if we actually want to target both; we could target only net472 if desired. Also, since the Microsoft.AspNetCore.App and Microsoft.AspNetCore.Razor.Design metapackages aren’t available on .NET Framework, we reference the NuGet packages we’ll use directly instead. Here is an example with the ones the default template uses.

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFrameworks>net472;netcoreapp2.2</TargetFrameworks>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.Hosting.WindowsServices" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.StaticFiles" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.HttpsPolicy" Version="2.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.CookiePolicy" Version="2.2.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="2.2.0" />
    <PackageReference Include="Microsoft.Extensions.Logging.EventLog" Version="2.2.0" />
    <PackageReference Include="Microsoft.Extensions.Options" Version="2.2.0" />
  </ItemGroup>

</Project>

When targeting net472, we need to use Mono’s reference assemblies since they’re not part of .NET Core. To do that, we set the environment variable FrameworkPathOverride to the appropriate path, typically something like /usr/lib/mono/4.7.2-api/.

export FrameworkPathOverride=/usr/lib/mono/4.7.2-api/

Then proceed as usual with the dotnet CLI (a combined example follows this list):

  • dotnet restore
  • dotnet build
  • dotnet run
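
Putting it together, a typical sequence on a Linux box with Mono installed might look like this (the reference assembly path is the typical one mentioned above; adjust it to your install):

# point msbuild at Mono's .NET Framework reference assemblies (path may vary)
export FrameworkPathOverride=/usr/lib/mono/4.7.2-api/

# restore and build both target frameworks
dotnet restore
dotnet build

# run the .NET Core flavour locally (the net472 one is for the Pi image)
dotnet run -f netcoreapp2.2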

To make the build more consistent across platforms, we’ll create a Dockerfile; this also lets us build locally without having to install Mono in our development environments. It’s a multi-stage Dockerfile that builds the application and creates the runtime image.

More information on how to create a Dockerfile for .NET Core applications: https://docs.docker.com/engine/examples/dotnetcore/

FROM pomma89/dotnet-mono:dotnet-2-mono-5-sdk AS build-env
WORKDIR /app

ENV FrameworkPathOverride /usr/lib/mono/4.7.2-api/

# Copy csproj and restore as distinct layers
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out -f net472

# Build runtime image for ARM v5 using mono
FROM arm32v5/mono:5.20
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE 5000
ENV ASPNETCORE_URLS http://*:5000
ENTRYPOINT [ "mono", "dotnet.mvc.exe" ]

Since we are building with Mono, the resulting executable is, in this case, the dotnet.mvc.exe file. The environment variable ASPNETCORE_URLS is required in order to listen on any network interface; by default the app only listens on localhost, which is in fact the container itself and not our host. Combined with EXPOSE 5000, this makes it possible to access the application from outside the container.

The secret ingredient to make it work on a Raspberry Pi Zero (ARM32v6) is the line “FROM arm32v5/mono:5.20”, which takes a base Docker image built for a compatible CPU architecture.

To run this application locally we use the traditional dotnet run, but since we’ve specified two target frameworks, the -f parameter is required to specify which one we want. In this case, chances are we’re using netcoreapp2.2 locally and net472 to build and run on the Raspberry Pi.

$ dotnet run -f netcoreapp2.2

Once we’re happy the application works as expected locally, we can build the Docker image locally to verify the whole Docker build process works before deploying to the device. To test this way, we have to modify the second FROM line and remove the “arm32v5/” prefix from the image name, taking the PC version of Mono instead.
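
As a convenience, one possible way to flip that FROM line back and forth without editing the Dockerfile by hand is a pair of sed one-liners (GNU sed assumed; the image names match the Dockerfile above):

# switch to the PC image for local testing
$ sed -i 's|^FROM arm32v5/mono|FROM mono|' Dockerfile
# switch back to the ARM image before building for the Pi
$ sed -i 's|^FROM mono|FROM arm32v5/mono|' Dockerfile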

The following sequence shows the docker commands.

Build the image locally

$ docker build -t test .

Either run in interactive mode (-it) or in detached mode (-d):

$ docker run --rm -it -p 3000:5000 test
$ docker run --rm -d -p 3000:5000 test

Once the container is running, we can browse to http://localhost:3000/ and verify the application is up.
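
If you prefer the command line over the browser, a quick curl against the mapped port does the same check (the exact response headers will vary):

$ curl -I http://localhost:3000/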

Now we can go back and modify the second FROM line by putting “arm32v5/” where it was.

# build the image and tag it to my docker hub registry 
$ docker build -t abelperezok/mono-mvc:arm32v5-test .
# don’t forget to log in to docker hub
$ docker login
# push this image to docker hub registry 
$ docker push abelperezok/mono-mvc:arm32v5-test

Once it’s uploaded to the registry, we can connect to the Raspberry Pi and run the following docker run command.

# pull and run the container from this image 
$ docker run --rm -d -p 3000:5000 abelperezok/mono-mvc:arm32v5-test
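
From another machine on the same network we can then confirm the site is reachable; replace raspberrypi.local with your Pi’s hostname or IP address (an assumption, yours may differ):

$ curl -I http://raspberrypi.local:3000/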

Helpful links

https://www.c-sharpcorner.com/article/running-asp-net-core-2-0-via-mono/

https://stackoverflow.com/questions/44770702/build-nuget-package-on-linux-that-targets-net-framework

https://www.mono-project.com/download/stable/#download-lin-debian

https://andrewlock.net/building-net-framework-asp-net-core-apps-on-linux-using-mono-and-the-net-cli/

https://hub.docker.com/r/pomma89/dotnet-mono/

Saturday 2 March 2019

How to create a dynamic DNS with AWS Route 53

Have you ever wanted to host a pet project or even a small website on your own home network? If so, you must have stumbled across the DNS resolution issue: since we depend on our ISP for our "real" IP address, there's no guarantee that the IP we see today will still be the same tomorrow.

You could update your domain's DNS records every time you detect the IP has changed, but obviously that's tedious and error prone. That's where dynamic DNS (DDNS) comes in; services like No-IP, Duck DNS, etc. can be helpful.

In this post I'll go through the steps involved when it comes to setting up your own DDNS service when you own a domain that has been registered in AWS Route 53.

Prerequisites

Before running the commands I suggest here, we need to set up a couple of things. Let's start by installing the required packages if you haven't already installed them.

  • sudo apt-get install awscli
  • sudo apt-get install jq

Configure AWS credentials

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-west-1
Default output format [None]: json

A hosted zone in AWS Route 53

$ aws route53 list-hosted-zones
{
    "HostedZones": [
        {
            "ResourceRecordSetCount": 5, 
            "CallerReference": " ... ", 
            "Config": {
                "Comment": "HostedZone created by Route53 Registrar", 
                "PrivateZone": false
            }, 
            "Id": "/hostedzone/Z2TLEXAMPLEZONE", 
            "Name": "abelperez.info."
        }
    ]
}

What's the plan?

The scenario I'm covering here is probably one of the most common. I want to create a new subdomain that points to my external IP and update that record as the external IP changes. To achieve that, we'll follow these steps:

  • Find out the external IP.
  • Get the desired hosted zone.
  • Create the A record (or update it if it already exists).
  • Set it up to run regularly (cron job).

Script step by step

Let's start like any script, with the shebang line. Then the input variables: in this case we define the domain name and the public record. Notice the "." at the end of the public record; this is required by Route 53.

#!/bin/bash
DOMAIN_NAME=abelperez.info
PUBLIC_RECORD=rpi.$DOMAIN_NAME.

Find the external IP. There are many ways to obtain this value, but since we're using AWS, let's get it from the checkip endpoint. For debugging purposes we echo the IP found.

IP=$(curl -s http://checkip.amazonaws.com)
echo Found IP=$IP

Determine the hosted zone id. This step is optional if you want to use a hard-coded zone id copied from the AWS Route 53 console. In this case I've invoked the list-hosted-zones-by-name command, which gives the hosted zone information for a specific domain; the format is as shown above in the prerequisites section. To extract the exact id I used a combination of the jq and sed commands.

R53_HOSTED_ZONE=`aws route53 list-hosted-zones-by-name \
--dns-name $DOMAIN_NAME \
--query HostedZones \
| jq -r ".[] | select(.Name == \"$DOMAIN_NAME.\").Id" \
| sed 's/\/hostedzone\///'`
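
For debugging, it's worth echoing the value before moving on; with the example hosted zone from the prerequisites section, it should print something like the bare zone id:

echo Found hosted zone id=$R53_HOSTED_ZONE
# e.g. Found hosted zone id=Z2TLEXAMPLEZONE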

Now that we have all the required information, let's prepare the A record JSON input that will be used by the aws route53 change-resource-record-sets command. The "UPSERT" action creates or updates the record accordingly; otherwise we'd need to manually check whether it exists before updating it. Again, for debugging purposes, we echo the final JSON.

read -r -d '' R53_ARECORD_JSON << EOM
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "$PUBLIC_RECORD",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [
          {
            "Value": "$IP"
          }
        ]
      }
    }
  ]
}
EOM

echo About to execute change
echo "$R53_ARECORD_JSON" 

With the input ready, we invoke the Route 53 command to create/update the A record. This command returns immediately, but the operation takes a few seconds to complete. If we want to know when it's completed, we need to capture the change id returned by the command; in this case, it's stored in the R53_ARECORD_ID variable. This is optional, as is the next step.

R53_ARECORD_ID=`aws route53 change-resource-record-sets \
--hosted-zone-id $R53_HOSTED_ZONE \
--change-batch "$R53_ARECORD_JSON" \
--query ChangeInfo.Id \
--output text`

echo Waiting for the change to update.

At this point, the request to create/update the A record is in progress and we could finish the script right here. However, I'd like to get a final confirmation that the operation has completed. To do that, we can use the wait command, providing the change id from the previous request.

aws route53 wait resource-record-sets-changed --id $R53_ARECORD_ID

echo Done.

And now it has actually completed. Save the script to a file, perhaps named update-dns.sh; it will need execute permission.

$ chmod u+x ./update-dns.sh
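
Before scheduling it, it's worth running the script once by hand to confirm everything is wired up. Based on the echo statements above, the output should look roughly like this (your IP, JSON and timings will obviously differ):

$ ./update-dns.sh
Found IP=203.0.113.25
About to execute change
{ ... the A record JSON from above ... }
Waiting for the change to update.
Done.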

Set up cron job

In this particular instance I want this script to run on one of my Raspberry Pis, so I copied the script file to the pi user's home directory (/home/pi/).

pi@raspberrypi:~ $ ls
update-dns.sh

Now we'll set up a user cron job. We do that by running crontab -u followed by the user the job should run under; this job doesn't require any system-wide privilege, so it can run as the regular user, pi. The -e flag opens the file for editing.

pi@raspberrypi:~ $ crontab -u pi -e

All we need to do is append the following line to the file content you are prompted with. The first two numbers correspond to the minute and hour to run. For testing purposes I set it to just after the current time at the moment of testing. The script output is appended/redirected to a text file so we can review it afterwards if desired.

23 22 * * * /home/pi/update-dns.sh >> /home/pi/cron-update-dns.txt
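
Once we're happy it works, a more realistic schedule is to let it run periodically rather than at a single fixed time; for example, every 30 minutes:

*/30 * * * * /home/pi/update-dns.sh >> /home/pi/cron-update-dns.txt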

See it in action

Once we've saved the crontab file, and the time for the cron job approaches, we can watch it run with the tail -f command.

pi@raspberrypi:~ $ tail -f cron-update-dns.txt

Finally, don't forget to update the port forwarding section on your home router so the open port directs traffic to the right device, in my case, that particular Raspberry Pi.

Sunday 17 February 2019

How to install Docker on Raspbian Stretch

As part of one of my recent experiments I wanted to install Docker on my Raspberry Pi. The procedure nowadays seems significantly better than a couple of years ago when I first tried. However, there are always some little details that can be a bit frustrating when following the steps.

Although the process to install Docker is well described on their website, it's focused on the Debian distribution. We all know that Raspbian is based on Debian, and therefore most of the instructions for Debian apply without much trouble.

That said, I will detail the exact steps I followed to get Docker installed on my Raspberry Pi. This procedure was tested on the 3B, Zero and Zero W.

First, update the package list, if you haven't done so yet after setting up your Raspberry Pi.

$ sudo apt-get update

Install packages to allow apt to use a repository over HTTPS.

$ sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common

Add Docker’s official GPG key. Note that here the URL points to the raspbian directory and not to debian as in the original instructions.

curl -fsSL https://download.docker.com/linux/raspbian/gpg | sudo apt-key add -
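
As with the Debian instructions, we can optionally verify we got the right key by checking its fingerprint (the 0EBFCD88 id below is the one the official guide documents; double-check it against docs.docker.com):

$ sudo apt-key fingerprint 0EBFCD88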

The next recommended step didn't work for me.

sudo add-apt-repository \
   "deb [arch=armhf] https://download.docker.com/linux/raspbian \
   $(lsb_release -cs) \
   stable"

It failed with an error similar to this:

Traceback (most recent call last):
  File "/usr/bin/add-apt-repository", line 95, in <module>
    sp = SoftwareProperties(options=options)
  File "/usr/lib/python3/dist-packages/softwareproperties/SoftwareProperties.py", line 109, in __init__
    self.reload_sourceslist()
  File "/usr/lib/python3/dist-packages/softwareproperties/SoftwareProperties.py", line 599, in reload_sourceslist
    self.distro.get_sources(self.sourceslist)
  File "/usr/lib/python3/dist-packages/aptsources/distro.py", line 89, in get_sources
    (self.id, self.codename))
aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template for Raspbian/stretch

Instead, I tried another way, explicitly adding the repository to a new docker.list sources file.

$ echo "deb [arch=armhf] https://download.docker.com/linux/raspbian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list

With the new source repository in the list, update the package list again.

$ sudo apt-get update

Now we are in a position to install docker-ce from the repository. In this particular case, I ran into an issue with the latest version, documented on GitHub; basically, the suggestion there is to go back to 18.06.1, which we can get using the command below, as per the official Docker instructions on installing a specific version.

$ apt-cache madison docker-ce
docker-ce | 5:18.09.0~3-0~raspbian-stretch | https://download.docker.com/linux/raspbian stretch/stable armhf Packages
docker-ce | 18.06.2~ce~3-0~raspbian | https://download.docker.com/linux/raspbian stretch/stable armhf Packages
docker-ce | 18.06.1~ce~3-0~raspbian | https://download.docker.com/linux/raspbian stretch/stable armhf Packages
docker-ce | 18.06.0~ce~3-0~raspbian | https://download.docker.com/linux/raspbian stretch/stable armhf Packages

As of this writing, the version I tried is 18.06.2~ce~3-0~raspbian, which can be installed using the apt-get command.

$ sudo apt-get install docker-ce=18.06.2~ce~3-0~raspbian containerd.io
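
Optionally, since we deliberately installed an older version, we can ask apt to hold the package so a routine apt-get upgrade doesn't move us back onto the problematic release:

$ sudo apt-mark hold docker-ce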

Also notice that with this version, there's no need to install docker-ce-cli separately.

Test that everything is running as expected.

$ sudo docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 18.06.2-ce
 --- more data --- 

Run the hello world. Usually the suggested command is sudo docker run hello-world, but that image only works on "normal" architectures. In this case we're using ARMv6/v7, so to make it compatible with both, I ran the image arm32v5/hello-world instead.

$ sudo docker run arm32v5/hello-world
Unable to find image 'arm32v5/hello-world:latest' locally
latest: Pulling from arm32v5/hello-world
590e13f69e4a: Pull complete 
Digest: sha256:8a6a26a494c03e91381161abe924a39a2ff72de13189edcc2ed1695e6be12a5f
Status: Downloaded newer image for arm32v5/hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (arm32v5)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

And that's all: Docker installed, up and running on a Raspberry Pi.

UPDATE:

If you'd like to get rid of the sudo command every time you use docker, you can run the following command, found in the official guide as a post-installation step. You don't need to create the group, as it's already created by the Docker installation; the command simply adds your user to the docker group.

$ sudo usermod -aG docker $USER

You need to log out and back in for this change to take effect. After that you'll be able to run all docker commands as a regular user.
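
Once you've logged back in, running any docker command without sudo is a quick way to confirm the group change took effect, for example:

$ docker info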

Thursday 7 February 2019

Querying Aurora serverless database remotely using Lambda - part 3

This post is part of a series

In the previous part, we set up the Aurora MySQL cluster. At this point we can start creating the client code to allow querying.

The Lambda code

In this example I'll be using .NET Core 2.1 as Lambda runtime and C# as programming language. The code is very simple and should be easy to port to your favourite runtime/language.

Lambda Input

The input to my function consists of two main pieces of information: database connection information and the query to execute.

    public class ConnectionInfo
    {
        public string DbUser { get; set; }
        public string DbPassword { get; set; }
        public string DbName { get; set; }
        public string DbHost { get; set; }
        public int DbPort { get; set; }
    }

    public class LambdaInput
    {
        public ConnectionInfo Connection { get; set; }

        public string QueryText { get; set; }
    }

Lambda Code

The function itself returns a list of dictionaries where each item of the list represents a "record" from the query result. These are in key/value form, where the key is the "field" name and the value is what comes from the query.

    public List<Dictionary<string, object>> RunQueryHandler(LambdaInput input, ILambdaContext context)
    {
        var cxnString = GetCxnString(input.Connection);
        var query = input.QueryText;

        var result = new List<Dictionary<string, object>>();
        using (var conn = new MySql.Data.MySqlClient.MySqlConnection(cxnString))
        {
            var cmd = GetCommand(conn, query);
            var reader = cmd.ExecuteReader();

            var columns = new List<string>();

            for (int i = 0; i < reader.FieldCount; i++)
            {
                columns.Add(reader.GetName(i));
            }

            while (reader.Read())
            {
                var record = new Dictionary<string, object>();
                foreach (var column in columns)
                {
                    record.Add(column, reader[column]);
                }
                result.Add(record);
            }
        }
        return result;
    }

Support methods

Here is the code of the missing methods, GetCxnString and GetCommand; they're not really complicated.

    private static readonly string cxnStringFormat = "server={0};uid={1};pwd={2};database={3};Connection Timeout=60";

    private string GetCxnString(ConnectionInfo cxn)
    {
        return string.Format(cxnStringFormat, cxn.DbHost, cxn.DbUser, cxn.DbPassword, cxn.DbName);
    }

    private static MySqlCommand GetCommand(MySqlConnection conn, string query)
    {
        conn.Open();
        var cmd = conn.CreateCommand();
        cmd.CommandText = query;
        cmd.CommandType = CommandType.Text;
        return cmd;
    }

Project file

Before compiling and packaging the code we need a project file. Assuming you don't have one already, this is what it looks like in order to run in the AWS Lambda environment.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Amazon.Lambda.Core" Version="1.0.0" />
    <PackageReference Include="Amazon.Lambda.Serialization.Json" Version="1.3.0" />
    <PackageReference Include="MySql.Data" Version="8.0.13" />
    <PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Amazon.Lambda.Tools" Version="2.2.0" />
  </ItemGroup>

</Project>

Preparing Lambda package

Assuming you have both the code and the csproj file in the current directory, we just run the dotnet lambda package command as per below, where -c sets the configuration to Release, -f sets the target framework to netcoreapp2.1 and -o sets the output zip file name.

$ dotnet lambda package -c release -f netcoreapp2.1 -o aurora-lambda.zip
Amazon Lambda Tools for .NET Core applications (2.2.0)
Project Home: https://github.com/aws/aws-extensions-for-dotnet-cli, https://github.com/aws/aws-lambda-dotnet

Executing publish command
Deleted previous publish folder
... invoking 'dotnet publish', working folder '/home/abel/Downloads/aurora_cluster_sample/bin/release/netcoreapp2.1/publish'

( ... ) --- removed code for brevity ---

... zipping:   adding: aurora.lambda.deps.json (deflated 76%)
Created publish archive (/home/abel/Downloads/aurora_cluster_sample/aurora-lambda.zip).
Lambda project successfully packaged: /home/abel/Downloads/aurora_cluster_sample/aurora-lambda.zip

Next, we upload the resulting zip file to an S3 bucket of our choice. In this example I'm using a bucket named abelperez-temp and I'm uploading the zip file to a folder named aurora-lambda so I keep some form of organisation in my file directory.

$ aws s3 cp aurora-lambda.zip s3://abelperez-temp/aurora-lambda/
upload: ./aurora-lambda.zip to s3://abelperez-temp/aurora-lambda/aurora-lambda.zip

Lambda stack

To create the Lambda function, I've put together a CloudFormation template that includes:

  • AWS::EC2::SecurityGroup contains outbound traffic rule to allow port 3306
  • AWS::IAM::Role contains an IAM role to allow the Lambda function to write to CloudWatch Logs and interact with ENIs
  • AWS::Lambda::Function contains the function definition

Here is the full template. The required parameters are VpcId, SubnetIds and LambdaS3Bucket, which we should get from the previous stacks' outputs. The template outputs the function's full name, which we'll need in order to invoke it later.

Pay special attention to the Lambda function definition: the Handler property, which in the .NET runtime takes the form AssemblyName::Namespace.ClassName::MethodName, and the Code property, containing the S3 location of the zip file we uploaded earlier.

Description: Template to create a lambda function 

Parameters: 
  LambdaS3Bucket:
    Type: String
  DbClusterPort: 
    Type: Number
    Default: 3306
  VpcId: 
    Type: String
  SubnetIds: 
    Type: CommaDelimitedList

Resources:
  LambdaSg:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow outbound traffic to MySQL host
      VpcId:
        Ref: VpcId
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: !Ref DbClusterPort
          ToPort: !Ref DbClusterPort
          CidrIp: 0.0.0.0/0

  AWSLambdaExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: PermitLambda
          PolicyDocument:
            Version: 2012-10-17
            Statement:
            - Effect: Allow
              Action:
              - logs:CreateLogGroup
              - logs:CreateLogStream
              - logs:PutLogEvents
              - ec2:CreateNetworkInterface
              - ec2:DescribeNetworkInterfaces
              - ec2:DeleteNetworkInterface
              Resource: 
                - "arn:aws:logs:*:*:*"
                - "*"
  HelloLambda:
    Type: AWS::Lambda::Function
    Properties:
      Handler: aurora.lambda::project.lambda.Function::RunQueryHandler
      Role: !GetAtt AWSLambdaExecutionRole.Arn
      Code:
        S3Bucket: !Ref LambdaS3Bucket
        S3Key: aurora-lambda/aurora-lambda.zip
      Runtime: dotnetcore2.1
      Timeout: 30
      VpcConfig:
        SecurityGroupIds:
          - !Ref LambdaSg
        SubnetIds: !Ref SubnetIds

Outputs:
  LambdaFunction:
    Value: !Ref HelloLambda

To deploy this stack we use the following command where we pass the parameters specific to our VPC (VpcId and SubnetIds) as well as the S3 bucket name.

$ aws cloudformation deploy --stack-name fn-stack \
--template-file aurora_lambda_template.yml \
--parameter-overrides VpcId=vpc-0b442e5d98841996c SubnetIds=subnet-013d0bbb3eca284a2,subnet-00c67cfed3ab0a791 LambdaS3Bucket=abelperez-temp \
--capabilities CAPABILITY_IAM

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - fn-stack

Let's get the outputs as we'll need this information later. We have the Lambda function full name.

$ aws cloudformation describe-stacks --stack-name fn-stack --query Stacks[*].Outputs
[
    [
        {
            "OutputKey": "LambdaFunction",
            "OutputValue": "fn-stack-HelloLambda-C32KDMYICP5W"
        }
    ]
]

Invoking Lambda function

Now that we have deployed the function and we know its full name, we can invoke it using the dotnet lambda invoke-function command. Part of this job is to prepare the payload, which is a JSON input corresponding to the Lambda input defined above.

{
    "Connection": {
        "DbUser": "master", 
        "DbPassword": "Aurora.2019", 
        "DbName": "dbtest", 
        "DbHost": "db-stack-auroramysqlcluster-xxx.rds.amazonaws.com", 
        "DbPort": 3306
    }, 
    "QueryText":"show databases;"
}

Here is the command to invoke the Lambda function, including the payload parameter with the quotes escaped, all in a single line. There are better ways to do this, but for the sake of this demonstration it's good enough.

$ dotnet lambda invoke-function \
--function-name fn-stack-HelloLambda-C32KDMYICP5W \
--payload "{ \"Connection\": {\"DbUser\": \"master\", \"DbPassword\": \"Aurora.2019\", \"DbName\": \"dbtest\", \"DbHost\": \"db-stack-auroramysqlcluster-xxx.rds.amazonaws.com\", \"DbPort\": 3306}, \"QueryText\":\"show databases;\" }" \
--region eu-west-1

Amazon Lambda Tools for .NET Core applications (2.2.0)
Project Home: https://github.com/aws/aws-extensions-for-dotnet-cli, https://github.com/aws/aws-lambda-dotnet

Payload:
[{"Database":"information_schema"},{"Database":"dbtest"},{"Database":"mysql"},{"Database":"performance_schema"}]

Log Tail:
START RequestId: 595944b5-73bb-4536-be92-a42652125ba8 Version: $LATEST
END RequestId: 595944b5-73bb-4536-be92-a42652125ba8
REPORT RequestId: 595944b5-73bb-4536-be92-a42652125ba8  Duration: 11188.62 ms   Billed Duration: 11200 ms       Memory Size: 128 MB     Max Memory Used: 37 MB

Now we can see the output in the Payload section. And that's how we can remotely query any Aurora Serverless cluster without having to set up an EC2 instance. This could be extended to handle different SQL operations such as Create, Insert, Delete, etc.
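
If you'd rather not depend on the dotnet Lambda tooling, the same invocation can be done with the plain AWS CLI. Here's a rough equivalent, assuming the payload above is saved to a file named payload.json (the file names are just an assumption):

# invoke the function, writing the response to response.json
$ aws lambda invoke \
--function-name fn-stack-HelloLambda-C32KDMYICP5W \
--payload file://payload.json \
--region eu-west-1 \
response.json
# inspect the result
$ cat response.json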

Monday 4 February 2019

Querying Aurora serverless database remotely using Lambda - part 2

This post is part of a series

In the previous part, we've set up the base layer to deploy our resources. At this point we can create the database cluster.

Aurora DB Cluster

Assuming we have our VPC ready with at least two subnets to comply with high-availability best practices, let's create our cluster. I've put together a CloudFormation template that includes:

  • AWS::EC2::SecurityGroup contains inbound traffic rule to allow port 3306
  • AWS::RDS::DBSubnetGroup contains a group of subnets to deploy the cluster
  • AWS::RDS::DBCluster contains all the parameters to create the database cluster

Here is the full template. The only required parameters are VpcId and SubnetIds, but feel free to override any of the database cluster parameters such as database name, user name, password, etc. The template outputs the values for the newly created resources: the database cluster DNS endpoint, the port and the security group.

Description: Template to create a serverless aurora mysql cluster

Parameters: 
  DbClusterDatabaseName: 
    Type: String
    Default: dbtest
  DbClusterIdentifier: 
    Type: String
    Default: serverless-mysql-aurora
  DbClusterParameterGroup: 
    Type: String
    Default: default.aurora5.6
  DbClusterMasterUsername: 
    Type: String
    Default: master
  DbClusterMasterPassword: 
    Type: String
    Default: Aurora.2019
  DbClusterPort: 
    Type: Number
    Default: 3306
  VpcId: 
    Type: String
  SubnetIds: 
    Type: CommaDelimitedList

Resources:
  DbClusterSg:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow MySQL port to client host
      VpcId:
        Ref: VpcId
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: !Ref DbClusterPort
          ToPort: !Ref DbClusterPort
          CidrIp: 0.0.0.0/0

  DbSubnetGroup: 
    Type: "AWS::RDS::DBSubnetGroup"
    Properties: 
      DBSubnetGroupDescription: "aurora subnets"
      SubnetIds: !Ref SubnetIds

  AuroraMysqlCluster:
    Type: AWS::RDS::DBCluster
    Properties:
      DatabaseName:
        Ref: DbClusterDatabaseName
      DBClusterParameterGroupName:
        Ref: DbClusterParameterGroup
      DBSubnetGroupName:
        Ref: DbSubnetGroup
      Engine: aurora
      EngineMode: serverless
      MasterUsername:
        Ref: DbClusterMasterUsername
      MasterUserPassword:
        Ref: DbClusterMasterPassword
      ScalingConfiguration:
        AutoPause: true
        MinCapacity: 2
        MaxCapacity: 4
        SecondsUntilAutoPause: 1800
      VpcSecurityGroupIds:
        - !Ref DbClusterSg
        
Outputs:
  DbClusterEndpointAddress:
    Value: !GetAtt AuroraMysqlCluster.Endpoint.Address
  DbClusterEndpointPort:
    Value: !GetAtt AuroraMysqlCluster.Endpoint.Port
  DbClusterSgId:
    Value: !Ref DbClusterSg

To deploy this stack we use the following command where we pass the parameters specific to our VPC (VpcId and SubnetIds).

$ aws cloudformation deploy --stack-name db-stack \
--template-file aurora_cluster_template.yml \
--parameter-overrides VpcId=vpc-0b442e5d98841996c SubnetIds=subnet-013d0bbb3eca284a2,subnet-00c67cfed3ab0a791

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - db-stack

Let's get the outputs as we'll need this information later. We have the cluster endpoint DNS name and the port as per our definition.

$ aws cloudformation describe-stacks --stack-name db-stack --query Stacks[*].Outputs
[
    [
        {
            "OutputKey": "DbClusterEndpointAddress",
            "OutputValue": "db-stack-auroramysqlcluster-1d1udg4ringe4.cluster-cnfxlauucwwi.eu-west-1.rds.amazonaws.com"
        },
        {
            "OutputKey": "DbClusterSgId",
            "OutputValue": "sg-072bbf2078caa0f46"
        },
        {
            "OutputKey": "DbClusterEndpointPort",
            "OutputValue": "3306"
        }
    ]
]

In the next part, we'll create the Lambda function to query this database remotely.

Wednesday 30 January 2019

Querying Aurora serverless database remotely using Lambda - part 1

This post is part of a series

Why aurora serverless?

For those who like experimenting with new technology, AWS feeds us a lot of new stuff every year at the re:Invent conference, and many more times throughout the year. In August 2018, Amazon announced the general availability of Aurora Serverless. I was intrigued by this totally new way of doing databases, so I started playing with it.

The first road block I found was connectivity from my local environment. I wanted to connect to my new cluster using my traditional MySQL Workbench client. It turned out to be one of the limitations clearly explained by Amazon, which still applies as of today:

You can't give an Aurora Serverless DB cluster a public IP address. You can access an Aurora Serverless DB cluster only from within a virtual private cloud (VPC) based on the Amazon VPC service.

Most common workarounds involve the use of an EC2 instance, either to run a MySQL client from there or to SSH-tunnel and allow connections from outside the VPC. In both cases we'll be charged for the use of the EC2 instance. At the moment of this writing there is a solution for that, but it's still in beta: the Data API.

With all that said, I decided to explore my own way in the meantime by creating a serverless approach involving Lambda to query my database.

Setting up the base - VPC

First, we need a VPC. For testing only, it doesn't really matter if we just use the default VPC in the current region. But if you'd like to get started with a blueprint, there is this public CloudFormation template; it contains a sample VPC with two public subnets and two private subnets as well as all the required resources to guarantee connectivity (IGW, NAT, route tables, etc.).

I prefer to isolate my experiments from the rest of the resources, that's why I came up with a template containing the bare minimum to get a VPC up and running. It only includes:

  • AWS::EC2::VPC the VPC itself - default CIDR block 10.192.0.0/16
  • AWS::EC2::Subnet private subnet 1 - default CIDR block 10.192.20.0/24
  • AWS::EC2::Subnet private subnet 2 - default CIDR block 10.192.21.0/24

Here is the full template. The only required parameter is EnvironmentName, but feel free to override any of the CIDR blocks. The template outputs the IDs corresponding to the newly created resources: the VPC and the subnets.

Description: >-
  This template deploys a VPC, with two private subnets spread across 
  two Availability Zones. This VPC does not provide any internet 
  connectivity resources such as IGW, NAT Gw, etc.

Parameters:
  EnvironmentName:
    Description: >-
      An environment name that will be prefixed to resource names
    Type: String

  VpcCIDR:
    Description: >-
      Please enter the IP range (CIDR notation) for this VPC
    Type: String
    Default: 10.192.0.0/16

  PrivateSubnet1CIDR:
    Description: >-
      Please enter the IP range (CIDR notation) for the private subnet 
      in the first Availability Zone
    Type: String
    Default: 10.192.20.0/24

  PrivateSubnet2CIDR:
    Description: >-
      Please enter the IP range (CIDR notation) for the private subnet 
      in the second Availability Zone
    Type: String
    Default: 10.192.21.0/24

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCIDR
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Ref EnvironmentName

  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 0, !GetAZs '' ]
      CidrBlock: !Ref PrivateSubnet1CIDR
      MapPublicIpOnLaunch: false
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName} Private Subnet (AZ1)

  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [ 1, !GetAZs '' ]
      CidrBlock: !Ref PrivateSubnet2CIDR
      MapPublicIpOnLaunch: false
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName} Private Subnet (AZ2)

Outputs:
  VPC:
    Description: A reference to the created VPC
    Value: !Ref VPC

  PrivateSubnet1:
    Description: A reference to the private subnet in the 1st Availability Zone
    Value: !Ref PrivateSubnet1

  PrivateSubnet2:
    Description: A reference to the private subnet in the 2nd Availability Zone
    Value: !Ref PrivateSubnet2

To create a stack from this template we run the following command (or go to the AWS console and upload the template).

$ aws cloudformation deploy --stack-name vpc-stack \
--template-file vpc_template.yml \
--parameter-overrides EnvironmentName=Dev

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - vpc-stack

After successful stack creation, we can get the outputs as we'll need them for the next step.

$ aws cloudformation describe-stacks --stack-name vpc-stack --query Stacks[*].Outputs
[
    [
        {
            "Description": "A reference to the private subnet in the 1st Availability Zone",
            "OutputKey": "PrivateSubnet1",
            "OutputValue": "subnet-013d0bbb3eca284a2"
        },
        {
            "Description": "A reference to the private subnet in the 2nd Availability Zone",
            "OutputKey": "PrivateSubnet2",
            "OutputValue": "subnet-00c67cfed3ab0a791"
        },
        {
            "Description": "A reference to the created VPC",
            "OutputKey": "VPC",
            "OutputValue": "vpc-0b442e5d98841996c"
        }
    ]
]
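
Since the next stacks take these values as parameters, it can be handy to capture them into shell variables instead of copying them by hand. A small sketch using the same describe-stacks call with a JMESPath filter:

$ VPC_ID=$(aws cloudformation describe-stacks --stack-name vpc-stack \
--query "Stacks[0].Outputs[?OutputKey=='VPC'].OutputValue" --output text)
$ SUBNET1_ID=$(aws cloudformation describe-stacks --stack-name vpc-stack \
--query "Stacks[0].Outputs[?OutputKey=='PrivateSubnet1'].OutputValue" --output text)
$ echo $VPC_ID $SUBNET1_ID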

In the next part, we'll create the Database cluster using these resources as a base layer.