Saturday 27 January 2018

Serverless static website - part 4

Security is an aspect that should never be left behind, even if we are only displaying content to the general public. Google, as part of its strategy to favour secure traffic, has started to penalise websites served only over HTTP, so we are going to follow this idea and add HTTPS with an SSL certificate to this website.

Requesting a certificate

On the AWS console, we have a Certificate Manager under the Security, Identity & Compliance category. Once there, we have two choices: Import a certificate or Request a certificate. We'll use the request option. For this to work with further steps, it's important to make sure we are in the N. Virginia (us-east-1) region before requesting a certificate. Click on the Request a certificate button.

In this example, I'm interested in creating a certificate that will validate the main domain abelperez.info, the subdomain www.abelperez.info and any subdomain I create in the future, that is done by using the wildcard *.abelperez.info.
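For those who prefer the command line, the same request can be made with the AWS CLI. This is just a sketch, assuming the CLI is installed and configured with credentials allowed to manage ACM:

```shell
# Request a certificate covering the apex domain and any subdomain.
# Certificates intended for CloudFront must be requested in us-east-1.
aws acm request-certificate \
    --domain-name abelperez.info \
    --subject-alternative-names '*.abelperez.info' \
    --validation-method EMAIL \
    --region us-east-1
```

The command prints the ARN of the new certificate, which we'll need in order to refer to it later.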

The next step is to validate our identity; basically, we need to prove that we do in fact own the domain we are creating the certificate for. In this case I chose Email validation. The request in progress should look like this.

Verifying domain ownership

At this point, we should receive an email (or a few of them) asking us to validate our email address by following a link.

The verification page looks like this one, just click I approve.

It's important to note that in this case, because I'm requesting a certificate for both *.abelperez.info and abelperez.info, I'll receive two requests and I have to verify both of them. If that's your case, make sure you've verified all the requests; otherwise the operation will be incomplete and the certificate won't be issued until all validations are completed. Once everything is verified, the certificate should look like this.
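The validation progress can also be checked from the CLI; the ARN below is a hypothetical placeholder for the one returned when the certificate was requested:

```shell
# Status moves from PENDING_VALIDATION to ISSUED once every
# validation email has been approved.
aws acm describe-certificate \
    --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/example-id \
    --region us-east-1 \
    --query 'Certificate.Status'
```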

Now that we have a valid certificate for our domain, it can be attached to other entities such as a CloudFront CDN distribution, an Elastic Load Balancer, etc.

Serverless static website - part 3

To www or not to www? Well, that's not the real question. Apparently, this topic has changed a little bit since the early days of the World Wide Web. In this article, there are some interesting pieces of information for those interested in the topic.

The bottom line is that whatever your choice is, there has to be consistency. In this example, the canonical name will be www.abelperez.info, but I also want anyone who browses to just abelperez.info to be redirected to the www version. How do we do that on AWS?

Creating the redirect bucket

As discussed earlier, an S3 bucket can be configured as a website, but it will only listen on a single hostname (www.abelperez.info), therefore if we want to listen on another hostname (abelperez.info), we need another S3 bucket, again matching the bucket name with the hostname.

On the AWS S3 console, create a new bucket named after your domain without the www prefix (abelperez.info in my case). This time, instead of selecting Use this bucket to host a website, let's select Redirect requests, where:

  • Target bucket or domain: the www hostname (www.abelperez.info)
  • Protocol: http for now
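The same redirect configuration can be applied from the AWS CLI; a sketch, assuming the abelperez.info bucket already exists:

```shell
# Turn the bucket into a website that redirects every request
# to the www hostname over plain HTTP.
aws s3api put-bucket-website \
    --bucket abelperez.info \
    --website-configuration '{
        "RedirectAllRequestsTo": {
            "HostName": "www.abelperez.info",
            "Protocol": "http"
        }
    }'
```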

The configuration should look like this

Creating the Record set

Now, we need to route DNS requests to the new website previously created. To do that, let's go to Route 53 console, select your hosted zone, then click on Create Record Set button. Specify the following values:

  • Name: leave it blank - it will be the non-www DNS entry
  • Type: A
  • Alias: Yes
  • Alias Target: You should see the bucket name in the list
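The equivalent record can be created with the AWS CLI. This is a sketch: ZXXXXXXXXXXXXX stands for your own hosted zone ID, and the alias hosted zone ID below is the fixed one AWS publishes for S3 website endpoints in eu-west-1 (check the AWS docs for your region):

```shell
# change-batch.json describes an alias A record at the zone apex
# pointing to the S3 website endpoint for eu-west-1.
cat > change-batch.json <<'EOF'
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "abelperez.info.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z1BKCTXD74EZPE",
        "DNSName": "s3-website-eu-west-1.amazonaws.com.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

aws route53 change-resource-record-sets \
    --hosted-zone-id ZXXXXXXXXXXXXX \
    --change-batch file://change-batch.json
```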

What have we done so far ?

We have created two DNS entries pointing to two different S3 buckets (effectively two web servers). One web server (www) will serve the content as explained earlier. The other web server (non-www) will issue a 301 redirect code pointing to the www version. This way, the browser will make a second request, now for the www version, which will be served by the www web server, delivering the content as expected.

The following diagram illustrates the workflow

                                                 +----------------+
                GET abelperez.info               |   S3 Bucket    |
                +------------------------------> | abelperez.info +----+
                |                                |                |    |
          +-----+-----+                          +----------------+    |
          |           |                                                |
      +-> | Route 53  |                                                |
      |   |           |                                                |
      |   +-----+-----+                      +--------------------+    |
      |         |                            |     S3 Bucket      |    |
      |         +--------------------------> | www.abelperez.info |    |
      |         GET www.abelperez.info       |                    |    |
      |                                      +----------+---------+    |
+-----+-----+                                           |              |
|           | <-----------------------------------------+              |
|  Browser  |        HTTP 200 - OK                                     |
|           | <--------------------------------------------------------+
+-----------+        HTTP 301 - Permanent Redirect to www

Let's test it

To test this behaviour, one easy way is to use the universal tool curl. We'll use two switches:

  • -L Follow redirects
  • -I Fetch headers only

For more information about curl, see the manpage.

There are two scenarios to test. First, when we hit the non-www host: we can see the first request gets an HTTP 301 code and the second gets HTTP 200.

abel@ABEL-DESKTOP:~$ curl -L -I http://abelperez.info
HTTP/1.1 301 Moved Permanently
x-amz-id-2: pLVO9p67k51FJpZCSbF2LxJyrB8w9WyEkgNXHF0Zq8twe3Dw1ud3OiIHRzN0y5B4wDvwngLGEBg=
x-amz-request-id: AD13DD8436422AAC
Date: Sat, 27 Jan 2018 17:50:19 GMT
Location: http://www.abelperez.info/
Content-Length: 0
Server: AmazonS3

HTTP/1.1 200 OK
x-amz-id-2: s/6E2lV7nYtfBq96Qftwip7lzMvIkOMuIq0jbwCisYU0V7ujMRisPuqPsNt2vMuBWFIYuwkqLFs=
x-amz-request-id: 1C258F07FD183836
Date: Sat, 27 Jan 2018 17:50:19 GMT
Last-Modified: Wed, 08 Nov 2017 09:10:40 GMT
ETag: "a339a5d4a0ad6bb215a1cef5221b0f6a"
Content-Type: text/html
Content-Length: 85
Server: AmazonS3

The second scenario is going directly to the www host, where the request gets HTTP 200 straight away.

abel@ABEL-DESKTOP:~$ curl -L -I http://www.abelperez.info
HTTP/1.1 200 OK
x-amz-id-2: apCkPouaYzoy5gjemxU+BjDLbQxxE46EUhDXBHirq6PK0OZbubP2BVhWllxlSV99zg5UB3tGbd8=
x-amz-request-id: D928A1DF3B3EB0DE
Date: Sat, 27 Jan 2018 18:18:19 GMT
Last-Modified: Wed, 08 Nov 2017 09:10:40 GMT
ETag: "a339a5d4a0ad6bb215a1cef5221b0f6a"
Content-Type: text/html
Content-Length: 85
Server: AmazonS3

Wednesday 24 January 2018

Serverless static website - part 2

In the previous part I explained how to create an S3 bucket and make it behave like a web server; then we put some files in it and were able to browse them publicly using the HTTP endpoint provided by S3. However, this is not the most appealing URL to associate with our website; we most likely want to own a domain name.

Buying a domain name

If we want a domain, there are a lot of places where we can buy one; they are usually cheap, unless we go crazy with the name. Here is the list of ICANN-Accredited Registrars where you can choose your favourite. There is one particular domain registrar that might be convenient: if you search for amazon in that list, it will come up. Yes, Amazon also sells domain names. In this example I purchased mine through them, just to keep everything in the same place; you are free to choose your own registrar, as it doesn't make a big difference when setting everything up.

Route 53 is the service that manages DNS on AWS. To register a domain, once in the Route 53 console, go to Domains / Registered domains. Then choose a domain name with your favourite TLD and follow the process; it's that simple.

When you register a new domain, it can sometimes take a couple of days to complete the process. This is indicated in the console by listing the new domain under Pending requests; once the process is complete, it will appear under Registered domains. Your console should look like this by then. For more information about registering a domain with Route 53, see here.

Creating a Public Hosted Zone

Once you have a domain name of your own, whether registered with Route 53 or with another registrar, it's time to create a public hosted zone, which is basically a container for all DNS records associated with your domain and subdomains. To do that, go to the Route 53 console, choose Hosted Zones, hit the Create Hosted Zone button, then provide the domain name and optionally a comment; also make sure Public Hosted Zone is selected. When it's done, it should look like this.
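The hosted zone can also be created from the CLI; a sketch, assuming the AWS CLI is configured. The caller reference just has to be unique per request, so a timestamp works:

```shell
aws route53 create-hosted-zone \
    --name abelperez.info \
    --caller-reference "$(date +%s)" \
    --hosted-zone-config Comment="Public zone for abelperez.info"
```

The response includes the four name servers assigned to the zone and the hosted zone ID, which is needed for any record operations later.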

If you chose to buy your domain with a different registrar, then you'll have to update your name servers with the new ones created by the Hosted Zone. See AWS docs on how to do that, here.

Creating a Record Set

In the AWS world, a Record Set is similar to a standard DNS record, but with some extensions. In this particular case, we'll use an Alias. To create a record, select the Hosted Zone and click the Create Record Set button. Specify the following values:

  • Name: www
  • Type: A
  • Alias: Yes
  • Alias Target: You should see the bucket name in the list

It's important to note that the name of the bucket where we are hosting our files must match the record name. In this case my record is www.abelperez.info, which is exactly the name of the bucket; otherwise Route 53 won't be able to make the association between the two.

Click on Create button, and that's it. We've just linked the domain name to a serverless web server for static files.
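Once DNS has propagated, a quick sanity check from the command line (assuming dig is available) confirms the alias is resolving:

```shell
# The alias record resolves to the S3 website endpoint's IP addresses;
# any answer here means the record is live.
dig +short www.abelperez.info A
```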

Let's test it

To test all this, it's very simple, let's browse to http://www.abelperez.info (your domain of course) and see what happens

Your browser content should display this

For more information about Route 53 Alias records values, see here.

Serverless static website - part 1

Cloud computing has become one of the most popular practices nowadays, but it doesn't have to be limited to a small group of specialists. In fact, I'll show you how to take advantage of this new thing called "serverless" on AWS to host a static website.

The idea behind this is not to worry about the underlying infrastructure, AWS will handle that. In this case, we'll start by using one of the oldest services they provide, Amazon S3.

Accessing Amazon AWS

If you are new to Amazon AWS, you should start by creating a new account and using the 1-year free tier, which allows a lot of room for playing with it.

In this example I use a real domain I already own and a (very) simple HTML file.

Creating the S3 bucket

Once you've logged in to the AWS console, choose Services from the top left menu, then S3 in the Storage category, or just browse to https://s3.console.aws.amazon.com/s3/

Click on Create Bucket and follow the wizard. It's important to note that if you're going to use your own domain name within AWS (using Route 53), the name of your bucket has to match the domain name. In this case I'm using www.abelperez.info, therefore that's my bucket name. If you plan to use a CDN (a CloudFront distribution), this is not strictly required; however, I encourage the practice anyway as it keeps the buckets organised by purpose. For more information about how to create a bucket, see here.
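The bucket can also be created from the command line; a sketch, assuming the AWS CLI is configured. Note that outside us-east-1 the region has to be stated in the location constraint as well:

```shell
aws s3api create-bucket \
    --bucket www.abelperez.info \
    --region eu-west-1 \
    --create-bucket-configuration LocationConstraint=eu-west-1
```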

Your S3 console should look similar to this

Converting the S3 bucket into a website

Next step: on the same console page, select the bucket and click on Properties, then find Static website hosting. In the expanded form, choose the option Use this bucket to host a website and specify the index and error documents accordingly, typically the same as proposed by the placeholder text. Save, and take note of the format of your HTTP endpoint (public URL), which is like http://www.abelperez.info.s3-website-eu-west-1.amazonaws.com

It follows the convention of http://<your-bucket-name>.s3-website-<region-where-it-was-created>.amazonaws.com
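The same static website hosting setting can be enabled with the CLI's high-level helper, assuming the bucket from the previous step:

```shell
aws s3 website s3://www.abelperez.info/ \
    --index-document index.html \
    --error-document error.html
```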

It should look like this

Granting public read access to all files

Granting public read access to all objects is usually deemed a bad practice, but in this particular case (and for now), we need a way to make all the files we upload to the bucket public. S3 buckets allow a very granular permission system on all objects inside a bucket, which means that every time we upload a file, we are responsible for choosing the right permissions. However, this process can be tedious and prone to error (it's easy to forget).

Let's add a Bucket Policy that will allow any client to read the objects inside our bucket. To do that, select the bucket in the console, click on the Permissions tab, then click on Bucket Policy and add the following text, replacing www.abelperez.info with your bucket name:

{
    "Version": "2012-10-17",
    "Id": "PublicReadAccess",
    "Statement": [
        {
            "Sid": "GrantPublicReadAccess",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.abelperez.info/*"
        }
    ]
}

Creating AWS policies is out of the scope of this post, but basically this policy allows the s3:GetObject action on all objects inside the bucket named www.abelperez.info.
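If you prefer the CLI, the same policy can be applied by saving the JSON above as policy.json (a hypothetical local file name) and running:

```shell
aws s3api put-bucket-policy \
    --bucket www.abelperez.info \
    --policy file://policy.json
```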

Finally, upload some files

Just for demonstration purposes, I've uploaded a single HTML file, which at the moment of this writing is located at https://www.abelperez.info/index.html. You can use any HTML, CSS, JavaScript or image files you like.

One way of uploading files to an S3 bucket is by using the console, as we've done for all the other operations: select the bucket, and using the Upload and Create folder buttons you can recreate your website directory tree. There are better ways to upload files, for example using the command line or the REST API.
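As a sketch of the command line route, assuming the AWS CLI is configured and the site lives in a local ./website directory:

```shell
# Mirror the local directory to the bucket; only new or changed
# files are uploaded on subsequent runs.
aws s3 sync ./website s3://www.abelperez.info/
```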

Your bucket content should look like this

Let's test it

To test all this, it's actually very simple: let's take the HTTP endpoint from earlier and paste it into the browser: http://www.abelperez.info.s3-website-eu-west-1.amazonaws.com

Your browser content should display this

For full AWS documentation about this topic, see here