In my earlier post, we went over how to develop a static blog with Jekyll. In this post, we will go through how to set up a static site/blog on AWS S3 and make it easy to deploy with one command. Let’s get started.

If you don’t have an AWS account, now is a good time to create one. AWS makes it easy to deploy almost anything at almost any scale, and on sign-up it offers a free tier for a year so you can play around.

AWS services I’ll use:

There are hundreds of services in AWS, but for a static site like this one we only need three. Let’s start with a brief overview of each.

  • IAM (Identity and Access Management) is used to set up users and their permissions. It’s a free service that keeps your infrastructure secure through user and group policies. A good understanding of IAM is essential for any serious production application.
  • S3 (Simple Storage Service) is object storage: it stores files and serves them exactly as they are. This means S3 will not run scripts, binary executables or any application directly. It is mainly used to store images, videos or logs at low cost.
  • Route53 is AWS’s DNS management service. This is where I’ll add my blog’s domain name and point it to S3. A hosted zone costs $0.50 per month; CloudFlare, which offers a free tier, can be used instead.

Note: You might be overwhelmed by the amount of configuration this takes for a blog, but Amazon S3 provides 99.999999999% durability and 99.99% availability. The same setup can be used to host a static site for a big trailer or product-preview launch, with the assurance that a huge amount of traffic won’t bring down the site.

Create a user using IAM:

Using IAM we will create a user with permission to deploy blog files from your computer to S3 with one command. Before that, log into the AWS console and go to the IAM section, where you should see some security warnings. It’s good practice to activate multi-factor authentication on the root account, use a separate account for everything else, and keep the root account safe and locked away, but that’s out of scope for this post.

After securing your account, create the dedicated user: click “Users”, then “Add User”, enter a username (e.g. Deployer) and select “Programmatic access” as the access type. You don’t need to attach any permissions to the user, so skip the permissions section, then review and create the user. Once the user is created you will get an Access Key ID and a Secret Access Key; save them, as they are shown only once. Now go back to Users, click “Deployer”, and note down the user’s ARN (Amazon Resource Name). It looks something like arn:aws:iam::xxxxxxxxxxx:user/Deployer.
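As an aside, an ARN is just a colon-separated identifier: the literal arn, a partition, a service, a region, an account ID and a resource. A small sketch (the account ID here is a made-up placeholder) that pulls those pieces apart:

```python
# Split an ARN into its named fields. IAM is a global service, so its
# ARNs leave the region field empty.
def parse_arn(arn):
    partition, service, region, account, resource = arn.split(":", 5)[1:]
    return {"partition": partition, "service": service,
            "region": region, "account": account, "resource": resource}

parts = parse_arn("arn:aws:iam::123456789012:user/Deployer")
```

Knowing the shape helps when you later paste the ARN into the bucket policy.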

Set up S3 for files:

We will create a bucket (a top-level container for files) on S3 to store everything for the blog. Navigate to the AWS S3 console and create a bucket with a unique name, accepting all the defaults. After the bucket is created, click on it, then click “Properties”. You should see an option named “Static Website”; click it, select “Use this bucket to host a website”, enter index.html in the Index Document field and save the changes.
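If you prefer the command line, the same setting can be applied with the aws s3api put-bucket-website command. A sketch that builds the JSON configuration it expects and prints the full command; the bucket name is a placeholder, and the 404.html error page is an optional extra the console steps above don’t set:

```python
import json

# The static-website settings the console applies, as the JSON document
# that `aws s3api put-bucket-website --website-configuration` accepts.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "404.html"},  # optional custom error page
}

command = ("aws s3api put-bucket-website --bucket YOUR_BUCKET_NAME "
           f"--website-configuration '{json.dumps(website_config)}'")
print(command)
```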

By default a newly created bucket is private to everyone. To make the files in it visible to the world, a permission statement (known as a bucket policy) needs to be added. Select the bucket, click the “Permissions” tab, then “Bucket Policy”, and paste in a JSON policy like the one below:

  
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::YOUR_ACCOUNT_ID:user/DEPLOY_USER_NAME"
          },
          "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:ListBucket"
          ],
          "Resource": [
            "arn:aws:s3:::YOUR_BUCKET_NAME",
            "arn:aws:s3:::YOUR_BUCKET_NAME/*"
          ]
        },
        {
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
        }
      ]
    }
  

This might look like gobbledygook, but do not worry; let me explain what it does. There are two properties in the JSON object: “Version” and “Statement”. The Version is reserved by AWS so that new versions of the policy language can be introduced while staying backwards compatible. The Statement section is where all the permissions for this bucket go. The first object in Statement allows the user we created earlier to retrieve, upload or list files in the bucket. The second allows anyone to read files from the bucket, but not to upload or list anything. Makes more sense now, right?
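One subtlety worth calling out: s3:ListBucket acts on the bucket itself, while s3:GetObject and s3:PutObject act on the objects inside it, which is why the first statement needs both Resource entries. A sketch that builds the same policy with your own names substituted (the ARN and bucket name here are placeholders):

```python
import json

DEPLOYER_ARN = "arn:aws:iam::123456789012:user/Deployer"  # placeholder
BUCKET = "YOUR_BUCKET_NAME"                               # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # The deploy user may read, write and list.
            "Effect": "Allow",
            "Principal": {"AWS": DEPLOYER_ARN},
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",    # ListBucket targets the bucket
                f"arn:aws:s3:::{BUCKET}/*",  # Get/PutObject target objects
            ],
        },
        {   # Anyone may read objects, and nothing else.
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```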

Setting up Route53:

Now we need to point a domain name at S3, so that when anyone enters the domain name it resolves to the bucket and serves the files. Open the Route53 console and click Create Hosted Zone. Add your domain name, select “Public hosted zone” as the type of the zone and click create. AWS will give you a list of name servers; use these at your domain registrar to point the domain to AWS. The exact steps vary by registrar, but your provider should have documentation on this.

Back in Route53, click on the newly created hosted zone and add a new record set. Leave the name field empty and keep the type as “A - IPv4 address”. Select the “Yes” radio button next to Alias. When you click the Alias Target input box, the S3 static site endpoint should appear as a suggestion; select it. The domain name now points to the bucket on S3.
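For reference, what the console builds behind that form is a record-set change request. A sketch of the equivalent change batch, assuming a bucket in us-west-2 (the region configured later); the domain is a placeholder, and each region’s S3 website endpoint has its own fixed hosted zone ID, which you should check against the AWS endpoints table:

```python
import json

DOMAIN = "example.com"                 # placeholder: your domain
S3_WEBSITE_ZONE_ID = "Z3BJ6K6RIION7M"  # assumed: us-west-2 website endpoint

change_batch = {
    "Changes": [{
        "Action": "UPSERT",  # create the record, or update it if present
        "ResourceRecordSet": {
            "Name": DOMAIN,
            "Type": "A",  # an alias record is an A record without an IP
            "AliasTarget": {
                "HostedZoneId": S3_WEBSITE_ZONE_ID,
                "DNSName": "s3-website-us-west-2.amazonaws.com",
                "EvaluateTargetHealth": False,
            },
        },
    }]
}
print(json.dumps(change_batch, indent=2))
```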

Let’s automate the deployment

You probably don’t want to upload each file by hand whenever it changes; that would be tedious. Luckily, the AWS CLI has a command we can use to easily sync a local directory with our S3 bucket.

Follow the aws-cli installation guide to install it on your machine. Once it’s installed, run aws configure and enter the Access Key ID and Secret Access Key of the user created earlier. Now try aws s3 ls, which should list all your buckets in S3. To sync the local copy of the blog with the bucket, run aws s3 sync _site/. s3://YOUR_BUCKETNAME. Remember, Jekyll puts the compiled files under _site, and those are the only files you need to upload.

  
    $ aws configure
    AWS Access Key ID [None]: Access-Key-ID
    AWS Secret Access Key [None]: Secret-Access-Key
    Default region name [None]: us-west-2
    Default output format [None]: json

    $ aws s3 sync _site/. s3://YOUR_BUCKETNAME
  
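Under the hood, sync doesn’t re-upload everything; it compares each local file against the bucket listing and only transfers what is new or different. A rough, purely local sketch of that decision, where the remote dict stands in for the bucket listing (the real tool compares size and last-modified time; this sketch uses size only):

```python
import os

def files_to_upload(local_dir, remote):
    """Return relative paths under local_dir that are missing from
    `remote` (a dict of relative path -> size) or whose size differs."""
    changed = []
    for root, _dirs, names in os.walk(local_dir):
        for name in names:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, local_dir)
            if remote.get(rel) != os.path.getsize(path):
                changed.append(rel)  # new file, or size mismatch
    return sorted(changed)
```

This is why a routine deploy of a small blog edit takes seconds, even if _site holds hundreds of files.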

Now try to access the domain; if all went well you should see your blog. Remember that nameserver changes can take some time to propagate across the Internet, so if you can’t see anything yet, check again in an hour or so.

I know this process looks complex, and there are easier ways to run a blog. However, this is probably the cheapest and simplest way to get a site that is reliable and scalable. It also gives you some hands-on experience with AWS (if you want it).