Hosting a static site using Hakyll and AWS

January 21, 2019

After getting fed up with WordPress, I wanted to migrate onto some sort of static site generation. And since you're reading these words right now, it looks like that migration succeeded!

I ended up settling on Hakyll and Amazon S3 as my build and hosting solutions, and I'm pretty happy with both. (Although some of the other elements of the AWS ecosystem, not so much. More on that later.) This post is intended more as a reference for myself, in case I need to tear things down or things go haywire.

Why Hakyll?

Honestly, because I felt like it. It's Haskell, it seems interesting, so why not. Yeah, yeah, I know it's not really a great reason.

The biggest advantage is that it comes with Pandoc support right out of the box, which lets me be very flexible in how I write my posts/content. In particular, being able to compile Literate Haskell files without needing to jump through any hoops is massive.

Hakyll is also very much "configuration over convention," and the entire site specification is just a normal Haskell program. While the amount of flexibility and customization is a little overkill for what I have right now, it's comforting to know that I'll never have to twist the site out of shape to get it to do something unusual.
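
To give a flavor of what that looks like, here's a minimal, hypothetical site.hs sketch — the rules and template paths below are placeholders, not this site's actual configuration:

    -- site.hs: a bare-bones Hakyll site specification (sketch)
    {-# LANGUAGE OverloadedStrings #-}
    import Hakyll

    main :: IO ()
    main = hakyll $ do
        -- Compile templates first so posts can reference them
        match "templates/*" $ compile templateCompiler

        -- Render every post through Pandoc into the default template
        match "posts/*" $ do
            route $ setExtension "html"
            compile $ pandocCompiler
                >>= loadAndApplyTemplate "templates/default.html" defaultContext

Because this is ordinary Haskell, anything unusual — custom routes, generated pages, odd compilers — is just more code in the same program.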

Cool, so how is this site working at all?

The 10,000-meter view of this blog is:

  1. I write blog posts in $EDITOR and keep track of them with Git
  2. New posts get pushed up to AWS CodeCommit (a.k.a. AWS' heavily gimped version of GitHub)
  3. I trigger a build of the site using AWS CodeBuild, using a custom-built Docker image with Hakyll installed
  4. Once finished, CodeBuild uploads the site assets to S3
  5. AWS CloudFront and Route 53 handle the arcane magic of connecting the URL in your browser to S3
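
For reference, steps 3 and 4 on the CodeBuild side can be driven by a buildspec.yml along these lines. This is a sketch: the bucket name, output directory, and site executable name are assumptions, not necessarily what this blog uses.

    # buildspec.yml (sketch; bucket and paths are placeholders)
    version: 0.2
    phases:
      build:
        commands:
          - stack build --system-ghc
          - stack exec site build        # generates the site into _site/
      post_build:
        commands:
          - aws s3 sync _site "s3://my-blog-bucket/" --delete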

Most of this should be pretty easy to set up, although there are a few subtleties:

  • The Docker image I use is actually built using a NixOS base image, because I like making things even harder for myself. You probably don't need to do this, and can just build your image off of the Debian-based Haskell images.

    • Regardless of what base image you use, you'll likely want to add a ~/.stack/global-project/stack.yaml to your image with a resolver specified. Use the same resolver as your actual Hakyll project. For example, I have the following lines in my Dockerfile:

      ADD ./stack.yaml /root/.stack/global-project/stack.yaml
      RUN stack build --system-ghc hakyll

      This way, I don't have to spend time compiling Hakyll itself when building the site in CI.
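
      For example, the global stack.yaml might look like the following — the resolver here is just an illustration; use whatever your site's own stack.yaml pins:

        # ~/.stack/global-project/stack.yaml
        resolver: lts-13.0
        packages: []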

    • If you do use a NixOS base, you'll need to:

      1. Add a new CA certificate so that you can make HTTPS requests to GitHub.

        # Dockerfile
        # update-ca-certificates takes no arguments; it picks up any .crt
        # file placed under /usr/local/share/ca-certificates
        ADD ./digicert.crt /usr/local/share/ca-certificates/digicert.crt
        RUN apk update
        RUN apk add ca-certificates
        RUN update-ca-certificates
      2. Configure Stack to not do pure Nix builds, because otherwise locale information won't get passed into the build environment, and Hakyll won't be able to render Unicode characters.

        # Dockerfile
        ENV LANG en_US.UTF-8
        ENV LANGUAGE en_US:en
        ENV LC_ALL en_US.UTF-8

        # ~/.stack/config.yaml, within your image
        nix:
          enable: true
          pure: false
  • You either need to give CloudFront special access to the S3 bucket (e.g. via an origin access identity, which I didn't do) or mark the S3 bucket as publicly accessible. Unfortunately, there's no way to do the latter properly without creating an actual JSON policy document; otherwise, everything you upload will still be private, even if the bucket itself looks public. Thanks, AWS. Here's the one I'm using:

      "Version": "2012-10-17",
      "Statement": [
          "Sid": "AllowPublicRead",
          "Effect": "Allow",
          "Principal": {
            "AWS": "*"
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::${BUCKET_NAME}/${FOLDER}/*"

    Don't forget the wildcard on the resource name.
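
    If you prefer the CLI to the console, a policy document like this can be attached with aws s3api (the bucket name here is a placeholder):

      aws s3api put-bucket-policy \
        --bucket my-blog-bucket \
        --policy file://policy.json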

  • When you create your SSL certificate, don't forget to request both the wildcard (e.g. *.your-domain.com) and the bare apex domain!

  • CloudFront caches pages for around a day, so don't expect instant updates. Use the S3 URLs directly if you need to see the results immediately.
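
    Alternatively, you can force CloudFront to refetch everything by creating an invalidation — something like the following, where the distribution ID is a placeholder:

      aws cloudfront create-invalidation \
        --distribution-id E1234EXAMPLE \
        --paths "/*"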

  • If you have the choice, don't use CodeCommit and CodeBuild. They're just crappier versions of better options elsewhere, like GitLab. Use those better options instead. For instance, you may wonder why I don't use CodePipeline to automate the deploy process. Well, there doesn't seem to be any way to stop CodePipeline from overriding where the site's build artifacts go! Lovely.

So yeah, there you have it. A (kind of) working build and deployment system for a Hakyll-based site! Seriously, just go use GitLab or GitHub Pages.

