Cloud Resume Challenge

Introduction

After earning the AWS Solutions Architect Associate certification, I wanted a way to validate my skills and gain a deeper, hands-on understanding of how AWS services actually work. That is when I stumbled upon the Cloud Resume Challenge.

The Cloud Resume Challenge was created by Forrest Brazeal to help people build their cloud engineering skills. I decided to take this challenge to start getting my feet wet in the cloud. The project involves creating a simple website that serves as a resume and deploying it using cloud services (AWS in my case). Rather than giving explicit instructions to follow, Forrest provides a guideline of what the finished product should look like. Some examples include setting up a static website, incorporating dynamic content with serverless functions, utilizing CI/CD pipelines, and securing the infrastructure.


Creating the site

The challenge’s first requirement is to build a basic website in HTML and CSS that serves as your resume. I chose a template I found online and modified its contents to fit my needs. In addition to my resume, I included a projects/portfolio section for future projects. Once the HTML and CSS were complete, I started building out my infrastructure.

One of the challenge’s requirements is to build the infrastructure with IaC (Infrastructure as Code). A common tool used with AWS is CloudFormation, but I decided to use Terraform. Almost all of the project’s infrastructure was created with Terraform, aside from a few resources such as the accounts themselves.


S3

Within AWS, the natural choice for hosting a static site is S3 (Simple Storage Service) because it is scalable and cost-effective. The S3 bucket holds the majority of the site’s assets, including the HTML, CSS, JavaScript, and photos. While you can serve the website directly from the S3 endpoint, that is not ideal: every request travels to the bucket’s region, which can be slow for distant visitors. This is where a CDN comes into play.
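
To give a sense of what this looks like in Terraform, here is a minimal sketch of the bucket and one uploaded asset. The bucket name and file paths are placeholders rather than my actual values, and the real configuration contains more objects and settings:

```hcl
# Bucket that holds the static site assets (name is a placeholder)
resource "aws_s3_bucket" "site" {
  bucket = "example-resume-site"
}

# Upload the resume page itself
resource "aws_s3_object" "index" {
  bucket       = aws_s3_bucket.site.id
  key          = "index.html"
  source       = "${path.module}/site/index.html"
  content_type = "text/html"
  etag         = filemd5("${path.module}/site/index.html")
}
```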


CloudFront

CloudFront is AWS’s CDN (content delivery network), which I used to cache my website content for better performance. Instead of end users requesting content directly from the S3 endpoint, CloudFront caches the site at the edge location closest to the user and serves it from that cache. Because CloudFront now fronts the site, I turned off all public access to the S3 bucket and allowed only CloudFront to read from it.
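
Below is a rough Terraform sketch of that setup, using an Origin Access Control so CloudFront can read from the otherwise-private bucket. This is just one way to wire it up (the older Origin Access Identity approach also works), the names are placeholders, and the real distribution has more settings, including the custom domain and certificate covered in the next section:

```hcl
# Block every form of public access to the bucket
resource "aws_s3_bucket_public_access_block" "site" {
  bucket                  = aws_s3_bucket.site.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Origin Access Control lets CloudFront sign its requests to the private bucket
resource "aws_cloudfront_origin_access_control" "site" {
  name                              = "resume-site-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_cloudfront_distribution" "site" {
  enabled             = true
  default_root_object = "index.html"

  origin {
    origin_id                = "s3-site"
    domain_name              = aws_s3_bucket.site.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.site.id
  }

  default_cache_behavior {
    target_origin_id       = "s3-site"
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  # Placeholder; replaced by the ACM certificate once the custom domain is attached
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

# Bucket policy that only allows reads from this specific distribution
data "aws_iam_policy_document" "cloudfront_read" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.site.arn}/*"]

    principals {
      type        = "Service"
      identifiers = ["cloudfront.amazonaws.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "AWS:SourceArn"
      values   = [aws_cloudfront_distribution.site.arn]
    }
  }
}

resource "aws_s3_bucket_policy" "site" {
  bucket = aws_s3_bucket.site.id
  policy = data.aws_iam_policy_document.cloudfront_read.json
}
```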


Route53

I purchased a custom domain through Route53, which also serves as the site’s DNS. With the domain registered, I used ACM (AWS Certificate Manager) to issue a certificate so the connection is secured over HTTPS. Purchasing the domain alone is not enough to reach the site, though; the next step was to create DNS records that route requests for my domain to the CloudFront distribution. This is easily done with a DNS A record that points to the CloudFront distribution’s domain name.
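
Here is a hedged sketch of the certificate and alias record in Terraform. The domain is a placeholder, the DNS validation records for the certificate are omitted, and note that a certificate used by CloudFront has to be issued in us-east-1:

```hcl
# CloudFront requires its ACM certificate to live in us-east-1
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

# Hosted zone created when the domain was registered (placeholder name)
data "aws_route53_zone" "site" {
  name = "example.com"
}

# Certificate for the custom domain (validation records not shown)
resource "aws_acm_certificate" "site" {
  provider          = aws.us_east_1
  domain_name       = "example.com"
  validation_method = "DNS"
}

# Alias A record that points the domain at the CloudFront distribution
resource "aws_route53_record" "site" {
  zone_id = data.aws_route53_zone.site.zone_id
  name    = "example.com"
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.site.domain_name
    zone_id                = aws_cloudfront_distribution.site.hosted_zone_id
    evaluate_target_health = false
  }
}
```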


Dynamic Content

You may have noticed that on the home page there is a visitor counter displayed at the bottom left of the site. Getting this working required three services: API Gateway, Lambda, and DynamoDB. API Gateway handles communication between the frontend and the backend: a bit of JavaScript on the page makes a request to the API. Lambda then handles the request, retrieving and updating the count stored in DynamoDB.

API Gateway can be used in three modes: REST, HTTP, and WebSocket APIs. I went with a simple HTTP API, which was sufficient for my needs. The API accepts the POST, OPTIONS, and GET methods and allows the Content-Type header. I then set up a route that forwards incoming POST requests to Lambda using the AWS_PROXY integration type, so each POST sent to the API triggers the Lambda integration.
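
A Terraform sketch of that configuration might look like the following. The API name, route path, and allowed origin are placeholders, and the Lambda function it points to is defined in the next section:

```hcl
resource "aws_apigatewayv2_api" "visitor_count" {
  name          = "visitor-count-api" # placeholder name
  protocol_type = "HTTP"

  # Allow the site's frontend to call the API from the browser
  cors_configuration {
    allow_origins = ["https://example.com"] # placeholder domain
    allow_methods = ["GET", "POST", "OPTIONS"]
    allow_headers = ["content-type"]
  }
}

# AWS_PROXY integration that hands the request to the Lambda function
resource "aws_apigatewayv2_integration" "visitor_count" {
  api_id                 = aws_apigatewayv2_api.visitor_count.id
  integration_type       = "AWS_PROXY"
  integration_uri        = aws_lambda_function.visitor_count.invoke_arn
  payload_format_version = "2.0"
}

# POST requests on this route trigger the Lambda integration
resource "aws_apigatewayv2_route" "visitor_count" {
  api_id    = aws_apigatewayv2_api.visitor_count.id
  route_key = "POST /count" # placeholder path
  target    = "integrations/${aws_apigatewayv2_integration.visitor_count.id}"
}

resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.visitor_count.id
  name        = "$default"
  auto_deploy = true
}
```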

The Lambda function, written in Python, updates the page count stored in DynamoDB and returns the updated value. The JavaScript function mentioned earlier then takes that value and inserts it into the site.
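
Staying on the Terraform side, here is roughly how the table and function could be declared. The Python handler code itself lives in a separate file and is not shown, the execution role is omitted, and the names and paths are placeholders:

```hcl
# Table that stores the running visit count
resource "aws_dynamodb_table" "visitor_count" {
  name         = "visitor-count" # placeholder name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

# Zip up the handler source (hashicorp/archive provider)
data "archive_file" "visitor_count" {
  type        = "zip"
  source_file = "${path.module}/lambda/visitor_count.py" # placeholder path
  output_path = "${path.module}/lambda/visitor_count.zip"
}

# Python function that increments the count and returns the new value
resource "aws_lambda_function" "visitor_count" {
  function_name    = "visitor-count" # placeholder name
  runtime          = "python3.12"
  handler          = "visitor_count.handler"
  role             = aws_iam_role.visitor_count.arn # execution role (not shown)
  filename         = data.archive_file.visitor_count.output_path
  source_code_hash = data.archive_file.visitor_count.output_base64sha256

  environment {
    variables = {
      TABLE_NAME = aws_dynamodb_table.visitor_count.name
    }
  }
}

# Allow API Gateway to invoke the function
resource "aws_lambda_permission" "apigw" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.visitor_count.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_api.visitor_count.execution_arn}/*/*"
}
```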


CI/CD

Instead of applying changes to AWS from my own computer, I built a CI/CD pipeline to automate infrastructure testing and provisioning. Forrest suggested using GitHub Actions; however, I opted for AWS CodePipeline. The pipeline has four stages: Source, Validate, Plan, and Apply.

The Source stage uses a CodeStar Connection to my GitHub repository and copies the entire repository into a source artifact. That artifact is passed to the Validate stage, which validates the Terraform code. Once validation passes, the Plan stage runs the “terraform plan” command. Finally, once all checks have passed, the Apply stage applies the infrastructure.
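
Here is a trimmed Terraform sketch of the pipeline showing the Source and Validate stages; the Plan and Apply stages follow the same CodeBuild pattern. The pipeline role, artifact bucket, CodeBuild projects, and repository name are placeholders or omitted:

```hcl
# Connection to GitHub (has to be finished by hand in the console after creation)
resource "aws_codestarconnections_connection" "github" {
  name          = "github-connection"
  provider_type = "GitHub"
}

resource "aws_codepipeline" "terraform" {
  name     = "resume-terraform-pipeline"      # placeholder name
  role_arn = aws_iam_role.codepipeline.arn    # pipeline role (not shown)

  artifact_store {
    type     = "S3"
    location = aws_s3_bucket.artifacts.bucket # artifact bucket (not shown)
  }

  # Copy the repository into a source artifact via the CodeStar Connection
  stage {
    name = "Source"
    action {
      name             = "GitHub"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeStarSourceConnection"
      version          = "1"
      output_artifacts = ["source_output"]
      configuration = {
        ConnectionArn    = aws_codestarconnections_connection.github.arn
        FullRepositoryId = "example-user/cloud-resume" # placeholder repo
        BranchName       = "main"
      }
    }
  }

  # Validate runs a CodeBuild project against the source artifact;
  # Plan and Apply are additional stages built the same way.
  stage {
    name = "Validate"
    action {
      name            = "TerraformValidate"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source_output"]
      configuration = {
        ProjectName = aws_codebuild_project.validate.name # project (not shown)
      }
    }
  }
}
```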


Terraform

As I mentioned earlier, I opted to use Terraform as my IaC tool instead of CloudFormation or SAM. Before I could start provisioning anything, a bit of setup was needed. First, I installed and configured the AWS CLI so that Terraform had credentials to provision infrastructure in my AWS account from my local machine. Once access was configured, I had to create some basic infrastructure to make the pipeline work.
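
Assuming credentials come from the AWS CLI profile, the initial provider setup is only a few lines; the region and version constraint here are placeholders:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Credentials are picked up from the AWS CLI configuration ("aws configure")
provider "aws" {
  region = "us-east-1" # placeholder region
}
```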

Terraform keeps track of the resources it manages in a state file, a JSON-formatted file it generates for the environment. The state file needs to be treated as confidential, since it can contain sensitive information about your environment, which means it should not be committed to GitHub. Instead, I created an S3 bucket to store the state remotely and a DynamoDB table for state locking, which also allows CodePipeline to run Terraform and provision infrastructure.
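
For reference, a minimal remote backend configuration looks something like this; the bucket, key, and table names are placeholders for the ones I actually created:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"        # placeholder state bucket
    key            = "cloud-resume/terraform.tfstate" # placeholder state path
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"                # placeholder lock table
    encrypt        = true
  }
}
```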