Automate Deployment to AWS

  1. GitHub Repository Link

  2. Install Terraform

  3. Install AWS CLI

  4. GitLab Account

  5. AWS IAM Account with the necessary permissions

» What are we doing?

In this article, we’ll cover:

  • How to set up your Terraform configuration files.
  • How to configure GitLab CI/CD for automated deployments.
  • How to securely manage your AWS credentials.
  • How to create and deploy a VPC and an EC2 instance automatically.
  • How to validate, plan, and apply your Terraform configurations using GitLab CI/CD.

But before we move ahead, make sure you have everything you need to follow along. Here’s what you’ll need:

  • An AWS Account

  • Terraform Installed

  • GitLab Account

  • AWS Credentials: We’ll be using AWS access and secret keys to authenticate Terraform and GitLab CI/CD with AWS. Make sure you have your AWS Access Key ID and Secret Access Key ready.

So, let’s get started!

» Access and Secret Keys

Log in to your AWS account.

Go to the IAM dashboard.

Then head to the Quick Links section and click on My security credentials.

Scroll down to the Access keys section and click Create access key.

Then download the CSV file, or copy and save your keys somewhere safe.

» Configuring AWS.

We have to set up the necessary AWS credentials and permissions to ensure our Terraform and GitLab CI/CD pipeline can interact with AWS securely.

For that, you will need the access and secret keys from the previous step.

Once you have them, open your terminal.

Run the command aws configure.

Then enter your Access Key ID, Secret Access Key, a default region name (e.g., us-east-1), and json as the default output format.

aws configure

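The interactive prompts look roughly like this (the values shown are placeholders, not real keys):

AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-1
Default output format [None]: json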

» Set Up the Terraform Project and Its Configuration Files.

In the terminal, create a new project directory and navigate inside it. In the root, create four files named main.tf, provider.tf, backend.tf, and variables.tf, then create two folders named vpc and web with their own files, and open the project in VS Code. The expected layout is shown after the commands below.

If you get confused anywhere, look at the GitHub repository; the link is given above.

mkdir terraform-cicd
cd terraform-cicd
touch main.tf
touch backend.tf
touch variables.tf
touch provider.tf

mkdir vpc
cd vpc
touch main.tf
touch variables.tf
touch outputs.tf
cd ..

mkdir web
cd web
touch main.tf
touch variables.tf
cd ..
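When you’re done, the layout should look like this:

terraform-cicd/
├── main.tf
├── provider.tf
├── backend.tf
├── variables.tf
├── vpc/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── web/
    ├── main.tf
    └── variables.tf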

Create the VPC module:

// vpc/main.tf file
# Create a VPC
resource "aws_vpc" "myvpc" {
  cidr_block = var.vpc_cidr_block 
  enable_dns_support = true
  enable_dns_hostnames = true

  tags = {
    Name = "myvpc"
  }
}

# Create a public subnet
resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.myvpc.id
  cidr_block              = var.public_subnet_cidr_block
  map_public_ip_on_launch = true
  availability_zone       = var.availability_zone

  tags = {
    Name = "public_subnet"
  }
}

# Security Group
resource "aws_security_group" "public_sg" {
  vpc_id      = aws_vpc.myvpc.id
  name        = "public_sg"
  description = "Public Security Group"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = var.security_group_ingress_cidr
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = var.security_group_egress_cidr
  }
}
// vpc/outputs.tf file
output "public_subnet_id" {
  description = "The ID of the public subnet"
  value       = aws_subnet.public_subnet.id
}

output "public_security_group_id" {
  description = "The ID of the public security group"
  value       = aws_security_group.public_sg.id
}
// vpc/variables.tf file
variable "vpc_cidr_block" {
  description = "The CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "public_subnet_cidr_block" {
  description = "The CIDR block for the public subnet"
  type        = string
  default     = "10.0.1.0/24"
}

variable "availability_zone" {
  description = "The availability zone for the public subnet"
  type        = string
  default     = "us-east-1a"
}

variable "security_group_ingress_cidr" {
  description = "The CIDR block for ingress traffic to the security group"
  type        = list(string)
  default     = ["0.0.0.0/0"]
}

variable "security_group_egress_cidr" {
  description = "The CIDR block for egress traffic from the security group"
  type        = list(string)
  default     = ["0.0.0.0/0"]
}

Create the web module (EC2 instance):

// web/main.tf
# EC2 instance
resource "aws_instance" "server" {
  ami                    = "<replace-with-ami-id>"
  instance_type          = "t2.micro"
  subnet_id              = var.subnet_id
  vpc_security_group_ids = [var.security_group_id]

  tags = {
    Name = "myserver"
  }
}

// web/variables.tf
variable "subnet_id" {
  description = "The ID of the subnet where the instance will be deployed"
  type        = string
}

variable "security_group_id" {
  description = "The ID of the security group for the instance"
  type        = string
}
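The AMI ID above is left as a placeholder. If you are unsure which one to use, one option (my suggestion, not a requirement of this setup) is to look up the latest Amazon Linux 2 AMI for your region via the AWS CLI’s public SSM parameter:

aws ssm get-parameters \
    --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
    --region us-east-1 \
    --query 'Parameters[0].Value' \
    --output text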

Now we will join everything together by calling the VPC and web modules in the main.tf file present in the root.

//  Root main.tf

module "vpc" {
    source = "./vpc"
}

module "ec2" {
    source            = "./web"
    subnet_id         = module.vpc.public_subnet_id
    security_group_id = module.vpc.public_security_group_id
}

Then we have the provider.tf file, which specifies the AWS provider configuration, and the root variables.tf file, which defines the region variable the provider uses.

// Root provider.tf
provider "aws" {
  region = "us-east-1"
}
// Root variables.tf
variable "region" { 
  type = string
  default = "us-east-1"
}
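
Because the region comes from a variable, you can override it without editing the code, for example (a hypothetical override; us-east-1 stays the default):

terraform plan -var="region=us-west-2"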

The last thing we need is a .gitignore file for Terraform, because we don’t want our local .terraform directories or state files to be pushed with our code. So open the link given below, copy all of its contents, and paste them into a .gitignore file in the project root.

https://github.com/github/gitignore/blob/main/Terraform.gitignore
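For reference, the important entries in that file look roughly like this (abridged; the upstream file may change over time):

# Local .terraform directories
**/.terraform/*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log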

» Test the Terraform Code

Now let’s test our Terraform code before pushing it to GitLab.

Open a terminal. If you haven’t configured the AWS CLI yet, run aws configure and provide your access and secret keys. Then run:

terraform init

This will initialize the modules we created and generate a .terraform folder, and you should see the message: Terraform has been successfully initialized!

Now let’s validate our code. For that, run:

terraform validate 

And we should get the message: Success! The configuration is valid.

Now let’s check what changes our configuration will make. There should be four resources to create: the VPC, the subnet, the security group, and the EC2 instance.

So run the command:

terraform plan

And we should see the message: Plan: 4 to add, 0 to change, 0 to destroy.

Then, to create and update our infrastructure based on the configuration files, run the command:

terraform apply -auto-approve

The -auto-approve flag applies the changes without the need to manually type yes.

Now head to the EC2 console in AWS, go to running instances, and you should see one instance running with the name myserver.

Perfect.

Let’s destroy everything for now by running the command:

terraform destroy -auto-approve

» S3 Bucket and DynamoDB Table

Now we need to create an S3 bucket and a DynamoDB table for managing the Terraform state.

You can either create these manually in the AWS console or run these commands in the terminal:

For the S3 bucket, run:

aws s3api create-bucket --bucket <replace-with-bucket-name> --region us-east-1

Then ensure versioning is enabled on the bucket for better state management:

aws s3api put-bucket-versioning --bucket <replace-with-bucket-name> --versioning-configuration Status=Enabled

For the DynamoDB table, run:

aws dynamodb create-table \
    --table-name <replace-with-table-name> \
    --attribute-definitions AttributeName=LockID,AttributeType=S \
    --key-schema AttributeName=LockID,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1

NOTE: Make sure to replace <replace-with-bucket-name> and <replace-with-table-name> with the names you want. Keep in mind that S3 bucket names must be globally unique.
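If you want to confirm that both resources exist before moving on (an optional sanity check), you can run:

aws s3api head-bucket --bucket <replace-with-bucket-name>
aws dynamodb describe-table --table-name <replace-with-table-name> --query 'Table.TableStatus'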

» Set up S3 state backend.

The backend.tf file (present in the root directory) is a crucial part of our Terraform configuration. It sets up remote state management using an S3 bucket and a DynamoDB table.

It configures Terraform to use a remote backend for storing the state file, so the state doesn’t live only on your machine and can be shared safely between your laptop and the CI/CD pipeline, with the DynamoDB table providing state locking.

terraform {
  backend "s3" {
    bucket         = "<replace-with-bucket-name>"
    key            = "state"
    region         = "us-east-1"
    dynamodb_table = "<replace-with-table-name>"
  }
}

After this, we need to initialize Terraform again, so open the terminal and run the command:

terraform init

And we should get a message along the lines of: Successfully configured the backend "s3"!
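If Terraform detects existing local state, it may ask whether to copy it to the new backend; answering yes is usually what you want here. If it refuses because the backend configuration changed, re-running init with the migration flag should help (exact behaviour depends on your Terraform version):

terraform init -migrate-state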

And that’s all. Now we need to automate all of this, so we will start off by pushing our code to GitLab.

» Push Code to GitLab

Log in to your GitLab account.

Create a GitLab repository:

Click on the ‘New project’ button and select ‘Create blank project’.

Then enter a project name, anything that you want.

And at last, click on ‘Create project’.

Next, we need to push our Terraform configuration files to this newly created GitLab repository.

For that, head to VS Code, open the terminal, and initialize a new Git repository in the project directory.

So run the command:

git init

Add the GitLab repository as a remote:

git remote add origin <your-repository-url>

We don’t want our code to be pushed straight to the main branch; instead, we will create a new branch named dev.

So for that run:

git checkout -b dev

And you should get the message: Switched to a new branch 'dev'.

Now let’s add all the files to the repository and commit:

git add .
git commit -m "Initial commit"

Push the files to GitLab:

git push -u origin dev

It will then ask for your username and password, so provide those. Then head over to the GitLab website.

There, GitLab will prompt you to create a merge request.

Write a title and description, then create the merge request.

Once it’s created, approve and merge it.

» Configure GitLab CI/CD Variables

To securely manage our AWS credentials, we need to add them as environment variables in GitLab:

  1. In your GitLab project, go to ‘Settings’ and then ‘CI/CD’.
  2. Expand the ‘Variables’ section.
  3. Add the following variables:
    • MY_AWS_ACCESS_KEY: Your AWS Access Key ID
    • MY_AWS_SECRET_KEY: Your AWS Secret Access Key

» CI/CD pipeline

Now let’s set up our GitLab CI/CD pipeline, which is going to automate the deployment of our infrastructure to AWS.

Create a .gitlab-ci.yml file in the root of your repository and add the following:

image:
    name: registry.gitlab.com/gitlab-org/gitlab-build-images:terraform
    entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
    AWS_ACCESS_KEY_ID: ${MY_AWS_ACCESS_KEY}
    AWS_SECRET_ACCESS_KEY: ${MY_AWS_SECRET_KEY}
    AWS_DEFAULT_REGION: "us-east-1"

before_script:
    - terraform --version
    - terraform init

stages:
    - validate
    - plan
    - apply
    - destroy


validate:
    stage: validate
    script:
        - terraform validate

plan:
    stage: plan
    dependencies:
        - validate
    script:
        - terraform plan -out="planfile"
    artifacts:
        paths:
            - planfile

apply:
    stage: apply
    dependencies:
        - plan
    script:
        - terraform apply -input=false "planfile"
    when: manual

destroy:
    stage: destroy
    script:
        - terraform destroy -auto-approve
    when: manual
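
Optionally, if you want the apply and destroy jobs to be available only on the default branch (a common safeguard, not something this pipeline requires), you can swap the job-level when: manual for a rule, for example:

apply:
    stage: apply
    dependencies:
        - plan
    script:
        - terraform apply -input=false "planfile"
    rules:
        - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
          when: manual
          allow_failure: true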

» Run the Pipeline

With everything set up, once you commit and push the .gitlab-ci.yml file, the CI/CD pipeline will start running the jobs. The validate and plan stages run automatically, while apply and destroy wait for a manual trigger.

And that’s it! We’ve successfully set up a GitLab CI/CD pipeline to automate the deployment of our infrastructure to AWS using Terraform.


And that’s all for this blog. I hope you enjoyed it and found it useful. If you have any doubts, please drop a message on our Discord server; the link can be found below.

👉 And I’ll see you in the next one, till then bye-bye and take care.