Streamlining CI/CD with GitHub Actions: Provision and Deploy Infrastructure Seamlessly Using Terraform

AzeematRaji - Oct 17 - Dev Community

Terraform:
Terraform is an open-source Infrastructure as Code (IaC) tool used to automate and manage cloud infrastructure. It allows you to define infrastructure resources (like servers, databases, and networks) in declarative configuration files and then provision them consistently across different environments.

Kubernetes:
Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. It helps ensure high availability, scalability, and efficient resource utilization across clusters of machines.

GitHub Actions:
GitHub Actions is a CI/CD platform that allows you to automate workflows directly within your GitHub repository. It enables tasks like building, testing, and deploying code whenever specific events occur (e.g., push or pull requests). You can define workflows using YAML files to handle continuous integration and deployment pipelines.

These tools work well together to streamline infrastructure management and application deployment.

Objectives

  1. Improve deployment scalability and reliability: Use Kubernetes to ensure that applications can scale automatically.

  2. Streamline CI/CD pipeline: Create a seamless CI/CD pipeline using GitHub Actions to automate infrastructure and application deployments efficiently, reducing manual intervention.

  3. Use Terraform to:

  • Ensure fully automated, consistent, reproducible, and reliable infrastructure deployments across environments.

  • Leverage its unified workflow and lifecycle-management features so updates and scaling stay simple.

Prerequisites

Step-by-step guide
Let's run the code manually before wiring it into the CI/CD workflow.

  • Configure your AWS credentials so that the AWS CLI can authenticate and interact with your AWS account
aws configure
  • Create a terraform directory and cd into it
mkdir terraform
cd terraform
  • Create terraform configuration files

providers.tf contains all the providers needed for this project

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.30.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = var.region
}


variables.tf defines input variables

variable "region" {}
variable "cluster_name" {}
variable "vpc_name" {}
variable "vpc_cidr_block" {}
variable "private_subnet_cidr_blocks" {}
variable "public_subnet_cidr_blocks" {}

main.tf contains:

  • Modules for the VPC and EKS cluster: community modules create the Virtual Private Cloud (VPC) and the Amazon EKS cluster, keeping the code simpler and more concise.

  • Kubernetes provider: defines the Kubernetes provider so Terraform can deploy Kubernetes resources onto the newly created cluster, all within the same configuration. This gives a unified workflow and full lifecycle management.

  • Kubernetes provider authentication: an exec block in the provider configuration calls the AWS CLI to fetch a token on demand, since EKS authentication tokens expire after about 15 minutes.

# Filter out local zones, which are not currently supported 
# with managed node groups

data "aws_availability_zones" "available" {
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}


module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.13.0"

  name = var.vpc_name

  cidr = var.vpc_cidr_block
  azs  = slice(data.aws_availability_zones.available.names, 0, 3)

  private_subnets = var.private_subnet_cidr_blocks
  public_subnets  = var.public_subnet_cidr_blocks

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  # This enables automatic public IP assignment for instances in public subnets
  map_public_ip_on_launch = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.24.2"

  cluster_name    = var.cluster_name
  cluster_version = "1.29"

  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true


  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.public_subnets

  # Additional security group rules
  node_security_group_additional_rules = {
    allow_all_traffic = {
      description                  = "Allow traffic from the internet"
      protocol                     = "-1"  # Allow all protocols
      from_port                    = 0
      to_port                      = 65535  # Allow all ports
      cidr_blocks                  = ["0.0.0.0/0"]  # Allow from anywhere
      type                         = "ingress"
    }
  }  


  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"

  }

  eks_managed_node_groups = {
    one = {
      name = "node-group-1"

      instance_types = ["t2.micro"]

      min_size     = 1
      max_size     = 2
      desired_size = 1
    }

    two = {
      name = "node-group-2"

      instance_types = ["t2.micro"]

      min_size     = 1
      max_size     = 2
      desired_size = 1
    }
  }
}



# Retrieve EKS cluster information and ensure data source waits for cluster to be created

data "aws_eks_cluster" "myApp-cluster" {
  name = module.eks.cluster_name
  depends_on = [module.eks]
}

data "aws_eks_cluster_auth" "myApp-cluster" {
  name = module.eks.cluster_name
}

#Kubernetes provider for Terraform to connect with AWS EKS Cluster

provider "kubernetes" {

  host                   = data.aws_eks_cluster.myApp-cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.myApp-cluster.certificate_authority[0].data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
    command     = "aws"
  }
}

#Kubernetes resources in Terraform

resource "kubernetes_namespace" "terraform-k8s" {

  metadata {
    name = "terraform-k8s"
  }
}

resource "kubernetes_deployment" "nginx" {

  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.terraform-k8s.metadata[0].name

  }

  spec {
    replicas = 2
    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:1.21.6"

          port {
            container_port = 80
          }

          resources {
            limits = {
              cpu    = "0.5"
              memory = "512Mi"
            }
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            }
          }
        }
      }
    }
  }
}



resource "kubernetes_service" "nginx" {

  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.terraform-k8s.metadata[0].name
  }

  spec {
    selector = {
      app = kubernetes_deployment.nginx.spec[0].template[0].metadata[0].labels.app
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

outputs.tf defines output values that are displayed after the deployment completes

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane"
  value       = module.eks.cluster_security_group_id
}

output "region" {
  description = "AWS region"
  value       = var.region
}

output "cluster_name" {
  description = "Kubernetes Cluster Name"
  value       = module.eks.cluster_name
}

output "nginx_load_balancer_hostname" {
  description = "Load balancer DNS name for accessing nginx from a browser"
  value       = kubernetes_service.nginx.status[0].load_balancer[0].ingress[0].hostname
}

terraform.tfvars assigns values to the input variables defined in variables.tf. Because it can contain sensitive or environment-specific settings, it should not be pushed to a public repository; keep it out of version control and supply the values through your repository's secret variables so sensitive information stays secure.
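
A minimal sketch of what it might contain (the values below are placeholders; adjust them for your environment):

region                     = "us-east-1"
cluster_name               = "myApp-cluster"
vpc_name                   = "myApp-vpc"
vpc_cidr_block             = "10.0.0.0/16"
private_subnet_cidr_blocks = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnet_cidr_blocks  = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]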

  • Run the Terraform commands, as shown below

terraform init initializes the working directory, downloading the providers and modules the configuration requires.

terraform plan previews the changes to be added or removed, essentially a dry run of what terraform apply will do, so you can review before applying.

terraform apply -auto-approve applies the changes without prompting for confirmation.
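
For example, from inside the terraform directory:

terraform init
terraform plan
terraform apply -auto-approve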

  • Confirm the cluster is created and the application is deployed on it. This can be done in two ways:

AWS console - check that the cluster is running and the nodes are healthy, find the load balancer's DNS name, and open it in a browser; the nginx welcome page should be displayed.
nginx running

Using kubectl - install kubectl, point it at the cluster with aws eks --region us-east-1 update-kubeconfig --name <cluster_name>, then run kubectl commands to inspect the deployment, service, pods and other resources, e.g. kubectl get all -n <namespace>.
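
For example, using the namespace created in main.tf:

aws eks --region us-east-1 update-kubeconfig --name <cluster_name>
kubectl get all -n terraform-k8s
kubectl get svc nginx -n terraform-k8s   # EXTERNAL-IP shows the load balancer DNS name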

  • Run terraform destroy -auto-approve; once you've confirmed the configuration works as expected, it is safe to tear everything down.

  • CI/CD workflow - to run this fully automated, create a .github/workflows directory with two YAML files in it: one to run terraform apply and the other to destroy the infrastructure when you're done.
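
For example (the file names match the ones used later in this post):

mkdir -p .github/workflows
touch .github/workflows/terraform.yml .github/workflows/terraform-destroy.yml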

terraform.yml - save your AWS access key, secret key, and region as repository secrets (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION) so the workflow can authenticate.

# This is a basic workflow to help you deploy nginx on EKS using terraform

name: Terraform EKS Deployment

# Controls when the workflow will run
on:
  # Triggers the workflow on push events but only for the "main" branch
  push:
    branches: [ "main" ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains multiple jobs terraform,
  terraform:
    name: Deploy EKS Cluster
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
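    # Note: this job assumes the Terraform configuration sits at the repository root.
    # If it lives in a subdirectory (e.g. terraform/), uncomment the block below so
    # the init/plan/apply steps run in that directory:
    # defaults:
    #   run:
    #     working-directory: terraform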

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Step 1: Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Checkout Code
        uses: actions/checkout@v4

      # Step 2: Setup AWS Credentials
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      # Step 3: Setup Terraform
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.9.6

      # Step 4: Terraform Init
      - name: Terraform Init
        run: terraform init

      # Step 5: Terraform Plan
      - name: Terraform Plan
        run: terraform plan

      # Step 6: Terraform Apply
      - name: Terraform Apply
        id: apply
        run: terraform apply -auto-approve


terraform-destroy.yml needs access to the .tfstate file so it can delete the resources that were created, such as the cluster. Instead of pushing .tfstate to the repository (which is not secure), store it remotely; a recommended option is an S3 bucket configured as the remote backend for your Terraform state.

To do this, create a backend.tf file that points Terraform at the S3 bucket for storing the .tfstate file.

terraform {
  backend "s3" {
    bucket = "terraform-statefile-backup-storage"
    key    = "eks-cluster/terraform.tfstate"
    region = "us-east-1"
  }
}
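
If you already applied with local state before adding the backend, re-running init should move the existing state into the bucket (assuming the bucket above already exists):

terraform init -migrate-state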

terraform-destroy.yml is set to run manually (via workflow_dispatch) instead of running on push

name: Terraform Destroy EKS Cluster

on:
  workflow_dispatch: # Manually triggered workflow

jobs:
  terraform:
    name: Destroy EKS Cluster
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.9.6

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan Destroy
        run: terraform plan -destroy

      - name: Terraform Destroy
        run: terraform destroy -auto-approve
  • Push to GitHub, and terraform.yml will run automatically on the push
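
For example:

git add .
git commit -m "add EKS Terraform config and workflows"
git push origin main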

successful terraform workflow

  • Confirm by opening your load balancer's DNS name in a browser

nginx running

  • Manually trigger the terraform-destroy.yml workflow from the Actions tab

successful terraform-destroy workflow

Conclusion

  • Time-Saving: Automating the deployment process saves time and effort, making it easier to manage infrastructure.

  • Consistency: Automation leads to more reliable deployments, reducing mistakes and ensuring everything works as expected.

  • Scalability: Automated workflows can easily grow with your project, allowing for faster updates without losing quality.

  • Better Teamwork: Integrating tools like Terraform and Kubernetes with GitHub Actions helps team members collaborate more effectively.

  • Flexibility: A well-defined CI/CD pipeline allows teams to quickly adjust to changes, improving overall project speed and adaptability.

Check out my GitHub
