Building a Gitlab CI/CD Pipeline for AWS EKS Cluster Deployment

Introduction

In this article, I will show how to deploy a sample React application from a source repository on GitLab to an AWS EKS Kubernetes cluster using GitLab CI.

Pre-requisites

To follow the practical example below, it helps to have (at least) the following background knowledge:

  • Basic knowledge of Git commands and working with a remote Git repository (in this example, GitLab).

  • Basic knowledge of Docker file structure and the role of Dockerfile in building your application as a container image.

  • Understanding how the deployment works in a Kubernetes Environment.

  • What CI/CD is and how it can be used to automate the deployment process.

Abstract

In this section, I will explain a little bit about the project and the workflow I'm trying to implement. Suppose we have a React application hosted in our GitLab project repository. We will first create a Dockerfile to containerize that application, then add a CI/CD workflow to the project's repository. For that, we will need a .gitlab-ci.yml file holding our CI/CD configuration. In this config file, we will define our first job to build a container image and store that image in our own GitLab container registry. The next job will deploy our application as a Kubernetes Helm chart into our AWS EKS cluster.
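
The two-job workflow described above maps onto a minimal .gitlab-ci.yml skeleton like the following. This is only a sketch of the structure; the real job definitions appear later in the article:

```yaml
# Minimal pipeline skeleton: build the image first, then deploy it.
stages:
  - build
  - deploy

build-docker:
  stage: build
  script:
    - echo "build and push the container image here"

deploy:
  stage: deploy
  script:
    - echo "helm upgrade --install goes here"
```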

Demo

Building Docker image

Building a Docker image on our local system lets us test our application code manually. You must first install the Docker daemon on your local machine. If you have a Mac or Windows computer, I recommend Docker Desktop because it is simple and convenient. If you're using Linux, you can install Docker with Docker's convenience script.

  • First, I will clone my git repo containing the source application code.

            git clone https://gitlab.com/hhw_sharing/gitlab_to_eks
            cd gitlab_to_eks

  • Then, I will build a Docker image from our Node.js source code.

            docker build -t nodejs_test src

  • You can test run a docker container instance in your local machine using the below command.

            docker run -itd -p 3000:3000 nodejs_test

  • You may now navigate to http://localhost:3000 in your browser and verify that your application is operational (the root path returns a hello world).

  • Now that we've confirmed that our code runs successfully in a Docker container, we'll set up a CI/CD pipeline to deploy our application as a Kubernetes Helm chart to an AWS EKS cluster every time we push code to our source repository.
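
For reference, a Dockerfile for a Node.js app of this kind might look roughly like the following. This is a sketch, not necessarily the exact file from the repository:

```dockerfile
# Sketch of a Dockerfile for a Node.js app listening on port 3000.
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm install

# Copy the rest of the source and expose the app port.
COPY . .
EXPOSE 3000

CMD ["npm", "start"]
```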

Creating a Helm Chart

In our AWS EKS cluster, we will deploy our application as a helm chart. If you don't already know what helm is, here's a quick rundown.

A Helm chart is a collection of templates and values files that define a set of Kubernetes resources to be deployed to a cluster. The templates are written in Helm's template language, which is built on Go templates. The values files allow users to configure the templates with custom values.

Helm charts are designed to be versioned and easily installed, upgraded, and deleted using the Helm command-line tool. They provide a convenient way to package, distribute, and manage complex Kubernetes applications.

Using the helm create command, you can scaffold a pre-configured Helm chart directory.

Let's have a look at our helm deployment folder to get a better understanding of the architecture we're using. This is how our deployment folder will look:

deployment
└── test-nodejs
    ├── Chart.yaml
    ├── templates
    │   ├── _helpers.tpl
    │   ├── deployment.yaml
    │   ├── hpa.yaml
    │   └── service.yaml
    └── values.yaml

Because this is a sample Helm chart directory, I removed some manifest files that would not be used in our example. This Helm chart directory only includes the necessary Kubernetes objects: a deployment and a service, plus an hpa.yaml file for autoscaling our deployment replicas. For more advanced use cases, you would also add a ServiceAccount for scoped resource access, an Ingress for domain mapping, and so on.
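
Because our deploy job later overrides the namespace and image via --set, the chart's values.yaml needs matching keys. Here is a sketch of what it might contain; the key names beyond namespace, image, and imagePullSecrets are assumptions, and the real file may differ:

```yaml
# Sketch of values.yaml. `namespace` and `image` are overridden by the
# pipeline via --set, so these are just defaults for local testing.
namespace: sample-nodejs
image: registry.gitlab.com/hhw_sharing/gitlab_to_eks:main

replicaCount: 1

service:
  type: ClusterIP
  port: 3000

imagePullSecrets:
  - name: registry-credentials
```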

CI/CD Configuration

Setting up variables

To configure the CI/CD pipeline, we will first set up the essential environment variables as well as global variables in our CI/CD config file. We don't use any environment variables for the build environment in this case, so only global variables are defined. Our .gitlab-ci.yml file should look like this:

variables:
  CONTAINER_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_BRANCH
  NAMESPACE: sample-nodejs
  BUILD_FOLDER: src
  DEPLOYMENT_FOLDER: deployment/test-nodejs
  AGENT_NAME: hhw_sharing/gitlab_to_eks:hhw-test-agent
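
To make the image tag concrete, here is a local simulation of how GitLab's predefined variables expand into CONTAINER_IMAGE. The two CI_ values below are assumed for this repository; in a real pipeline GitLab injects them automatically:

```shell
# Simulate GitLab's predefined CI variables locally (assumed values).
CI_REGISTRY_IMAGE=registry.gitlab.com/hhw_sharing/gitlab_to_eks
CI_COMMIT_BRANCH=main

# This mirrors the CONTAINER_IMAGE definition in .gitlab-ci.yml.
CONTAINER_IMAGE="$CI_REGISTRY_IMAGE:$CI_COMMIT_BRANCH"
echo "$CONTAINER_IMAGE"
```

So a push to main produces an image tagged registry.gitlab.com/hhw_sharing/gitlab_to_eks:main, and each branch gets its own tag.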

Setting up a Kubernetes Cluster Agent

You must register an agent in your source repository to access your EKS cluster from your GitLab repository. Then, install the GitLab agent on your cluster. GitLab recommends installation using Helm for non-advanced users.

From the left menu of the GitLab UI, you can register an agent in your repository from Infrastructure -> Kubernetes clusters -> Connect a cluster, give your agent a name of your choice, and then choose the "create agent" button.

You may then install the agent in your cluster by following the steps in the GitLab pop-up menu. If you are experiencing trouble installing, you should also look at this documentation page.
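
The registered agent reads its configuration from a file in the repository at .gitlab/agents/<agent-name>/config.yaml. For the project hosting the agent configuration, CI/CD access is typically available without extra configuration; the ci_access block is mainly for sharing the agent with other projects. A sketch using the agent name from this article (adjust the project path to yours):

```yaml
# .gitlab/agents/hhw-test-agent/config.yaml
# Grant CI/CD jobs in the listed project access to the cluster via this agent.
ci_access:
  projects:
    - id: hhw_sharing/gitlab_to_eks
```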

Build job

build-docker:
  image: docker:stable
  services:
  - docker:18.09-dind
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker build --no-cache -t $CONTAINER_IMAGE $BUILD_FOLDER
    - docker push $CONTAINER_IMAGE
  only:
    - main

  • In the build job, I used the GitLab container registry to store our Docker image.

  • I logged into the GitLab container registry with the default GitLab CI job token ($CI_JOB_TOKEN), then pushed our built Docker image using the CONTAINER_IMAGE variable defined above.

Deploy job

deploy:
  stage: deploy
  image:
    name: dtzar/helm-kubectl
    entrypoint: [""]
  before_script:
    - kubectl config get-contexts
    - kubectl config use-context $AGENT_NAME
  script:
    - helm upgrade --install ${NAMESPACE} --namespace ${NAMESPACE} --create-namespace -f ${DEPLOYMENT_FOLDER}/values.yaml --set namespace=$NAMESPACE --set image=${CONTAINER_IMAGE} ${DEPLOYMENT_FOLDER}
  dependencies:
    - build-docker
  only:
    - main

  • In the deploy job, I used the dtzar/helm-kubectl base image because it already has kubectl and helm installed.

  • Before running the real command, we first list the available Kubernetes contexts and then switch to the context of our previously registered agent.

  • Then we create our application using the Helm command line tool helm upgrade --install and set the namespace and image version.

  • Our CI/CD configuration file is finished; however, deploying it as-is will not work because of one small missing piece: imagePullSecrets. We need to supply registry credentials so the cluster can pull the image from our GitLab registry.

  • Using the following command, you can create a docker-registry type secret on your Kubernetes cluster (if you use two-factor authentication, use a personal access token or deploy token instead of your password):

kubectl create secret docker-registry registry-credentials -n <your-namespace> --docker-server=registry.gitlab.com --docker-username=<your-gitlab-username> --docker-password=<your-gitlab-password>

  • Then you must tell the deployment to use the created secret by updating the imagePullSecrets section in the deployment.yaml file:
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
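
The template above pulls the secret name from the chart's values, so the corresponding values.yaml entry would look something like this, assuming the secret name registry-credentials created earlier:

```yaml
# values.yaml: reference the docker-registry secret created with kubectl.
imagePullSecrets:
  - name: registry-credentials
```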

Validation

You may now test your application's functionality using port-forwarding. In a production environment, you would typically use a LoadBalancer service type or an Ingress resource instead, which usually requires a domain provider and a DNS record pointing your domain at the load balancer endpoint the Ingress creates. For simplicity, I will test our application by port-forwarding our ClusterIP type service:

kubectl port-forward -n <namespace> svc/<service-name> <local-port>:<remote-port>

  • You may now test with curl localhost:<local-port> or open that address in your browser.

Summary

In this article, I demonstrated how to deploy our sample Node.js application as a Docker container and as a Kubernetes Helm chart. I also showed how to set up a CI/CD pipeline in GitLab that updates our deployment whenever the source code repository changes.

Reference

For code and CI/CD setup, you can refer to my public Gitlab repository here.