Continuous Delivery For Docker Containers:
Scenario:
In this project we will continuously build Docker images and deploy them on a Kubernetes cluster. First, let's understand when such projects are implemented. Today most applications are designed on a microservice architecture, and each service ships as a container image. Developers continuously commit code changes to improve the application, so we need to streamline those changes: continuous changes demand a continuous build and test process, regular container builds, and regular deployment requests to the Ops team. The Ops team then uses a container orchestration tool to deploy these container images.
Issue with Current situation:
With continuous code changes the Ops team faces a constant stream of deployment requests; a manual deployment process creates dependencies between teams and is, of course, time consuming.
Solution:
- So primarily we need to automate the build and release process of container images.
- Deployment of these images should be as fast as their build. Whenever developers make a code commit, we should automatically build an image and deploy it to an orchestration environment like Kubernetes.
Tools To Be Used in This Project:
- Kubernetes ⇒ As container orchestration tool.
- Docker ⇒ To build and test our Docker images.
- Jenkins ⇒ As CI/CD server.
- Docker Hub ⇒ To host Docker images.
- Helm ⇒ For packaging and deploying our images to the Kubernetes cluster.
- Git ⇒ As version control system.
- Maven ⇒ To build our Java code.
- SonarQube ⇒ For code analysis.
Architectural Design Of the Project:
Flow of Execution:
Let's start!
First we will create a t2.small instance for the Jenkins server, provisioned with a bash script.
Allow port 8080 in the security group of the Jenkins server.
Second, we will create a SonarQube server (t2.medium instance) with a bash script, and allow port 80 in the security group of the SonarQube instance.
Lastly, we need a t2.micro instance for KOPS.
Below is the documentation to install KOPS:
https://kubernetes.io/docs/setup/production-environment/tools/kops/
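The linked page essentially boils down to downloading the kops and kubectl binaries onto the instance. A condensed sketch (the version numbers are assumptions; pin whatever releases are current for you):

```shell
# Versions are assumptions; check the release pages for current ones.
KOPS_VERSION="v1.28.4"
KUBECTL_VERSION="v1.28.4"

# kops: download the release binary, make it executable, move it onto PATH
curl -Lo kops "https://github.com/kubernetes/kops/releases/download/${KOPS_VERSION}/kops-linux-amd64"
chmod +x kops && sudo mv kops /usr/local/bin/kops

# kubectl: same pattern from the official download endpoint
curl -Lo kubectl "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl

kops version
kubectl version --client
```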
Github Link:
https://github.com/devops-CloudComputing/Continuous_Delivery_For_Docker_Containers
Open the SonarQube server using the instance's public IP and generate a new token there:
Install the SonarQube plugin on Jenkins:
Copy that token, go to the Jenkins server on port 8080, then Manage Jenkins → System → and add the SonarQube server there:
Also allow all traffic from the Jenkins security group to the SonarQube security group, and vice versa.
In the Jenkinsfile we have a stage to push our image to Docker Hub, so get your Docker Hub credentials and store them in Jenkins as we did for SonarQube:
Go to Manage Jenkins → Credentials → Add credentials:
The next step is to install Docker Engine on the Jenkins server:
Follow the steps to install Docker on Ubuntu:
https://docs.docker.com/engine/install/ubuntu/
The reason to install Docker on the Jenkins server is that we will run the docker build command from there. For this, add the jenkins user to the docker group, then reboot the instance once:
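Condensed from the official Ubuntu install docs, the whole step can be sketched as follows (assumes Ubuntu and that Jenkins runs as the jenkins user):

```shell
# Set up Docker's apt repository
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" |
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install the engine itself
sudo apt-get install -y docker-ce docker-ce-cli containerd.io

# Let the jenkins user run docker without sudo, then reboot once
sudo usermod -aG docker jenkins
sudo reboot
```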
Let's install a few more plugins in Jenkins:
After this, log in to the KOPS instance; from here we will create the Kubernetes cluster:
We also need an S3 bucket to maintain the kops state:
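The bucket can be created from any machine with the AWS CLI configured; the name below matches the `--state` value passed to kops later and is otherwise arbitrary:

```shell
# Create the state bucket (name must match the --state flag passed to kops)
aws s3 mb s3://vprofile0-kops-state --region us-east-1

# Versioning is optional but lets you recover earlier cluster state
aws s3api put-bucket-versioning --bucket vprofile0-kops-state \
  --versioning-configuration Status=Enabled
```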
Now we will create an IAM user with admin access and generate access keys for the AWS CLI:
Create a hosted zone in Route 53:
Now we will add the NS server entries at our purchased domain's registrar, e.g. GoDaddy:
Now log in to the EC2 instance and set everything up:
Generate SSH keys on the instance and also install the AWS CLI on it.
Once the AWS CLI is installed, configure it with the IAM user's access keys:
Once kops and kubectl are installed, check the domain:
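The setup on the KOPS instance can be sketched as follows (the domain matches the hosted zone created above):

```shell
# SSH key pair that kops will attach to the cluster nodes
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

# AWS CLI, configured with the IAM user's access keys
sudo apt-get update && sudo apt-get install -y awscli
aws configure   # paste the access key ID, secret key, and default region

# Verify the NS delegation for the cluster's DNS zone
nslookup -type=ns kubepro.mydevopsstar.com
```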
Now we can run the kops command to create the Kubernetes cluster:
⇒ kops create cluster --name=kubepro.mydevopsstar.com --state=s3://vprofile0-kops-state --zones=us-east-1a,us-east-1b --node-count=2 --node-size=t3.small --master-size=t3.medium --dns-zone=kubepro.mydevopsstar.com --node-volume-size=8 --master-volume-size=8
This command generates the cluster configuration and stores it in the S3 bucket.
⇒ kops update cluster --name=kubepro.mydevopsstar.com --state=s3://vprofile0-kops-state --yes --admin
After a few minutes (around 15), run this command to validate the cluster:
⇒ kops validate cluster --state=s3://vprofile0-kops-state
It has created EC2 instances: one master node and two worker nodes:
Now install Helm on this same KOPS instance:
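Helm 3 ships an official installer script; one common way to install it (assumes curl and bash are available):

```shell
# Official Helm 3 installer script; detects the OS/arch and installs the binary
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Confirm the install
helm version
```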
We will run the KOPS instance as a slave to Jenkins. Through Helm we can package all the definition files of our project stack and also deploy them to the Kubernetes cluster. Helm charts are in fact a bundle of all our definition files.
We have installed Helm; now we will set up our repo with all the data and create the Helm chart.
First clone the Git repo on your KOPS instance, then switch to the branch cicd-kubernetes:
⇒ git clone https://github.com/devops-CloudComputing/Continuous_Delivery_For_Docker_Containers.git
⇒ git checkout cicd-kubernetes
There is a directory helm which has the Helm charts already defined.
Below are all our Kubernetes definition files:
In the vproappdep.yaml file, change the static Docker image name to a variable, so that we can pass its value when we run the helm command:
Now it is time to test these Helm charts:
Create a new namespace for this in the Kubernetes cluster:
You will see Helm has created services and pods in the Kubernetes cluster in the test namespace:
The complete stack is deployed with a single command: helm install.
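That single command looks roughly like this; the chart path, release name, and the appimage variable are assumptions — match them to the repo's helm/ directory and the variable you introduced in vproappdep.yaml:

```shell
# New namespace for the test deployment
kubectl create namespace test

# Install the whole stack; --set overrides the image variable in the chart
helm install vprofile-stack helm/vprofilecharts \
  --set appimage=<dockerhub-user>/vproapp:latest \
  --namespace test

# Confirm what Helm created
helm list --namespace test
kubectl get all --namespace test
```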
As we have tested the Helm chart, for now just delete it:
Create a new namespace that will be used by Jenkins:
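With the hypothetical names from the test install above, the cleanup and new namespace look like:

```shell
# Remove the test release (release name is an assumption)
helm uninstall vprofile-stack --namespace test

# Namespace the Jenkins pipeline will deploy into (name is an assumption)
kubectl create namespace prod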
Our Jenkinsfile is also present in the same repo; change it as per your needs, e.g. the Docker Hub repo name and the Docker Hub credentials ID that you created in Jenkins:
As we will run the helm command from the KOPS instance, we will add that EC2 instance as a slave to Jenkins:
Install OpenJDK on the KOPS instance:
Create a new directory in /opt:
Change the directory ownership from root to the ubuntu user, as Jenkins will use this user:
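A sketch of those three steps (the JDK version and directory name are assumptions — use whatever your Jenkins master requires):

```shell
# Java runtime for the Jenkins agent process
sudo apt-get update && sudo apt-get install -y openjdk-11-jdk

# Workspace directory for the agent, owned by the ubuntu user
sudo mkdir -p /opt/jenkins-slave
sudo chown -R ubuntu:ubuntu /opt/jenkins-slave
```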
The Jenkins master will keep this directory as its workspace, and Jenkins will log in to this KOPS instance over SSH.
Allow port 22 on the KOPS EC2 instance, with the Jenkins server's security group as the source, so that Jenkins can SSH into this instance:
Then come to Jenkins and add KOPS as a slave:
Manage Jenkins → Nodes and Clouds → New node:
For credentials, choose SSH with private key, copy the KOPS private key used to SSH into the instance, and paste it in the key section:
Add the SonarScanner tool in Jenkins:
Let's test it by creating a new job in Jenkins:
When we build the pipeline, it will fail at the SonarQube stage because we have to add a webhook in SonarQube with the Jenkins details, so that SonarQube can communicate the analysis status back to Jenkins:
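The webhook can also be created from the command line through SonarQube's web API (the api/webhooks/create endpoint has been available since SonarQube 7.1); the token and hostnames below are placeholders:

```shell
# Point SonarQube back at the Jenkins SonarQube plugin's webhook endpoint
curl -u <sonar-admin-token>: -X POST \
  "http://<sonarqube-host>/api/webhooks/create" \
  -d "name=jenkins" \
  -d "url=http://<jenkins-host>:8080/sonarqube-webhook/"
```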
Then it will build the pipeline successfully:
It has also pushed the images to Docker Hub:
A new pod has also been created in our Kubernetes cluster:
Now copy the load balancer URL and paste it in a browser to access our application:
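One way to find that URL from the KOPS instance (the service and namespace names are assumptions):

```shell
# List services; the Service of type LoadBalancer shows the ELB hostname
kubectl get svc --namespace prod

# Or pull the hostname directly
kubectl get svc <app-service> --namespace prod \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```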
So our entire application stack was deployed successfully using Helm and Jenkins.
Now we will configure a build trigger in Jenkins so that whenever a developer makes a code commit, it auto-triggers the Jenkins pipeline:
We set the Poll SCM schedule to every minute (* * * * *) so that Jenkins checks for changes each minute.
Now whenever a code commit happens, it triggers the pipeline, which pushes the Docker image to Docker Hub and deploys it on Kubernetes.
That's all in this project.