Deployment of a Netflix-Style Application on Kubernetes Using DevSecOps Practices

Ghazanfar Ali
13 min read · Oct 29, 2023


Introduction:

We will start by creating an EC2 instance and deploying our application locally in a Docker container. Once the application is running, we will integrate security scanning with SonarQube and Trivy. After doing this manually, we will automate the process with a Jenkins CI pipeline and push the image to Docker Hub. We will then integrate monitoring with Prometheus and Grafana for better visualization, and configure email notifications via SMTP based on the success or failure of the Jenkins job. Once the whole process is automated, we will deploy the application on Kubernetes using Argo CD, a GitOps tool for continuous delivery that tracks every change and keeps a record of each deployment. Finally, we will set up Kubernetes monitoring with Prometheus and Grafana installed via Helm charts.

Let's start the project!

First, we will launch a t2.large Ubuntu-based instance for Jenkins, SonarQube, and Trivy.

Note: Use 25 GB of storage for this instance.

Also attach an Elastic IP to the instance:

Access the newly created instance and update the packages:

Once the update is complete, clone the GitHub repo mentioned below onto the instance:
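For example (the repository URL is not reproduced here, so <repo-url> and <repo-directory> are placeholders for the repo shown above):

⇒ sudo apt-get update -y

⇒ git clone <repo-url>

⇒ cd <repo-directory>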

We need to install Docker on our instance, as we will build a Docker image from the Dockerfile present in the cloned directory:
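A minimal sketch of the Docker installation, assuming Ubuntu's default docker.io package is sufficient for this setup:

⇒ sudo apt-get install -y docker.io

⇒ sudo usermod -aG docker ubuntu   # let the default user run docker without sudo (log out and back in)

⇒ sudo systemctl enable --now docker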

Let's create the Docker image:
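For example, assuming we tag the image netflix (the tag name is my choice, not fixed by the repo):

⇒ docker build -t netflix .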

Now we will create a container from this image:

Allow port 8082 in the instance's security group, since we are exposing the application on 8082, then check the status:
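A hedged example of the run and status check; the port mapping assumes the app listens on port 80 inside the container, so adjust it to match your Dockerfile:

⇒ docker run -d --name netflix -p 8082:80 netflix:latest

⇒ docker ps

⇒ curl -I http://localhost:8082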

It is not showing any movies yet because we still have to integrate the application with The Movie Database (TMDB) API, which is where it fetches its movie data:

Create your account on https://www.themoviedb.org/login

In the Settings tab, find the API option, request a new key, and copy the API key:

Now we will build a new Docker image using this API key, but first stop the previous container:

Now build the image again:
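A sketch of the rebuild, assuming the Dockerfile accepts the key as a build argument named TMDB_V3_API_KEY (check your Dockerfile for the exact argument name):

⇒ docker build --build-arg TMDB_V3_API_KEY=<your-api-key> -t netflix .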

Now run the container from this new image:

Now check it:

Now let's do the security part. First we will install SonarQube and Trivy:

SonarQube:
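A minimal sketch of running SonarQube as a Docker container, assuming the community LTS image and the default port 9000 (open port 9000 in the security group as well):

⇒ docker run -d --name sonar -p 9000:9000 sonarqube:lts-community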

Trivy:

Trivy is a popular open-source security scanner that is reliable, fast, and easy to use. Use Trivy to find vulnerabilities and IaC misconfigurations, generate SBOMs, scan cloud environments, detect Kubernetes security risks, and more.

⇒ sudo apt-get install wget apt-transport-https gnupg lsb-release

⇒ wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -

⇒ echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list

⇒ sudo apt-get update

⇒ sudo apt-get install trivy

Now we will scan the filesystem using trivy:
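For example, scanning the current (cloned) directory:

⇒ trivy fs .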

It also found known CVEs in our repository:

We can also use Trivy to find vulnerabilities in Docker images:
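For example, scanning the image we built earlier (assuming it is tagged netflix:latest):

⇒ trivy image netflix:latest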

It did not find any vulnerabilities in the image:

Now we will move toward automation with CI/CD, using Jenkins. Install Jenkins on the instance and, once it is installed, access it on port 8080; but first allow port 8080 in the instance's security group:
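A hedged sketch of the Jenkins installation on Ubuntu, following the steps documented on jenkins.io (Jenkins needs Java, so a JRE is installed first):

⇒ sudo apt-get install -y openjdk-17-jre

⇒ curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null

⇒ echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null

⇒ sudo apt-get update && sudo apt-get install -y jenkins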

Add a few plugins in Jenkins:

Once the plugins are added, go to the Tools section and add these tools there:

1- JDK:

2- NodeJs

Now configure SonarQube:

Administration → security → users → update token → generate new token:

Now copy this token and add it in the Jenkins credentials tab:

Now go to Manage Jenkins → System, search for SonarQube, and define the SonarQube server location there:

Add the sonar-scanner tool too because we will use it in the pipeline:

We are now ready to create a pipeline to deploy our application, so create a new pipeline:

Now copy the Jenkins pipeline script up to the “install dependencies” stage from the GitHub repo and paste it into Jenkins:

The SonarQube analysis stage uses the project key “netflix”, which we will also create on SonarQube:
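For reference, a hedged sketch of what that stage typically looks like in the Jenkinsfile; the server name sonar-server and the tool name sonar-scanner are assumptions that must match your Jenkins configuration (SCANNER_HOME is usually set with environment { SCANNER_HOME = tool 'sonar-scanner' }):

stage('SonarQube Analysis') {
    steps {
        withSonarQubeEnv('sonar-server') {
            sh '''$SCANNER_HOME/bin/sonar-scanner \
                -Dsonar.projectName=Netflix \
                -Dsonar.projectKey=netflix'''
        }
    }
}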

Then click on “Locally”, then “Generate”:

Now build the pipeline:

While the pipeline is executing, we can install more plugins in Jenkins:

1- OWASP Dependency check

2- Docker

3- Docker commons

4- Docker pipeline

5- Docker API

6- Docker-build-step

We will build the Docker image through Jenkins and push it to Docker Hub from the pipeline, so add your Docker Hub credentials in Jenkins:

Also add OWASP Dependency-Check in the Tools section of Jenkins:

Also add docker:

Now delete the Docker container that we ran on the instance manually, because we will run the container through the pipeline:
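For example, assuming the container is named netflix as above:

⇒ docker stop netflix

⇒ docker rm netflix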

Now paste the full pipeline, which also builds the Docker image, pushes it to Docker Hub, runs the OWASP dependency check, and scans the Docker image with Trivy.

Note: Do not forget to put the TMDB API key in the pipeline, add the jenkins user to the docker group, and replace the Docker Hub image name with your own Docker Hub username (mine is “markhor1995”).
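A hedged sketch of the build-and-push stage; the credentials ID docker (pointing to the Docker Hub credentials added earlier) and the tool name docker are assumptions that must match your Jenkins setup, and markhor1995 should be replaced with your own username:

stage('Docker Build & Push') {
    steps {
        script {
            // credentialsId 'docker' and toolName 'docker' are assumed names from the Jenkins config above
            withDockerRegistry(credentialsId: 'docker', toolName: 'docker') {
                sh 'docker build --build-arg TMDB_V3_API_KEY=<your-api-key> -t netflix .'
                sh 'docker tag netflix markhor1995/netflix:latest'
                sh 'docker push markhor1995/netflix:latest'
            }
        }
    }
}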

Now we will add monitoring, which is the next part of our Ops section. Create a new t2.medium instance for monitoring purposes:

We will install Prometheus and Grafana as our monitoring tools:

First, create a dedicated Linux user for Prometheus and download Prometheus:

⇒ sudo useradd --system --no-create-home --shell /bin/false prometheus

⇒ wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz

Extract Prometheus files, move them, and create directories:

⇒ tar -xvf prometheus-2.47.1.linux-amd64.tar.gz

⇒ cd prometheus-2.47.1.linux-amd64/

⇒ sudo mkdir -p /data /etc/prometheus

⇒ sudo mv prometheus promtool /usr/local/bin/

⇒ sudo mv consoles/ console_libraries/ /etc/prometheus/

⇒ sudo mv prometheus.yml /etc/prometheus/prometheus.yml

Set ownership for directories:

⇒ sudo chown -R prometheus:prometheus /etc/prometheus/ /data/

Create a systemd unit configuration file for Prometheus:

⇒ sudo nano /etc/systemd/system/prometheus.service

Add the following content to the prometheus.service file:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target

Enable and start Prometheus:

⇒ sudo systemctl enable prometheus

⇒ sudo systemctl start prometheus

Verify Prometheus’s status:

⇒ sudo systemctl status prometheus

Prometheus runs on port 9090:

Installing Node exporter:

Node Exporter is a software component used in the Prometheus monitoring and alerting ecosystem. It plays a crucial role in collecting various system-level metrics and statistics from a target machine and makes them available for scraping by Prometheus.

Create a system user for Node Exporter and download Node Exporter:

⇒ sudo useradd --system --no-create-home --shell /bin/false node_exporter

⇒ wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz

Extract Node Exporter files, move the binary, and clean up:

⇒ tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz

⇒ sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/

⇒ rm -rf node_exporter*

Create a systemd unit configuration file for Node Exporter:

⇒ sudo nano /etc/systemd/system/node_exporter.service

Add the following content to the node_exporter.service file:

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter --collector.logind

[Install]
WantedBy=multi-user.target

Enable and start Node Exporter:

⇒ sudo systemctl enable node_exporter

⇒ sudo systemctl start node_exporter

Verify the Node Exporter’s status:

⇒ sudo systemctl status node_exporter

Configure Prometheus Plugin Integration:

Integrate Jenkins with Prometheus to monitor the CI/CD pipeline.

Prometheus Configuration:

To configure Prometheus to scrape metrics from Node Exporter and Jenkins, you need to modify the prometheus.yml file. Here is an example prometheus.yml configuration for your setup:

⇒ sudo nano /etc/prometheus/prometheus.yml
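A hedged example of the scrape configuration; the job names and the <jenkins-ip> placeholder are my choices, and the jenkins job assumes the Prometheus metrics plugin exposes metrics at /prometheus on port 8080:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']

  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['<jenkins-ip>:8080']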

Reload the Prometheus configuration without restarting:

⇒ curl -X POST http://localhost:9090/-/reload

Now we can see the Jenkins and Node Exporter jobs under Prometheus targets:

Now we will install Grafana to get better visualization of the metrics collected by Prometheus:

Step 1: Install Dependencies:

First, ensure that all necessary dependencies are installed:

⇒ sudo apt-get update

⇒ sudo apt-get install -y apt-transport-https software-properties-common

Step 2: Add the GPG Key:

Add the GPG key for Grafana:

⇒ wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

Step 3: Add Grafana Repository:

Add the repository for Grafana stable releases:

⇒ echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list

Step 4: Update and Install Grafana:

Update the package list and install Grafana:

⇒ sudo apt-get update

⇒ sudo apt-get -y install grafana

Step 5: Enable and Start Grafana Service:

To automatically start Grafana after a reboot, enable the service:

⇒ sudo systemctl enable grafana-server

Then, start Grafana:

⇒ sudo systemctl start grafana-server

Step 6: Check Grafana Status:

Verify the status of the Grafana service to ensure it’s running correctly:

⇒ sudo systemctl status grafana-server

Step 7: Access Grafana Web Interface:

Open a web browser and navigate to Grafana using your server’s IP address. The default port for Grafana is 3000. For example:

http://<your-server-ip>:3000

The default username and password are both admin.

In the data sources section, choose Prometheus as a data source for Grafana:

Now we will import a dashboard to display visualizations in Grafana; use dashboard ID 1860, which is the Node Exporter dashboard:

Then select the Prometheus data source and click Import; you will start seeing the data visualization:

To monitor Jenkins in Grafana, we will install the Prometheus metrics plugin in Jenkins:

Now you will see that the Jenkins target is also up in Prometheus, whereas before it was down:

Let's add a dashboard for Jenkins with ID 9964 in Grafana:

Now we will add email notifications. First, make sure 2FA is enabled in your Google account:

Then search for app passwords:

Once you create it, Google will give you a password; copy it, as we will use it in Jenkins to send us the notifications:

Now go to the System options and define the Gmail SMTP settings there:

Then in Jenkins → Manage → Credentials, create new credentials with your Gmail address and the app password that Google shared with us:

Use these in the Extended E-mail Notification section under the System options:

Add a new stage in the pipeline so that it sends the notification to our email:
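A hedged sketch of such a notification step, using the Email Extension plugin's emailext step inside a post block; the recipient address and the attached Trivy report file names are assumptions that depend on your pipeline:

post {
    always {
        emailext(
            subject: "${env.JOB_NAME} #${env.BUILD_NUMBER} - ${currentBuild.result}",
            body: "Project: ${env.JOB_NAME}<br/>Build Number: ${env.BUILD_NUMBER}<br/>URL: ${env.BUILD_URL}",
            to: 'you@example.com',
            attachLog: true,
            attachmentsPattern: 'trivyfs.txt,trivyimage.txt'
        )
    }
}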

Now build the pipeline:

It also built and pushed the Docker image to Docker Hub:

We also received the email notification:

The application is also running in a Docker container:

Now we will start the Argo CD part. First, install Argo CD; it handles continuous deployment and also keeps a record of every deployment. We will use Minikube as our Kubernetes cluster, but you can also use AWS EKS:

⇒ kubectl create namespace argocd

⇒ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

We will use Helm charts for defining, installing, and upgrading applications in the Kubernetes cluster:

First, install Helm through its install script:

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3

$ chmod 700 get_helm.sh

$ ./get_helm.sh

Monitor Kubernetes with Prometheus

Prometheus is a powerful monitoring and alerting toolkit, and you’ll use it to monitor your Kubernetes cluster. Additionally, you’ll install the node exporter using Helm to collect metrics from your cluster nodes.

Install Node Exporter using Helm
To begin monitoring your Kubernetes cluster, you’ll install the Prometheus Node Exporter. This component allows you to collect system-level metrics from your cluster nodes. Here are the steps to install the Node Exporter using Helm:

1. Add the Prometheus Community Helm repository:

⇒ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

2. Create a Kubernetes namespace for the Node Exporter:

⇒ kubectl create namespace prometheus-node-exporter

3. Install the Node Exporter using Helm:

⇒ helm install prometheus-node-exporter prometheus-community/prometheus-node-exporter --namespace prometheus-node-exporter

This installs the Node Exporter so that Prometheus can monitor the Kubernetes cluster, the same way we are monitoring our Jenkins server.

Now we will edit the Argo CD server service to change it from ClusterIP to NodePort type:
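For example, by patching the service (you can also change the type manually with kubectl edit svc argocd-server -n argocd):

⇒ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'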

Now check the service again; it will show the HTTP and HTTPS NodePorts. Also find the Minikube node IP and access the Argo CD server in the browser:

Get the Argo CD password from its secret and decode it with base64; the default username is admin:
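For example:

⇒ kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d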

Now connect the repo to Argo CD so that it can pull the application and deploy it on the Kubernetes cluster:

Now create a new app on Argo CD:

Click on Create; it will start syncing the application:

It has deployed the application on the Minikube cluster:

Now we can access the application through the node IP and NodePort:

Our Node Exporter is running on port 9100, so we will add a job for it in the prometheus.yml file so that Prometheus starts monitoring the cluster as well:
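A hedged example of the extra job to append under scrape_configs; <node-ip> is a placeholder for the Minikube node's IP and the job name is my choice:

  - job_name: 'k8s-node-exporter'
    static_configs:
      - targets: ['<node-ip>:9100']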

Check the validity of the configuration file:

⇒ promtool check config /etc/prometheus/prometheus.yml

Reload the Prometheus configuration without restarting:

⇒ curl -X POST http://localhost:9090/-/reload

Now you can see the Kubernetes cluster as well in Prometheus monitoring:

That's all for this DevSecOps project.
