Refactoring with AWS
Introduction:
In the previous project we deployed the vprofile stack locally on our machines and also on the AWS cloud, but there we only learned how to host our web application stack on AWS using a lift-and-shift strategy. In this project we will re-architect our services for the AWS cloud. This approach boosts agility and business continuity.
Scenario:
Our services are running on physical, virtual, or even cloud machines (EC2 instances). We have a variety of services and workloads (application servers, database servers, web servers, network services, etc.) and need multiple teams to manage them. In this scenario there is operational overhead, scaling and uptime are a struggle, and the manual processes are difficult to automate.
Solution:
Instead of using only IaaS on the cloud we will use mostly PaaS and SaaS on AWS. So we will not go only for regular EC2 instances but will use cloud-managed services, because they are easy to manage and very convenient to scale.
Frontend: the Tomcat application hosted on Elastic Beanstalk (EC2 instances in an auto scaling group behind an application load balancer, with artifacts in S3 and CloudWatch monitoring).
Backend: Amazon RDS (MySQL), Amazon ElastiCache (Memcached) and Amazon MQ (RabbitMQ).
Objectives:
- Very flexible infrastructure
- Pay as you go model
- PaaS
- SaaS
- IaC (Infrastructure as Code)
- Low operational overhead
Services Comparison:
- Tomcat on EC2 → Elastic Beanstalk (EC2, load balancer, auto scaling, S3 artifact storage)
- MySQL on EC2/VM → Amazon RDS
- Memcached on EC2/VM → Amazon ElastiCache
- RabbitMQ on EC2/VM → Amazon MQ
- DNS → Amazon Route 53
- Content delivery → Amazon CloudFront
Project Architecture:
The user accesses our URL, which resolves to an endpoint in Amazon Route 53. That endpoint points to Amazon CloudFront (CDN), which caches content to serve a global audience. From there the request is forwarded to an application load balancer that is part of Elastic Beanstalk, and the load balancer forwards it to EC2 instances in an auto scaling group where the Tomcat application service runs; all of this is part of Elastic Beanstalk. An Amazon CloudWatch alarm monitors the auto scaling group and scales out/in based on demand, and an S3 bucket stores the artifacts, so the entire frontend is managed by Beanstalk. For the backend, instead of running Memcached, RabbitMQ and MySQL on EC2 instances, we will use Amazon MQ in place of RabbitMQ, Amazon ElastiCache in place of Memcached, and Amazon RDS for the MySQL database.
Flow of Execution:
Step1:
Create key pair for beanstalk instance:
Save the .pem file (private key) once you create the key.
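If you prefer the CLI, the key pair can also be created like this (the key name vprofile-bean-key is just an example):
⇒ aws ec2 create-key-pair --key-name vprofile-bean-key --query 'KeyMaterial' --output text > vprofile-bean-key.pem
⇒ chmod 400 vprofile-bean-key.pem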
Step2:
Create a security group for backend services.
Create a dummy inbound rule in this SG that allows SSH from your public IP. Since our backend services will be in a private network, we are not going to access them from the public network.
Create another rule that allows traffic from this security group to itself, so that all the backend services in this group can communicate with each other.
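As a sketch, the same security group could be created with the CLI (assuming the default VPC; the group name and your public IP are placeholders):
⇒ aws ec2 create-security-group --group-name vprofile-backend-sg --description "vprofile backend services"
⇒ aws ec2 authorize-security-group-ingress --group-name vprofile-backend-sg --protocol tcp --port 22 --cidr <your-public-ip>/32
⇒ aws ec2 authorize-security-group-ingress --group-name vprofile-backend-sg --protocol tcp --port 0-65535 --source-group vprofile-backend-sg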
Step3:
Next we will create the backend services; let's go to AWS RDS (Relational Database Service) first.
RDS:
First we will create a subnet group in RDS and place our RDS instance in that subnet group.
Once the subnet group is created, go to parameter groups (a parameter group is the settings or configuration of your RDS instance). With RDS we do not have a login instance like EC2 where we can make configuration changes, so RDS gives us this parameter group feature with all the settings we can inject into our RDS system.
The parameter group is created with default settings, but if you know databases you can customize it as per your requirements.
So now we have a security group, a subnet group and a parameter group; time to create the RDS instance (standard create) and inject these settings into it.
The Production template gives you Multi-AZ, meaning multiple RDS instances (one primary and one standby) and faster storage volumes, but it is not free, while Dev/Test gives a single instance with average-speed volumes at a lower cost.
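For reference, the subnet group, parameter group and RDS instance could be sketched with the CLI roughly like this (identifiers, subnet IDs, security group ID and password are placeholders; this uses the single-instance Dev/Test sizing):
⇒ aws rds create-db-subnet-group --db-subnet-group-name vprofile-rds-subgrp --db-subnet-group-description "vprofile RDS subnet group" --subnet-ids subnet-aaaa subnet-bbbb
⇒ aws rds create-db-parameter-group --db-parameter-group-name vprofile-rds-pg --db-parameter-group-family mysql8.0 --description "vprofile RDS parameters"
⇒ aws rds create-db-instance --db-instance-identifier vprofile-rds-mysql --engine mysql --db-instance-class db.t3.micro --master-username admin --master-user-password <password> --allocated-storage 20 --db-name accounts --db-subnet-group-name vprofile-rds-subgrp --db-parameter-group-name vprofile-rds-pg --vpc-security-group-ids <backend-sg-id> --no-multi-az --no-publicly-accessible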
Step4:
Next we will create ElastiCache.
AWS ElastiCache is a fully managed, in-memory caching service provided by Amazon Web Services (AWS). It is designed to improve the performance and scalability of your applications by storing frequently accessed data in-memory, reducing the need to query databases or other data sources.
Here a parameter group, a subnet group and the ElastiCache instance will be created, just like we did for RDS.
Parameter group:
ElastiCache instance:
Note: select the cache.t2.micro node type if you want to stay within the free tier.
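A rough CLI equivalent, assuming the same backend subnets and security group (names are examples):
⇒ aws elasticache create-cache-subnet-group --cache-subnet-group-name vprofile-memcached-subgrp --cache-subnet-group-description "vprofile cache subnet group" --subnet-ids subnet-aaaa subnet-bbbb
⇒ aws elasticache create-cache-parameter-group --cache-parameter-group-name vprofile-memcached-pg --cache-parameter-group-family memcached1.6 --description "vprofile cache parameters"
⇒ aws elasticache create-cache-cluster --cache-cluster-id vprofile-elasticache --engine memcached --cache-node-type cache.t2.micro --num-cache-nodes 1 --cache-subnet-group-name vprofile-memcached-subgrp --cache-parameter-group-name vprofile-memcached-pg --security-group-ids <backend-sg-id>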
Step5:
Now create RabbitMQ; the AWS service name is Amazon MQ.
Select RabbitMQ as the engine and a single-instance broker, then set the name of the Amazon MQ broker and select the smallest available instance type, which is mq.t3.micro here.
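As a sketch only (the engine version is an example, so check the currently supported RabbitMQ versions; username, password, subnet and security group IDs are placeholders):
⇒ aws mq create-broker --broker-name vprofile-rmq --engine-type RABBITMQ --engine-version 3.13 --host-instance-type mq.t3.micro --deployment-mode SINGLE_INSTANCE --no-publicly-accessible --auto-minor-version-upgrade --users Username=<rmq-user>,Password=<rmq-password> --security-groups <backend-sg-id> --subnet-ids subnet-aaaa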
By now all three of our backend services have been created. One last thing is left in the backend, which is DB initialization: we need to log in to our RDS instance with the MySQL client, create the database, and deploy our schema.
So let's go to the RDS service and copy the endpoint of your RDS instance:
Now launch an EC2 instance that will act as a MySQL client; it will just log in to the RDS database and initialize it, and we will terminate this instance later.
Allow port 22 in the security group of this instance:
Add a user data script to install the MySQL client during instance creation so you will not need to install it afterwards:
Note: If you have a CentOS instance, install mariadb instead of mysql-client.
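A minimal user data sketch for an Ubuntu client instance (for CentOS you would install mariadb instead, as noted above):
#!/bin/bash
# install the MySQL client and git at boot so the instance is ready to initialize RDS
apt update -y
apt install -y mysql-client git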
Now SSH into the mysql-client instance and try to access RDS MySQL; it will not succeed because we have not created a security rule for that yet.
So create a rule in the RDS security group that allows port 3306 from the mysql-client security group.
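The equivalent CLI rule would be something like this (both group IDs are placeholders):
⇒ aws ec2 authorize-security-group-ingress --group-id <rds-backend-sg-id> --protocol tcp --port 3306 --source-group <mysql-client-sg-id>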
Now we can access RDS through mysql-client.
We have to initialize the accounts database with the schema that we have in our source code, so first git clone the source repository on this instance:
⇒ sudo git clone https://github.com/devops-CloudComputing/Devops_AwsLift-Shift.git
Switch to the branch used for this refactor:
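Assuming the repository directory name from the clone above and the branch used later in Step 8:
⇒ cd Devops_AwsLift-Shift
⇒ sudo git checkout aws-Refactor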
⇒ cd src/main/resources
Here we will have the db_backup.sql file; let's initialize the database with this schema:
⇒ mysql -h vprofile-rds-mysql.csz0m8h3t9vj.ap-south-1.rds.amazonaws.com -u admin -pd3e8C8NOtEgxBXlPCTpJ accounts < db_backup.sql
Let's log in and run a quick test:
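For example, a quick check that the schema landed, using the same endpoint and credentials as above:
⇒ mysql -h vprofile-rds-mysql.csz0m8h3t9vj.ap-south-1.rds.amazonaws.com -u admin -pd3e8C8NOtEgxBXlPCTpJ accounts -e "show tables;"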
Step6:
Copy the endpoints and port numbers of your RDS (3306), Amazon MQ (5671) and ElastiCache (11211); we need to store this information in the application.properties file in our source code.
Now go to Beanstalk and create the Beanstalk environment where we will host our application. Beanstalk is a ready-made platform where we can directly deploy our artifacts and start using them; behind the scenes it creates and manages many things as a suite: EC2 instances, a load balancer, artifact storage in S3 buckets, security groups, key pairs, and more.
Before creating a Beanstalk application, first create an IAM role for it, as the default role creates some issues:
Let's come back to Beanstalk and create the application:
The application name is usually the project name, and the environment name is something like dev, test, production, etc. One application can have multiple environments. The domain will be the URL of your load balancer.
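If you prefer scripting, the application and environment can also be sketched with the CLI. This is only a sketch: the solution stack name must be replaced by a Tomcat stack returned by aws elasticbeanstalk list-available-solution-stacks, and the instance profile is the IAM role created above:
⇒ aws elasticbeanstalk create-application --application-name vprofile
⇒ aws elasticbeanstalk create-environment --application-name vprofile --environment-name vprofile-app-prod --solution-stack-name "<a Tomcat solution stack from the list>" --option-settings Namespace=aws:autoscaling:launchconfiguration,OptionName=IamInstanceProfile,Value=<beanstalk-instance-profile>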
Do not create RDS through Beanstalk, because in a production environment it is better to create RDS separately; for instance, if the Beanstalk environment is deleted, your RDS instance is not deleted with it.
Rolling updates and deployments:
This is a very important part of the Beanstalk setup. If we set the deployment policy to All at once, then whenever we deploy our application it will update all servers simultaneously and users will lose access for that time, so it is better to select one of the other policies:
- All at once: updates every instance at the same time; fastest, but the application is down during the deployment.
- Rolling: updates a percentage or a fixed number of instances at a time (defined by us), so the remaining instances keep serving traffic.
- Rolling with additional batch: launches an extra batch of instances during the update so users do not feel any lag.
- Immutable: the most expensive option; it creates a whole new set of instances, updates them, and deletes the old ones only if everything goes well.
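These deployment settings can also be applied later from the CLI, roughly like this (a 50% rolling batch as an example; the environment name is a placeholder):
⇒ aws elasticbeanstalk update-environment --environment-name <your-environment-name> --option-settings Namespace=aws:elasticbeanstalk:command,OptionName=DeploymentPolicy,Value=Rolling Namespace=aws:elasticbeanstalk:command,OptionName=BatchSizeType,Value=Percentage Namespace=aws:elasticbeanstalk:command,OptionName=BatchSize,Value=50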
Traffic split:
If we set the traffic split to 10 percent, only ten percent of the total traffic will be routed to the new version; the remaining 90 percent will still go to the old version for the period of time we define.
Our Beanstalk environment is ready; we will update it for our application in the upcoming steps.
Step7:
Now we will be doing three things:
1- Enable ACLs on the S3 bucket
2- Update the health check in the target group
3- Update the security groups.
First we will go to the S3 bucket and edit the object ownership settings,
and check that ACLs are enabled. Alternatively, if ACLs are disabled, we can grant access to the bucket through JSON bucket policies.
Now come back to the Beanstalk application, open its environment, find the Instance traffic and scaling option, click Edit, scroll down to the Processes section, click Edit, and in the health check settings change the path from / to /login. The vprofile app is accessible through url/login, so if the target group performs the health check on /login it will find the instances healthy.
Once you change the health check path to /login you will see your environment go from the OK state to Severe, because it was previously monitoring the base URL and is now monitoring url/login, and that page is not served yet.
It will work after our deployment.
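The same health check path change can be sketched from the CLI as well (the environment name is a placeholder):
⇒ aws elasticbeanstalk update-environment --environment-name <your-environment-name> --option-settings Namespace=aws:elasticbeanstalk:environment:process:default,OptionName=HealthCheckPath,Value=/login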
Now make security group changes:
Go to the instances of vprofile-app-prod → security group → copy the security group ID,
and in security groups find your vprofile-backend security group, add an inbound rule allowing all traffic, and in the source field paste the copied ID of the instance security group. We do this because the Beanstalk instances will access our backend services.
Step8:
Now it is time to build our application from source code and deploy it to the Beanstalk environment.
Go to the source code that was cloned earlier and change the branch to aws-Refactor.
⇒ cd src/main/resources and edit the application.properties file.
1- Replace the database host db01 with the RDS endpoint, and update the password as well.
2- Replace the memcached and rabbitmq endpoints (and credentials):
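Assuming the property names follow the vprofile project's application.properties (verify against your own copy of the file), the edited section would look roughly like this, with the endpoints and ports copied in Step 6:
jdbc.url=jdbc:mysql://<rds-endpoint>:3306/accounts
jdbc.username=admin
jdbc.password=<rds-password>
memcached.active.host=<elasticache-endpoint>
memcached.active.port=11211
rabbitmq.address=<amazonmq-endpoint>
rabbitmq.port=5671
rabbitmq.username=<amazonmq-username>
rabbitmq.password=<amazonmq-password>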
Now build it: go to the main directory of the project and run:
⇒ mvn install
Now we will upload the created artifact to our Beanstalk environment; in the previous project we deployed it ourselves to the Tomcat EC2 instance.
Click on the Upload and deploy button.
Once we upload the artifact it will start deploying automatically; if not, go to the application, click on the uploaded application version, click Actions and then Deploy. Wait for some time until it is deployed on all instances (in our case 2 instances); meanwhile we can check the events.
Once deployment is completed on all instances, click on the environment domain; the application login page served by Tomcat will appear, and the health will also become OK because it is now monitoring url/login.
Now we will create its DNS entry, first on any DNS hosting provider like godaddy.com.
Before logging in we need to enable stickiness on our load balancer, in the same Processes tab where we changed the health check path earlier:
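Stickiness can also be enabled through an environment option setting, roughly like this (the environment name is a placeholder):
⇒ aws elasticbeanstalk update-environment --environment-name <your-environment-name> --option-settings Namespace=aws:elasticbeanstalk:environment:process:default,OptionName=StickinessEnabled,Value=true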
Let's log in with the user admin_vp and the same password, and test the remaining services such as Memcached and RabbitMQ.
Note: If you face any issue during login, recreate the mysql-client instance and re-initialize the schema against the RDS endpoint from the src/main/resources directory where we have the db_backup.sql file:
⇒ mysql -h vprofile-rds-mysql.csz0m8h3t9vj.ap-south-1.rds.amazonaws.com -u admin -pd3e8C8NOtEgxBXlPCTpJ accounts < db_backup.sql
So we have verified all the services.
Step9:
Suppose our instances are placed in the North Virginia region but our audience is global. To serve users around the world, a content delivery network is generally used, and building one yourself can be very expensive because you would need caching servers in many locations to handle those customers' requests. Instead we can use the AWS service called Amazon CloudFront, which is the CDN of AWS.
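A basic distribution pointing at our application's domain can be created from the console, or sketched with the CLI (the origin domain is a placeholder for your Beanstalk/Route 53 endpoint):
⇒ aws cloudfront create-distribution --origin-domain-name <your-app-domain>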
That's all in this project.