
Milestone 3


Revised architecture diagram and user stories

Please refer to the Home page.

Tasks accomplished in this milestone

  • Containerized each service using Docker
  • Set up Kubernetes master-slave integration
  • CI/CD using Jenkins to build, test, and deploy containers on the Kubernetes master
  • Performed fault tolerance testing using Kube Monkey (a Kubernetes variant of Chaos Monkey)
  • Added live location tracking functionality
  • Set up cron jobs that call the services periodically
  • Performed load testing on each microservice using JMeter
  • Added ECG, sleep data, and other additional tracking information to the website
  • Revamped the look and feel of the website

Detailed Information

Containerized each microservice using Docker

All the microservices were containerized using Docker and stored on DockerHub. These containers include all the configuration and secret files, set up to ensure plug-and-play functionality on the Kubernetes cluster.
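As a rough illustration, a Dockerfile for one of the Python/Flask microservices might look like the sketch below; the base image, port, and entry point are assumptions, not taken from the repository.

```dockerfile
# Hypothetical Dockerfile for a Flask microservice; base image, port,
# and entry point are illustrative, not the project's actual files.
FROM python:3.6-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code along with its configuration and secret files
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```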

Set up Kubernetes master-slave integration

  • We created a Kubernetes cluster. The master was set up on an Ubuntu 16.04 Quad flavor instance.
  • We also set up two Kubernetes slave machines to run the containers in their pods.
  • The Kubernetes master handled deploying the containers and their replicas on the slave nodes (a bring-up sketch follows this list).
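The page does not say which tool was used to bootstrap the cluster; assuming kubeadm (the common choice on Ubuntu 16.04), the bring-up would look roughly like this, with the IP, token, and hash as placeholders:

```bash
# On the Ubuntu 16.04 master (the CIDR shown is Flannel's default):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Install a pod network add-on, e.g. Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each of the two slave nodes, join using the token printed by kubeadm init:
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```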

CI/CD using Jenkins to build, test, and deploy containers on the Kubernetes master

Jenkins runs a build whenever a new commit is pushed to a microservice's branch, performing the following steps (a pipeline sketch follows the list):

  1. Clone the latest code
  2. Install the necessary packages
  3. Test the code
  4. If any tests fail, return a failed build status
  5. If the tests pass, perform the deployment steps
  6. Build a new Docker image
  7. Push the image to DockerHub
  8. Deploy the new container on the Kubernetes cluster
  9. Return a successful build status
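A declarative Jenkinsfile implementing these steps might look roughly like the following; the stage commands, image name, and deployment name are illustrative, not the project's actual pipeline.

```groovy
// Hypothetical pipeline sketch; names and commands are placeholders.
pipeline {
    agent any
    stages {
        stage('Clone')   { steps { checkout scm } }           // 1. latest code
        stage('Install') { steps { sh 'pip install -r requirements.txt' } }
        stage('Test')    { steps { sh 'python -m pytest' } }  // failure fails the build
        stage('Build image') {
            steps { sh 'docker build -t example/fitness-service:$BUILD_NUMBER .' }
        }
        stage('Push to DockerHub') {
            steps { sh 'docker push example/fitness-service:$BUILD_NUMBER' }
        }
        stage('Deploy to Kubernetes') {
            steps {
                sh 'kubectl set image deployment/fitness-service ' +
                   'fitness-service=example/fitness-service:$BUILD_NUMBER'
            }
        }
    }
}
```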

Performed fault tolerance testing using Kube Monkey (a Kubernetes variant of Chaos Monkey)

[AI and peer reviewers: please let us know whenever you want to see Kube Monkey in action, and we will start the service.]

The job of Kube Monkey is to terminate pods randomly to observe the fault tolerance of the Kubernetes system. The Kube Monkey logs show that it randomly kills pods in the whitelisted namespaces at a scheduled time or at periodic intervals. The Kubernetes master then deploys new containers to ensure that the replica count is maintained. The pods table shows that even before the SIGTERM signal is executed on the cluster, i.e. before the old pods are killed, a new pod is already building and ready to accept the requests received by the microservice.
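For reference, a deployment opts in to Kube Monkey through labels like the following; the label keys are Kube Monkey's documented ones, while the deployment name and values here are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fitness-service              # illustrative name
  labels:
    kube-monkey/enabled: enabled
    kube-monkey/identifier: fitness-service
    kube-monkey/mtbf: "1"            # mean time between failures, in days
    kube-monkey/kill-mode: "fixed"   # kill a fixed number of pods per run
    kube-monkey/kill-value: "1"
# (the same labels are also applied to the pod template)
```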

Added live location tracking functionality

A Python scraper runs every 1-2 minutes, scraping all the necessary information from the bike computer's live tracking website. This is done because no official API is provided for developers to consume the information. The information is stored in an mLab Mongo database, is shown on the website, and may be used for research.
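A minimal sketch of such a scraper is shown below; the tracking URL, the response fields, and the database/collection names are all hypothetical, since the actual site and schema are not documented here.

```python
# Hypothetical scraper sketch: URL, fields, and DB names are placeholders.
import requests
from pymongo import MongoClient

TRACKING_URL = "https://example-bike-tracker.com/live/rider-id"  # placeholder
client = MongoClient("mongodb://<user>:<password>@ds012345.mlab.com:12345/tracker")
locations = client.get_default_database()["locations"]

def scrape_location():
    page = requests.get(TRACKING_URL, timeout=10)
    page.raise_for_status()
    data = page.json()  # assuming a JSON payload; otherwise parse the HTML
    locations.insert_one({
        "lat": data["latitude"],
        "lon": data["longitude"],
        "timestamp": data["timestamp"],
    })

if __name__ == "__main__":
    scrape_location()  # scheduled every 1-2 minutes by cron
```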

Set up cron jobs that call the services periodically

We created cron jobs to periodically save data such as fitness, diet, and location into the database. This ensures that updated data is available for users to view. Each microservice has its own cron jobs, which are deployed on a separate machine by the Jenkins pipeline.
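The crontab on that machine would contain entries along these lines; the intervals and script path are placeholders, though the endpoints are the test URLs listed later on this page.

```
# Illustrative schedule, not the project's actual crontab
*/2 * * * * /usr/bin/python /opt/scrapers/location_scraper.py   # live location
0 * * * *   curl -s http://149.165.168.185:30062/get            # fitness data
0 * * * *   curl -s http://149.165.168.185:30082/add            # diet data
```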

Performed load testing on each microservice using JMeter

We performed load testing on each microservice as well as on the frontend. Initially, we created a test plan for 3 instances of each service and measured the throughput for 5000, 10000, and 15000 requests with a 10-second ramp-up time.
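Each run was driven from a JMeter test plan; a non-GUI invocation of such a plan looks like this, with the .jmx file name as a placeholder (thread count and ramp-up time live inside the plan itself):

```bash
jmeter -n -t fitbit_load_test.jmx -l results.jtl -e -o report/
```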

The throughput/second and error rate for 3 instances of Fitbit Service

| Requests | Error Rate (%) | Throughput (per second) |
|---------:|---------------:|------------------------:|
| 5000 | 0.64 | 185.5 |
| 10000 | 14.3 | 265.5 |
| 15000 | 26.45 | 282.8 |

The throughput/second and error rate for 3 instances of Backend Service

| Requests | Error Rate (%) | Throughput (per second) |
|---------:|---------------:|------------------------:|
| 5000 | 0.0 | 499.2 |
| 10000 | 0.0 | 991.8 |
| 15000 | 0.1 | 637.3 |

The throughput/second and error rate for 3 instances of DropBox Service

| Requests | Error Rate (%) | Throughput (per second) |
|---------:|---------------:|------------------------:|
| 5000 | 0 | 487.0 |
| 10000 | 0 | 824.3 |
| 15000 | 33.34 | 995.3 |

The throughput/second and error rate for 3 instances of Frontend

| Requests | Error Rate (%) | Throughput (per second) |
|---------:|---------------:|------------------------:|
| 5000 | 0 | 310.5 |
| 10000 | 6.65 | 265.9 |
| 15000 | 29.75 | 147.2 |

All graphical results are present here

The response-codes-per-second graphs show that the server processes requests successfully most of the time, and hits per second are high for almost all services. To improve throughput, we can reduce external HTTP requests and add a cache. From the data above, the backend and Dropbox services perform well. The Fitbit service uses a third-party API that requires logging in every time to gather data, and the UI uses some heavy components, so their throughput is relatively lower.

We could not process more requests because of the Java heap limitation. From all the tables, we can see that the system could process around 15000 requests with roughly a 30% error rate.


To improve throughput, we decided to scale horizontally: we increased the number of instances from 3 to 5 and performed similar load testing for 5 instances.

The throughput/second and error rate for 5 instances of Backend service

| Requests | Error Rate (%) | Throughput (per second) |
|---------:|---------------:|------------------------:|
| 5000 | 0 | 494.2 |
| 10000 | 0 | 991.8 |
| 15000 | 0 | 637.3 |

The throughput/second and error rate for 5 instances of Fitbit service

| Requests | Error Rate (%) | Throughput (per second) |
|---------:|---------------:|------------------------:|
| 5000 | 0.64 | 223.7 |
| 10000 | 11.9 | 249.4 |
| 15000 | 29.79 | 416.3 |

The throughput/second and error rate for 5 instances of DropBox service

| Requests | Error Rate (%) | Throughput (per second) |
|---------:|---------------:|------------------------:|
| 5000 | 0 | 493.2 |
| 10000 | 36.5 | 998.7 |
| 15000 | 33.33 | 995.1 |

The throughput/second and error rate for 5 instances of Frontend service

| Requests | Error Rate (%) | Throughput (per second) |
|---------:|---------------:|------------------------:|
| 5000 | 5.22 | 97.4 |
| 10000 | 6.26 | 288.2 |
| 15000 | 42.41 | 369.0 |

We observed that horizontal scaling generally increases throughput and decreases the error rate: as the number of instances increases, more instances are available to handle requests correctly.

Added Sleep data and other additional tracking information on the website

We have added live and historical data from multiple data sets and devices to give a holistic view of the biker's journey.

Revamped the look and feel of the website

We have given the site an admin-dashboard feel, as well as a friendly, interactive viewing section, to provide unique experiences for different types of users.

Revamped UI

To improve the user experience, we modified the front end by integrating an advanced material theme called Architect.

CI/CD using Jenkins

We created the following Docker containers:

  1. Backend Data Fetch Microservice
  2. Backend Dropbox Microservice
  3. React UI
  4. Diet Microservice
  5. Fitness Microservice
  6. Backend Express Server
  7. Kube Monkey
  8. Fitbit Data Tracking Microservice

Improvements from Project 2

  • We worked on the front end to make it more user-friendly. We also improved diet and fitness microservices to increase security.
  • The project no longer needs to be manually installed on a single machine in order to work; it can be accessed from anywhere.
  • We have embraced the distributed systems architecture to enhance the project's scalability and availability.

Steps to run the project

  • Jenkins Server: http://149.165.170.222:8080 (Credentials have been mailed to the peer reviewers and AIs)

  • Check the Kube Monkey logs to see different microservices being terminated, and check the deployment logs to see how the respective pods were killed and new pods were spawned almost immediately.

  • Check Kubernetes Dashboard

Development Branches are as follows:

DevOps

Branch URL: https://github.com/airavata-courses/Gateway-Falcons/tree/dev_ops

Purpose: This branch contains all the different YAML files used to deploy the different microservices, including the front end. It also contains configuration maps and deployment files for Kube Monkey, which performs fault tolerance testing on the system.


Backend Server

Branch URL: https://github.com/airavata-courses/Gateway-Falcons/tree/server-dev

Jenkins Pipeline Name: BackEnd_Pipeline

Jenkins Pipeline URL: http://149.165.170.222:8080/job/BackEnd_Pipeline/

Test URL: http://149.165.168.185:30032/

Purpose: This server is built using the NodeJS Express framework. It fetches the data stored in the Mongo databases by the cron jobs, and it also facilitates image management via the media management microservice. It is the only point of contact through which the front end of the application fetches and sends data.


Backend Data Microservice

Branch URL: https://github.com/airavata-courses/Gateway-Falcons/tree/server-dev

Jenkins Pipeline Name: BackEnd_Pipeline

Jenkins Pipeline URL: http://149.165.170.222:8080/job/BackEnd_Pipeline/

Test URL: http://149.165.168.185:30072/

Purpose: This microservice reads data from the database and sends it to the backend server. It was created to prevent a bottleneck at the backend server and to avoid making the backend server a single point of failure.


Backend Dropbox

Branch URL: https://github.com/airavata-courses/Gateway-Falcons/tree/server-dev

Jenkins Pipeline Name: BackEnd_Pipeline

Jenkins Pipeline URL: http://149.165.170.222:8080/job/BackEnd_Pipeline/

Test URL: http://149.165.168.185:30092/

Purpose: This automatically (via a cron job) pulls in the respective data sets from various devices to be added to the DB:

  • blood pressure data
  • cardio mood data

Fitness Service

Branch URL: https://github.com/airavata-courses/Gateway-Falcons/tree/fitness-service

Jenkins Pipeline Name: FitnessPipeline

Jenkins Pipeline URL: http://149.165.170.222:8080/job/FitnessPipeline/

Test URL: http://149.165.168.185:30062/get

Purpose: This microservice is developed in Python using the Flask framework. It runs as a cron job which fetches the latest activity data from the Strava API. The data includes the outdoor activities recently performed by the user, such as biking and running.
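The Strava fetch presumably uses Strava's documented v3 REST API; a minimal sketch, with the OAuth token as a placeholder:

```python
import requests

ACCESS_TOKEN = "<strava-oauth-token>"  # placeholder
resp = requests.get(
    "https://www.strava.com/api/v3/athlete/activities",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"per_page": 30},
)
resp.raise_for_status()
for activity in resp.json():
    print(activity["type"], activity["distance"], activity["start_date"])
```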


Diet Service

Branch URL: https://github.com/airavata-courses/Gateway-Falcons/tree/diet-service

Jenkins Pipeline Name: Dietpipeline

Jenkins Pipeline URL: http://149.165.170.222:8080/job/Dietpipeline/

Test URL: http://149.165.168.185:30082/add

Purpose: This microservice is developed in Python using the Flask framework. It runs as a cron job which fetches the latest diet data from the MyFitnessPal database. The data includes the number of calories, nutritional information, and food items.


Fitbit Data Tracker

Branch URL: https://github.com/airavata-courses/Gateway-Falcons/tree/fitbit-service

Jenkins Pipeline Name: Fitbitpipeline

Jenkins Pipeline URL: http://149.165.170.222:8080/job/Fitbitpipeline/

Test URLs:

  • http://149.165.168.185:30052/getsleep
  • http://149.165.168.185:30052/getstat
  • http://149.165.168.185:30052/getheartrate

Purpose: This microservice tracks sleep data, heart rate data, and intraday heart rate from the Fitbit app. It is developed in Python using the Flask framework and runs as a cron job which fetches the latest data from the Fitbit API.
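A minimal sketch of the Fitbit pulls, using Fitbit's documented Web API endpoints; token handling is simplified and the token itself is a placeholder:

```python
import requests

headers = {"Authorization": "Bearer <fitbit-oauth-token>"}  # placeholder token

# Sleep logs (sleep data lives under API version 1.2)
sleep = requests.get(
    "https://api.fitbit.com/1.2/user/-/sleep/date/today.json",
    headers=headers).json()

# Heart rate time series for the day
heart = requests.get(
    "https://api.fitbit.com/1/user/-/activities/heart/date/today/1d.json",
    headers=headers).json()
```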


Front End

Branch URL: https://github.com/airavata-courses/Gateway-Falcons/tree/react-ui

Jenkins Pipeline Name: Front_End_Pipeline

Jenkins Pipeline URL: http://149.165.170.222:8080/job/Front_End_Pipeline/

Test URL: http://149.165.168.185:30042/

Purpose: This is the front end of the application, which displays all the integrated output of the application.


Deploy Complete Application

We have also built a separate pipeline which can run all the individual pipelines and deploy the complete project to the respective production servers.

Jenkins Pipeline Name: Deploy_Application

Jenkins Pipeline URL: http://149.165.170.222:8080/job/deploy_application/

Challenges faced

Dockerizing

Each microservice was written in a different programming language, so we had to try different ways to build each one into a container and deploy it along with the necessary credentials, configuration files, and other important information.

Kubernetes Setup

We faced a lot of issues while setting up the Kubernetes cluster, especially while adding slaves to the cluster. We had to start from scratch to get it working.

Kube Monkey

We couldn't find enough documentation for Kube Monkey, so we faced some difficulty setting it up, analyzing its actions, and understanding how it works.

Front-end

Learning the new theme and eventually customizing its layouts to suit the clients.

JMeter

In JMeter, we faced a memory issue when executing test plans with a higher number of requests. We increased the heap size to resolve this issue.
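Concretely, JMeter's startup script honors the JVM_ARGS environment variable, so a run with a larger heap looks like this (the heap sizes and plan name are illustrative):

```bash
JVM_ARGS="-Xms1g -Xmx4g" ./jmeter -n -t load_test.jmx -l results.jtl
```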


