Vulcan

About

Repository for the course project for CSCI-B 649 Applied Distributed Systems in Spring 2022. The project is a multi-user system with a distributed architecture for consuming and processing weather data in various forms, logging its usage, and presenting the weather information to the user in a web-based interface.

The project is in its initial stages of development and the README will be updated as new features are added.

Project 1

Architecture Diagram

[Screenshot: Project 1 architecture diagram]

Napkin Diagram

[Napkin diagram]

Technology stack

The microservices in the architecture above are built with the following tech stack:

  • Java 17
  • SpringBoot
  • Maven
  • Lombok
  • JPA
  • Python 3
  • FastAPI
  • Express
  • NodeJS
  • MongoDB
  • PostgreSQL
  • Docker

Pre-requisites

  • Docker

Setup

The application is containerized and its images are hosted in a Docker registry. To set up the application, run the docker-compose file.
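
For example (a minimal sketch, assuming the compose file sits at the repository root and uses the default docker-compose.yml name):

    docker-compose up -d     # pull the published images and start every service
    docker-compose ps        # check that all containers are running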

Sample GIF

Project 2

Design Plan

[Project 2 design plan]

Kubernetes

  • A Minikube cluster is used to deploy and scale each of the microservices. The application was tested under various amounts of load with Apache JMeter.

Pre-requisites

  • A Minikube cluster, with kubectl installed and configured to use it
  • The Docker CLI installed and signed in to your Docker Hub account
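
If a local Minikube cluster is not already running, one can be started with, for example:

    minikube start           # start a single-node Minikube cluster
    kubectl get nodes        # confirm kubectl points at the new cluster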

Setup

  • Navigate to this folder and run the following command to deploy the application to the Kubernetes cluster

              kubectl apply -f kubernetes/. --recursive
    
  • To manually scale the application to 3 replicas, run the kube-scale.sh script with -r set to 3

              bash kube-scale.sh -r 3
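
For reference, a minimal sketch of what such a scaling step amounts to, assuming the script scales every Deployment in the current namespace (the actual logic lives in kube-scale.sh):

    REPLICAS=3
    for deploy in $(kubectl get deployments -o name); do
      kubectl scale "$deploy" --replicas="$REPLICAS"
    done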
    

Jmeter

Jmeter Test details and Report
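
The JMeter test plans can be re-run from the command line in non-GUI mode; for example (the .jmx file name below is a placeholder for the actual test plan):

    # -n non-GUI mode, -t test plan, -l results file, -e/-o HTML report output
    jmeter -n -t vulcan-load-test.jmx -l results.jtl -e -o report/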

Project 3

Architecture

[Project 3 architecture diagram]

  • The REST-based communication between the services from Project 2 is replaced with Kafka-based message streaming.
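
To sanity-check that messages are flowing between the services, the standard Kafka console tools can be used; a sketch, assuming a broker reachable at localhost:9092 and a hypothetical topic name weather-data:

    kafka-topics.sh --bootstrap-server localhost:9092 --list
    kafka-console-consumer.sh --bootstrap-server localhost:9092 \
      --topic weather-data --from-beginning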

Continuous Integration / Continuous Deployment

[Project 3 CI/CD pipeline]

  • GitHub Actions is used for continuous integration and deployment.
  • On pushing code to GitHub, a workflow is triggered that builds and pushes the image to the registry, and a rolling update is then made on the Kubernetes cluster in the remote server (see the sketch below).
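
Sketched as shell commands, a workflow run of this kind roughly amounts to the following (image and deployment names are placeholders; the real steps live in the repository's workflow files):

    # build and push the service image to Docker Hub
    docker build -t <dockerhub-user>/<service>:latest .
    docker push <dockerhub-user>/<service>:latest

    # executed over SSH on the remote cluster: roll out the new image
    kubectl rollout restart deployment/<service>
    kubectl rollout status deployment/<service>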

Branch setup for the CI/CD pipeline

[Project 3 branch setup for the CI/CD pipeline]

  • Each service has a feature and a development branch.
  • The initial development for a service is done in its feature branch and, on completion, is pushed to the develop branch.
  • A push to the develop branch triggers a GitHub Actions workflow that builds the service, creates its image, and pushes the image to the Docker registry.
  • The individual service development branches are merged into the develop branch; from there, on a push to the main branch, an SSH connection is made to the remote server where the Kubernetes cluster is hosted and a rolling update is performed.
  • All credentials and sensitive data are stored in GitHub secrets.

Launching Jetstream Instances with Ansible and K8S setup

Plan of action:

  • Using Terraform and the OpenStack CLI, we create the instances (master and worker nodes).
  • Using Ansible playbooks, we connect to those instances and run the playbooks to set up Kubernetes.
  • Finally, we connect to the master node and run the Kubernetes YAML files to start the deployments and services.

Steps:

We first need a machine on which to set up Ansible. This can be a local or a remote machine; we chose a remote machine with an Ubuntu image. Log in to the machine and open a terminal.

  1. Create an SSH key.
sudo su
ssh-keygen

This generates a public/private key pair. We will use this key later to connect over SSH to the instances we create.

  2. Copy the openrc.sh file to this instance. You can follow the steps in this link to download the openrc file, and then run the following command:
source <openrc.sh>

This simply exports the environment variables to our local host, which we need in order to connect with the OpenStack Python client.

  3. Make sure you have a public IP available. One can be created in the Exosphere environment or by running the following commands. Skip this step if you already have a public IP address available.
pip3 install python-openstackclient
openstack floating ip create public
  4. Paste the IP address you created into the first line below and run the following commands. We need to modify certain attributes in the cluster.tfvars file:
  • master node IP address (k8s_master_fips)
  • availability zones (az_list)
  • external network (external_net)
  • number of nodes (number_of_k8s_nodes)

Note: the shell script below only modifies the IP address; the rest of the attributes above are already set based on the cloud environment provided to us.

  export IP=149.165.152.125
  bash instance-creation.sh

What does the instance-creation.sh shell script do?

  • It installs Terraform and Ansible.
  • It runs the Terraform scripts to create the VM instances.
  • It sets up an SSH agent to connect to the remote machines.
  • It runs the Ansible playbooks to install Kubernetes (a rough sketch of these steps is shown below).
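
In outline, and with placeholder paths and names, those steps look roughly like this (the real logic is in instance-creation.sh):

    # install terraform and ansible (details omitted)
    # create the instances described by cluster.tfvars
    cd <terraform-dir>
    terraform init
    terraform apply -var-file=cluster.tfvars -auto-approve
    # make the SSH key available for the Ansible connections
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_rsa
    # run the playbooks that install Kubernetes on the new nodes
    ansible-playbook -i <inventory> <k8s-setup-playbook>.yml
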
  5. Kubernetes is now installed in our cluster (at the end of the above script you will be logged into the master node) and all the necessary deployment files have been copied to the master node. Run:
cd deploy
bash deploy.sh
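
Assuming deploy.sh simply applies the manifests that were copied over, it boils down to something like:

    kubectl apply -f . --recursive    # apply all copied manifests
    kubectl get pods -A               # watch the workloads come up
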
  6. (Optional) To test the setup, SSH to the master machine and check the connected nodes:
ssh ubuntu@$IP
sudo su
kubectl get nodes

REFERENCE

  • The Ansible implementation for setting up the Jetstream instances and the Kubernetes cluster is adapted from Team-terra's implementation.

Project 4

Case Study - Apache Airavata Custos

About:

Custos is a software framework that provides common security operations for science gateways, including user identity and access management, gateway tenant profile management, resource secrets management, and groups and sharing management. It is implemented with a scalable microservice architecture that can provide highly available, fault-tolerant operations. Custos exposes these services through a language-independent API that encapsulates science gateway usage scenarios.

For this project, the Custos framework was first deployed on Jetstream instances, and stress and load testing was then carried out based on the testing strategies discussed in the course lectures.

Setup:

Jetstream-1:

  1. Create an SSH key pair on your local machine and copy the public key to the Jetstream settings (https://use.jetstream-cloud.org/application/settings), so that when a new instance is created your local machine's public key is registered in its authorized keys.

  2. Spawn five Jetstream-1 Ubuntu 20.04 LTS machines of medium size with the following configuration:

[Screenshot: Jetstream instance configuration]

  3. Of the five instances, one is used for deployment, one to set up Rancher, and the other three are used as Kubernetes master and worker nodes.

[Screenshot: the five Jetstream instances]

Rancher:

  1. Download the cloudman folder from this link (https://airavata.slack.com/files/U030JR7JXDF/F03CA28HZ6J/cloudman.zip).
  2. Open the sample ini configuration file in cloudman/cloudman-boot/inventory and make the following changes (a rough sketch of the resulting entries is shown below the screenshot):
  • Update the agent and controllers to the domain name of the Jetstream-1 instance used for the Rancher setup (in this case the Rancher instance IP is 149.165.168.199 and the domain name is js-168-199.jetstream-cloud.org).
  • Update the ansible user to your XSEDE username.
  • Enter the location of the private key on your local machine.

[Screenshot: sample inventory configuration]
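
A rough sketch of what the edited inventory entries might look like; the section names are assumptions and the values are placeholders, so check the sample file for the exact keys:

    [agents]
    js-168-199.jetstream-cloud.org

    [controllers]
    js-168-199.jetstream-cloud.org

    [all:vars]
    ansible_user=<your-xsede-username>
    ansible_ssh_private_key_file=/path/to/your/private/key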

  3. Follow steps three to four from team Terra's implementation.

Kubernetes - Rancher

  1. Follow this link to set up K8S with Rancher.

The deployment of cert-manager, Keycloak, Consul, Vault, MySQL, and Custos was adapted from the procedure at the following link (https://github.com/airavata-courses/DSDummies/wiki/Project-4).

Python-Client

Load Testing Custos Endpoints

  • Initially, we ran our tests against a Custos dev environment deployed on custos.scigap.org.
  • Later, the same tests were run against our own Custos deployment to compare the results.
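
The same kind of load can be generated with any concurrent HTTP driver; a minimal, hedged sketch using curl and xargs (the endpoint path, payload, and token are placeholders for whatever the Custos API and the Python client actually use):

    # fire 100 concurrent requests and print the per-request response time
    seq 100 | xargs -P 100 -I{} curl -s -o /dev/null -w "%{time_total}\n" \
      -X POST "https://custos.scigap.org/<register-user-endpoint>" \
      -H "Authorization: Bearer <token>" \
      -d '{"username": "loadtest-user-{}"}'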

Some of the load testing scenarios and the results obtained are shown below

• Register Users Endpoint

We load tested the register user endpoint of Custos using two different loads:

  o 100 concurrent requests (threads)
  o 500 concurrent requests (threads)

[Screenshot: load test results, register endpoint, 100 requests]

In this case, we saw an average response time of around 19 ms per register request, and Custos had a throughput of 164.998 requests/min.

After this, we increased the load to 500 requests. The results are below.

[Screenshot: load test results, register endpoint, 500 requests]

In this case, the average response time was around 91 ms per register request, and Custos had a throughput of 162.618 requests/min, roughly the same as with a load of 100 requests.

• Create_group Endpoint

We load tested the create group endpoint of Custos using two different loads:

  o 100 concurrent requests (threads)
  o 500 concurrent requests (threads)

[Screenshot: load test results, create_group endpoint, 100 requests]

In this case, the average response time was around 14 ms per create group request, and Custos had a throughput of 221.435 requests/min.

[Screenshot: load test results, create_group endpoint, 500 requests]

In this case, the average response time was around 93 ms per create group request, and Custos had a throughput of 147.226 requests/min. So, as the load increased, the response time increased and the throughput dropped, but Custos was still easily able to handle 500 requests.

Testing on our own Custos deployment

  custos_host: js-156-79.jetstream-cloud.org
  custos_port: 30367

To analyze and compare the results from the two deployments, we ran the 500-request test for both the register user and create group endpoints.

500 requests test for register endpoint

[Screenshot: load test results, register endpoint on our deployment, 500 requests]

In our Custos deployment, we got an average response time of 100 ms, which is similar to the dev Custos deployment; the throughput was slightly lower at 157.252 requests/min (vs. 162.618 requests/min on the dev deployment).

Challenges:

• We initially tried to deploy Custos on Jetstream 2 a couple of times and it failed. We later had to switch to Jetstream 1 to set up the instances.

Reference:

  1. The current Rancher setup for Jetstream 1 was adapted from team Terra's implementation on Jetstream 2.
  2. The Keycloak, cert-manager, Vault, and Consul deployment was adapted from team DSDummies' wiki page; thanks to team Garuda for helping us fix the issues with the setup.