
Project 3 Part 2


Final Problem Statement

Demonstrating various capabilities of a service mesh (Istio) in Part 2 of our project.

Problem 1: (Security) Previously, authentication and authorization of end-users' access to our services (application) was handled with boilerplate code inside a micro-service, and there was no authorization between services or between pods. This leaves room for security issues such as man-in-the-middle attacks, and bugs could easily be introduced.

Problem 2: Currently, only one version of the application is deployed in production. If we want to test two different versions of the application and record user-interaction or response metrics at the same time, that is not yet possible.

Problem 3: Implementing an efficient rollout policy for the application. If we want to add a new feature or update an existing one in production, the application currently requires a complete replacement of its services. This needs boot time to update all the services and may cause problems if the new version fails in production.

Enhancements from Initial Problem Statement:

Difference 1: We added a new problem about load balancing. Previously, when we did load testing with JMeter, our system failed at more than 1,000-10,000 users, mainly because of improper traffic management and an inadequate load-balancing mechanism.

Difference 2: Previously, whenever pods failed, we had to check the logs manually. In a more complex system this would be a tedious way to check system health and trace issues. Hence we decided to deploy tools such as Kiali and Grafana, which give us dashboards and help us monitor the health of our services.

Difference 3: Earlier, there was no mechanism to test a change in production without affecting end-users. With a service mesh, we can mirror requests into a second instance that contains the change and evaluate it there.

Methodology, Implementation, Evaluation, Conclusion & Outcomes

Problem 1: Authentication and Authorization

Istio offers two types of authentication:

  1. Peer Authentication
  2. Request Authentication

Peer Authentication: We implemented peer authentication using mutual TLS, which is used for service-to-service authentication and protects the services against man-in-the-middle attacks. Peer authentication provides the benefits below:

  1. Provides each service with a strong identity representing its role, to enable interoperability across clusters and clouds.
  2. Secures service-to-service communication.
  3. Provides a key management system to automate key and certificate generation, distribution, and rotation.

Istio has built-in mutual TLS authentication; you can select STRICT or PERMISSIVE mTLS. Before enabling mTLS between pods, we must enable sidecar injection, which creates a local Envoy proxy for each pod.

Authentication process: Istio re-routes the outbound traffic from a client to the client's local sidecar Envoy. The client-side Envoy starts a mutual TLS handshake with the server-side Envoy. During the handshake, the client-side Envoy also does a secure naming check to verify that the service account presented in the server certificate is authorized to run the target service. The client-side Envoy and the server-side Envoy then establish a mutual TLS connection, and Istio forwards the traffic from the client-side Envoy to the server-side Envoy. After authorization, the server-side Envoy forwards the traffic to the server service through local TCP connections.

To enable strict or permissive mTLS, check out the Istio authentication and Authorization folder of the IstioPart3 branch and run the command below:

kubectl apply -f tls-strict.yaml

To enforce strict mTLS, change the mode to STRICT. By default it is PERMISSIVE.
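
For reference, a minimal sketch of what a policy like tls-strict.yaml typically contains, using Istio's standard PeerAuthentication resource (the policy name and namespace here are assumptions; see the actual file on the IstioPart3 branch for the exact values):

  apiVersion: security.istio.io/v1beta1
  kind: PeerAuthentication
  metadata:
    name: default
    namespace: default        # assumed namespace; a mesh-wide policy would go in istio-system
  spec:
    mtls:
      mode: PERMISSIVE        # change to STRICT to reject plaintext traffic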

Request Authentication: We already implemented JWT authorization in Project 2. Request authentication is used for end-user authentication, to verify the credential attached to the request. Istio provides this authentication through a custom authentication provider such as Auth0. We tried using Istio with Auth0, but we faced issues and the Jetstream platform was not compatible with Auth0.

Check the auth-policy.yaml file in the same folder mentioned above.
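
A request-authentication policy of this kind is usually a RequestAuthentication resource paired with an AuthorizationPolicy. The sketch below is only illustrative; the workload label, issuer, and jwksUri are placeholders, not our actual Auth0 configuration:

  apiVersion: security.istio.io/v1beta1
  kind: RequestAuthentication
  metadata:
    name: jwt-auth
  spec:
    selector:
      matchLabels:
        app: ui                                   # hypothetical workload label
    jwtRules:
    - issuer: "https://YOUR_TENANT.auth0.com/"    # placeholder Auth0 issuer
      jwksUri: "https://YOUR_TENANT.auth0.com/.well-known/jwks.json"
  ---
  apiVersion: security.istio.io/v1beta1
  kind: AuthorizationPolicy
  metadata:
    name: require-jwt
  spec:
    selector:
      matchLabels:
        app: ui
    rules:
    - from:
      - source:
          requestPrincipals: ["*"]                # only allow requests with a valid JWT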

Problem 2: A/B Testing

Another problem we identified: if we have two versions of the application and are not sure which one will increase user interaction, we want to deploy both versions at the same time and collect metrics based on end-user behaviour. To run this, check out the folder named A/B Testing and apply:

kubectl apply -f ab-testing-deployment.yaml

kubectl apply -f abdestinationrule.yaml

We created two deployments of the UI service, version 1 and version 2. The only difference between the two versions is the background color of the Sign-in page.

Screenshots: Version 1 and Version 2 Sign-in pages.

We achieved this result using consistent-hash load balancing. We hit the URL from two different IPs and tested it (on two different computers, hitting the URL multiple times for each user independently).
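
The relevant part of abdestinationrule.yaml is a consistent-hash load-balancer policy. A minimal sketch (the host, header name, and subset labels are assumptions; check the file in the A/B Testing folder for the exact values):

  apiVersion: networking.istio.io/v1alpha3
  kind: DestinationRule
  metadata:
    name: ui-destination
  spec:
    host: ui-service                 # hypothetical UI service name
    trafficPolicy:
      loadBalancer:
        consistentHash:
          httpHeaderName: end-user   # hash on a user-specific header
    subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2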

To test the A/B routing, we passed a user-specific header and tried the curl command:
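
For example (assuming the hash is computed on the hypothetical end-user header from the sketch above), requests carrying the same header value should consistently land on the same version:

  curl -H "end-user: alice" http://149.165.171.75:32177/
  curl -H "end-user: bob" http://149.165.171.75:32177/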

Here you can see that React rendered main.chunk.js. If we had used plain HTML and CSS for the front end, the changed background color would have shown up as a change in the served CSS file; React, however, renders everything through a single JS bundle, so the difference is not visible in the file names. If you use the UI, though, you can see that the session is maintained.

Conclusion: Based on the header passed, which is unique to each user, consistent hashing gives each user session affinity to one version. From now on we can easily monitor end-user activity and analyze both versions.

Problem 3: Canary Deployment

There was no efficient rollout policy for the application. Any small change required running the full pipeline and completely replacing the existing service with the new one. The problem with this approach was that it was risky: if an error occurred in the production environment, it could lead to catastrophe. Hence we needed a mechanism to gradually roll out the application to a small percentage of users initially and then, depending on the response and by monitoring the requests, increase the target audience for the new service.

Enhancement in the problem statement:

While exploring canary deployments in Istio, we learned that there are also advanced canary techniques (dark release). These allow us to deploy a new version with Istio and make it available only to an audience that fits certain criteria or fulfills certain conditions. Use case for this enhancement: in the real world, organizations normally have staging and production environments. The staging environment is used for all testing before the application is deployed to production, but there are always some differences between the environments that cause the application to fail in production. Dark release lets us deploy a new release to the production environment and make it available only to special users (e.g. developers and testers). Once the developers are sure that the basic functionality works, they can roll out to a percentage of the audience and gradually increase that number.

Methodology:

We explored different techniques for this strategy which look similar but have different impacts; canary releases suit our needs best. We explored the implementation of canary deployment in both Kubernetes and Istio. The problem with plain Kubernetes canary releases is that there is no efficient mechanism to choose what percentage of traffic goes to the new canary deployment: the split depends on the number of pods. If you want 10% of the traffic on version 2 and 90% on version 1, you have to run the two deployments with replicas in a 1:9 ratio (e.g. 1 pod of version 2 and 9 pods of version 1) and let the Kubernetes Service split traffic by pod count, which is unnecessary scaling and utilization of resources. Istio provides a very flexible mechanism for weighted traffic, hence we chose the Istio implementation.

Implementation

Standard Canary Deployment

  • The first step is to create a different version of the service and upload it to Docker Hub. We changed the UI service for demo purposes: on the login page, the email id placeholder is replaced with login id. (Frontend/src/user/Signin.js)
  • In the standard Kubernetes files we changed uiDeployment.yaml, adding one more deployment with the new version and attaching it to the same UI service. (Frontend/uiDeployment.yaml)
  • We added an Istio-specific YAML where we created a new virtual service with destination rules; see the sketch after this list. (IstioFiles/istio_rules_canary.yaml)
  • We also explored advanced canary techniques with the imposition of certain conditions.
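
The core of istio_rules_canary.yaml is weighted routing between the two subsets. A minimal sketch, assuming hypothetical service and subset names and an example 90/10 split (the real weights are whatever is set in the file):

  apiVersion: networking.istio.io/v1alpha3
  kind: VirtualService
  metadata:
    name: ui-canary
  spec:
    hosts:
    - ui-service              # hypothetical UI service name
    http:
    - route:
      - destination:
          host: ui-service
          subset: v1
        weight: 90            # stable version receives 90% of requests
      - destination:
          host: ui-service
          subset: v2
        weight: 10            # canary version receives 10% of requests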

Dark Release:

  • The implementation is basically header-based routing: if the request carries a particular header, Istio routes it to the new release (see the sketch after this list).
  • We decided to use our backend service "DataAnalysis" to implement the dark release. But after several failed attempts we found that Istio does not propagate headers automatically ("header propagation"), which means we have to change our code to save and forward the header across the different microservices.
  • We are currently implementing this feature.
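
Header-based routing for a dark release would look roughly like the sketch below; the header name, service, and subset names are assumptions for illustration only:

  apiVersion: networking.istio.io/v1alpha3
  kind: VirtualService
  metadata:
    name: ui-dark-release
  spec:
    hosts:
    - ui-service                  # hypothetical service name
    http:
    - match:
      - headers:
          x-dark-release:         # hypothetical header sent only by internal users
            exact: "true"
      route:
      - destination:
          host: ui-service
          subset: v2              # new release, visible only to matching requests
    - route:
      - destination:
          host: ui-service
          subset: v1              # everyone else stays on the stable version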

Evaluation: For evaluation, you can go to the homepage and refresh the page multiple times to see the change. URL: http://129.114.104.27:32177/. If you want to change the settings, just change the percentage weights in istio_rules_canary.yaml and run "kubectl apply -f istio_rules_canary.yaml". You can SSH directly to 129.114.104.27 and go to /home/ubuntu, where you can find the file istio_rules_canary.yaml. The public key is available in the root folder of the servicemesh_asim branch.

Note: In the Istio implementation, a 50:50 ratio does not mean that alternate requests go to different versions. The split is random rather than round-robin, so it is not guaranteed that every 2nd request goes to the other version; overall, requests are split randomly according to the given ratio.

Conclusion: Istio is reliable and more flexible for implementing application rollouts, and it provides many features beyond standard Kubernetes.

Implementation of Load Balancing (Enhancement 1)

With the help of its configuration, Istio can automatically detect the services and endpoints in the cluster. The Envoy proxies then direct traffic to the relevant services using load-balancing algorithms such as round-robin or consistent hashing. We implemented the round-robin model. To test the load-balancing implementation:

Go to the IstioLoad-Balancing folder on the IstioPart3 branch.

Apply the commands below:

kubectl apply -f http-gateway.yaml

kubectl apply -f virtualservice-external.yaml

kubectl apply -f loadBalancing-DestinationRule.yaml
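
For reference, the round-robin policy in loadBalancing-DestinationRule.yaml is typically a simple load-balancer setting; a minimal sketch (the host name is an assumption; see the actual file for the exact values):

  apiVersion: networking.istio.io/v1alpha3
  kind: DestinationRule
  metadata:
    name: ui-load-balancing
  spec:
    host: ui-service            # hypothetical service name
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN     # Envoy rotates requests evenly across endpoints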

Results Evaluated:

We tested throughput with JMeter for 1,000 requests against our Part 2 implementation and got the result below:

Similarly, we tested throughput with JMeter for 1,000 requests against Part 2 after the load-balancing implementation:

Please find other results here:

https://github.com/airavata-courses/devengers/tree/IstioPart3/IstioLBJmeterScreenShots

Implementation of Istio Features (Enhancement 2)

Kiali and Grafana Analysis for Better Observability:

Kiali URL: http://149.165.171.75:31000/

Uses of Kiali:

  1. See the cluster and all attached services graphically.
  2. Edit the YAML of services and other deployments directly through the Kiali UI.
  3. Monitor the logs of deployments and services.
  4. Find points of failure and check the health of all services easily.

Image 1: Graph of all connected services

Image 2: Services health

Grafana UI URL: http://149.165.171.75:31002/

Uses of Grafana:

  1. Dashboards to query, visualize, and alert on user metrics.
  2. Visualize traffic flow metrics.

Image : Grafana Dashboard

Contributions:

Common (Asim, Kaustubh, Suyash): Setup of Istio and Configuration of Kiali and Grafana

Git Issues: #79, #80, #81, #82, #83, #84, #89, #91, #96

Kaustubh and Suyash: Authentication and Authorization, A/B Testing, Istio Load Balancing Implementation and JMeter Testing

Git Issues: #92, #93, #94, #95, #97, #99

Asim: Canary Deployment, Dark Release, Istio Load Balancing Analysis with Kubernetes

Git Issues: #85, #86, #87, #88, #90

Test the A/B testing described above using the URL below: http://149.165.171.75:32177