Load Testing using 1 replica
RutujaJadhav19 edited this page Mar 8, 2022 · 8 revisions
- Baseline: 3 worker nodes of 16GB each assigned on Jetstream.
- 1 replica-pod spawned for each microservice using kubeadm.
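The one-replica setup above corresponds to a Deployment with `replicas: 1` per microservice. As a minimal sketch of such a manifest (the names, image, and resource request below are illustrative placeholders, not taken from the actual cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway            # one Deployment like this per microservice
spec:
  replicas: 1              # the single replica-pod used for this baseline
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
        - name: gateway
          image: example/gateway:latest   # placeholder image
          resources:
            requests:
              memory: "1Gi"               # illustrative; worker nodes are 16GB each
```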
- A regular sustained load of 1200 requests is handled well over a period of 30 minutes.
- Load balancing ensures good throughput, and each pod is utilized optimally.
- A 0% error rate is observed.
- Overall, average throughput is approximately 15 req/sec.
- The system handles 1200 requests comfortably when 1 replica is set up for each microservice.
- Requests aren't handled concurrently; instead, there is some sequential processing behavior.
- One potential improvement for throughput could be to exploit async-await functionality for concurrent requests at the Gateway.
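The async-await idea above can be sketched as a Gateway handler that fans out its downstream calls concurrently instead of awaiting them one by one. This is a minimal illustration only: the service names and delays are invented stand-ins, and `asyncio.sleep` simulates the latency of an awaited HTTP call.

```python
import asyncio
import time

async def call_microservice(name: str, delay: float) -> str:
    # Stand-in for an awaited HTTP request to a downstream microservice;
    # asyncio.sleep simulates network/processing latency without blocking.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def gateway_handler() -> list:
    # Fan the downstream calls out concurrently, so total latency is
    # roughly max(delays) rather than sum(delays).
    return await asyncio.gather(
        call_microservice("python-api", 0.2),
        call_microservice("registration-api", 0.2),
        call_microservice("search-api", 0.2),
    )

start = time.perf_counter()
results = asyncio.run(gateway_handler())
elapsed = time.perf_counter() - start
print(results)
print(f"{elapsed:.2f}s")  # roughly 0.2 s, not the 0.6 s a sequential handler would take
```

With sequential awaits the same three calls would take about the sum of their latencies, which matches the sequential processing behavior observed above.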
- Our system can handle a maximum of 1900 requests in a span of 60 secs with an acceptable error rate of 0.04% and a throughput of 15.87 requests/sec.
- We observed that our system was able to handle a load of 1900 requests/minute over a span of 60 seconds with a 0.04% error rate. When we increased the load, the error rate rose significantly.
- From the graphs, we can infer that our Python API and registration API take the maximum time to execute requests. For the Python API, we used built-in libraries to implement our logic, which causes significant delays in processing requests. The registration API involves writing a chunk of user information to the database, which is time-consuming. Also, the synchronous behavior of the REST APIs implemented for inter-microservice communication causes delays and limits the capacity of our system.
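Short of rewriting every service as async, the blocking inter-service REST calls mentioned above could also be overlapped from the caller's side with a thread pool. A minimal sketch, where `fetch` and the service names are hypothetical stand-ins for blocking HTTP calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(service: str) -> str:
    # Stand-in for a blocking HTTP request (e.g. requests.get) to a service.
    time.sleep(0.2)
    return f"{service}: 200 OK"

services = ["python-api", "registration-api", "search-api"]

# Sequential: total time is the sum of the individual call latencies.
start = time.perf_counter()
sequential = [fetch(s) for s in services]
sequential_time = time.perf_counter() - start

# Pooled: the blocking calls run on worker threads in parallel, so total
# time is roughly the latency of the slowest single call.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    pooled = list(pool.map(fetch, services))
pooled_time = time.perf_counter() - start

print(f"sequential: {sequential_time:.2f}s, pooled: {pooled_time:.2f}s")
```

Because the time is spent waiting on I/O rather than in Python bytecode, threads overlap the waits despite the GIL; the database write in the registration API is the same kind of blocking wait.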
Next up: Testing with 3 replicas (Link)