Description
Integration of the performance tests feature. The options are going to be as follows (each test will have its own task; see the configuration sketch after the list):
- Performance Test main menu: The front-end and the basic back-end handler.
- Load Testing: Determines how a system handles a large number of users or requests at the same time. The goal is to see how the system performs under expected peak load conditions.
- Latency Testing: Measures the time it takes for a system to respond to a request under varying load conditions.
- Stress Testing: Tests the system beyond its normal operating capacity to see how it behaves under extreme conditions, such as high traffic or data processing loads.
- Spike Testing: Determines how the system handles sudden and significant increases in load.
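All four test types could share one configuration shape that the back-end handler passes to the individual tasks. A minimal TypeScript sketch; the type and field names are hypothetical, not from the existing codebase:

```ts
// Hypothetical shared config for all four test types (names are assumptions).
type TestType = 'load' | 'latency' | 'stress' | 'spike';

interface PerformanceTestConfig {
  type: TestType;
  targetUrl: string;       // endpoint under test
  concurrency: number;     // virtual users / parallel requests
  durationSeconds: number;
  // Spike tests additionally ramp from a baseline to a peak and back.
  spike?: { baselineUsers: number; peakUsers: number; holdSeconds: number };
}
```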
What we might need in the back-end:
- Request Generator: A module that will simulate user requests to the target system. We will be using AWS EC2 instances to simulate larger loads.
- Metrics Collection: Captures and stores various performance metrics, such as response time, throughput, error rate, and CPU and memory usage. We will be using Prometheus to collect system metrics during tests, with Grafana for visualization (see the sketch after this list).
- Result Analyzer: Processes the collected data to generate meaningful insights, such as identifying bottlenecks or predicting system limits.
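For the metrics side, a minimal sketch of pulling an instant query from the Prometheus HTTP API (assuming Node 18+ for the global fetch; the Prometheus host and the PromQL expression are placeholders):

```ts
// Query Prometheus' instant-query endpoint and return the first sample value.
async function queryPrometheus(promHost: string, promql: string): Promise<number> {
  const url = `${promHost}/api/v1/query?query=${encodeURIComponent(promql)}`;
  const body = await (await fetch(url)).json();
  // Instant queries return data.result[i].value as [timestamp, "value"].
  return Number(body.data.result[0]?.value[1] ?? NaN);
}

// e.g., non-idle CPU of the target host, assuming node_exporter is scraped:
// await queryPrometheus('http://localhost:9090',
//   'rate(node_cpu_seconds_total{mode!="idle"}[1m])');
```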
How the tests will be run (and this is the nice part):
We will automatically initialize (and kill after use) AWS EC2 instances using Terraform.
Both the Terraform module and the testing scripts will be generated in the backend. We might need another language for the module and script generation; we might as well create an "external" Python script for the sake of simplicity.
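A sketch of what that generation step could look like from the Node.js backend: render a minimal Terraform config to a working directory and shell out to the Terraform CLI. The AMI ID, region, and paths are placeholder assumptions:

```ts
import { writeFileSync, mkdirSync } from 'node:fs';
import { execFileSync } from 'node:child_process';

// Render a minimal Terraform config for one load-generator instance and apply it.
function provisionLoadGenerator(workdir: string): void {
  mkdirSync(workdir, { recursive: true });
  writeFileSync(`${workdir}/main.tf`, `
provider "aws" {
  region = "eu-central-1" # placeholder region
}

resource "aws_instance" "load_generator" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"
  tags          = { Name = "perf-test-load-generator" }
}

output "public_ip" {
  value = aws_instance.load_generator.public_ip
}
`);
  execFileSync('terraform', ['init'], { cwd: workdir, stdio: 'inherit' });
  execFileSync('terraform', ['apply', '-auto-approve'], { cwd: workdir, stdio: 'inherit' });
}

// Tear-down mirrors the same flow:
// execFileSync('terraform', ['destroy', '-auto-approve'], { cwd: workdir });
```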
How to simulate user requests on EC2 instances?
We will load the testing script onto the EC2 instance that performs the actual load testing, to avoid overloading the base server running the application. This could be a simple Node.js script using axios or http, or a dedicated load testing tool like artillery or k6. The script should be configurable, accepting parameters (e.g., target URL, concurrency, duration) passed from the backend. Once the EC2 instance is up and running, we can use a remote command execution tool like ssh (via node-ssh or a similar library) to start the load testing script on the instance.
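A sketch of that remote execution step with node-ssh and k6 (the username, key path, and script paths are assumptions; k6 is presumed pre-installed on the AMI):

```ts
import { NodeSSH } from 'node-ssh';

// Upload the generated test script and start a k6 run on the instance.
async function runLoadTest(host: string, targetUrl: string): Promise<string> {
  const ssh = new NodeSSH();
  await ssh.connect({ host, username: 'ec2-user', privateKeyPath: '/path/to/key.pem' });

  await ssh.putFile('./generated/load-test.js', '/home/ec2-user/load-test.js');

  // --vus and --duration would come from the test config in practice.
  const result = await ssh.execCommand(
    `k6 run /home/ec2-user/load-test.js --vus 50 --duration 60s -e TARGET_URL=${targetUrl}`
  );
  ssh.dispose();
  if (result.code !== 0) throw new Error(result.stderr);
  return result.stdout;
}
```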
Result collection:
While the test is running, we can stream logs, or just the results, to our Node.js backend. The EC2 instances can POST results to a simple HTTP endpoint on our backend, or publish them to a message queue like AWS SQS.
Otherwise, we can store the results in a file on the EC2 instance and retrieve it once the test is completed. I prefer the first option, but time will tell.
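For the first option, the backend side could be as small as one Express route that the instances POST their summaries to; the route path and payload shape here are assumptions:

```ts
import express from 'express';

const app = express();
app.use(express.json());

// EC2 instances POST their per-run summaries here once a test finishes.
app.post('/api/performance-tests/:testId/results', (req, res) => {
  const { testId } = req.params;
  // req.body would carry the raw k6/artillery summary for this instance.
  console.log(`received results for test ${testId}`, req.body);
  res.sendStatus(202);
});

app.listen(3000);
```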
Data Aggregation:
If we are using multiple instances, we need to aggregate the results in our Node.js backend. This could include metrics like average response time, error rate, throughput, and so on. Of course, we will be storing the results in the MongoDB database for historical analysis and reporting.
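A sketch of that aggregation step, assuming each instance reports a small summary (the payload shape, database name, and collection name are placeholders):

```ts
import { MongoClient } from 'mongodb';

interface InstanceResult { avgResponseMs: number; requests: number; errors: number; }

// Fold per-instance summaries into one record and persist it for reporting.
async function aggregateAndStore(testId: string, results: InstanceResult[]) {
  const totalRequests = results.reduce((s, r) => s + r.requests, 0);
  const aggregated = {
    testId,
    ranAt: new Date(),
    // Weight each instance's average by the requests it actually sent.
    avgResponseMs:
      results.reduce((s, r) => s + r.avgResponseMs * r.requests, 0) / totalRequests,
    errorRate: results.reduce((s, r) => s + r.errors, 0) / totalRequests,
    totalRequests,
  };

  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  await client.db('perf').collection('testResults').insertOne(aggregated);
  await client.close();
}
```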
Termination of EC2 instances when no tests are in the queue:
There is a need for continuous monitoring of the performance test queue. When the queue is empty and no tests are running, we need to trigger the termination of the EC2 instances. We could use Terraform to destroy the instances, or the AWS SDK (e.g., aws-sdk in Node.js) to terminate them directly.
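The SDK route could look like this with the v3 client (the queue check and instance bookkeeping are assumptions):

```ts
import { EC2Client, TerminateInstancesCommand } from '@aws-sdk/client-ec2';

const ec2 = new EC2Client({ region: 'eu-central-1' }); // placeholder region

// Called by the queue watcher: tear down instances once nothing is queued or running.
async function terminateIfIdle(queueLength: number, runningInstanceIds: string[]) {
  if (queueLength > 0 || runningInstanceIds.length === 0) return;
  await ec2.send(new TerminateInstancesCommand({ InstanceIds: runningInstanceIds }));
}
```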