Loopback Load Tester is a small localhost-only load testing tool.
It sends bounded, high-volume GET traffic to an API already running on the same machine so you can observe latency, throughput, errors, and breaking points under pressure.
This repository is v1.
Loopback Load Tester simulates:
- many concurrent `GET` requests from a single local machine
- increasing pressure on a specific local API route
- the behavior of an API under moderate to heavy request volume
It does not simulate:
- a real distributed attack
- packet floods or network-layer attacks
- botnets or multi-origin traffic
- bypass techniques
- remote targets on the internet
This tool is intentionally restricted.
- The target host must be `127.0.0.1`, `localhost`, or `::1`
- Any non-local target is rejected before the test starts
- The tool performs a preflight request before the load test
- The tool only sends `GET` requests in v1
The goal is defensive observation on infrastructure you control locally.
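The loopback-only rule can be enforced with a small guard before any connection is opened. A minimal sketch, assuming a hypothetical `validateTarget` helper; this is not the tool's actual source code:

```javascript
// Hypothetical sketch of the loopback-only guard described above.
// The helper name and structure are assumptions, not the tool's real code.
const ALLOWED_HOSTS = new Set(["127.0.0.1", "localhost", "::1"]);

function validateTarget(host) {
  if (!ALLOWED_HOSTS.has(host)) {
    // Reject anything that is not a loopback address before the test starts.
    throw new Error(
      `Refusing non-local target "${host}": only 127.0.0.1, localhost, or ::1 are allowed.`
    );
  }
  return host;
}
```

Checking the host before the run, rather than during it, guarantees that no traffic ever leaves the machine.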
- localhost-only target validation
- simple `.env` configuration
- preflight health check before starting the run
- latency, throughput, status code, and error reporting
- works well for observing Dockerized local APIs such as NestJS, Express, FastAPI, and similar services
- Node.js 18+
- npm
```
npm install
cp .env.example .env
```

- Start the local API you want to observe.
- Check that the route responds with `curl`.
- Configure `.env`.
- Run:

```
npm run attack
```

Edit `.env`:
```
TARGET_HOST=127.0.0.1
TARGET_PORT=3000
TARGET_PATH=/api
CONNECTIONS=20
DURATION_SECONDS=10
```

- `TARGET_HOST`: must stay on `127.0.0.1`, `localhost`, or `::1`
- `TARGET_PORT`: local API port
- `TARGET_PATH`: route to test, query string included if needed
- `CONNECTIONS`: number of concurrent connections
- `DURATION_SECONDS`: duration of the run
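Settings like these can be read straight from the environment with the documented defaults. A sketch under assumptions; the function name and shape mirror the variables above, not the tool's actual source:

```javascript
// Hypothetical config loader mirroring the .env variables documented above.
function loadConfig(env = process.env) {
  return {
    host: env.TARGET_HOST || "127.0.0.1",                // must stay a loopback host
    port: Number(env.TARGET_PORT || 3000),               // local API port
    path: env.TARGET_PATH || "/api",                     // route to test
    connections: Number(env.CONNECTIONS || 20),          // concurrent connections
    durationSeconds: Number(env.DURATION_SECONDS || 10), // run duration in seconds
  };
}
```

Passing `env` as a parameter (defaulting to `process.env`) keeps the loader easy to unit-test.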
- Start the local API you want to observe.
- Choose one route to test.
- Verify that the route works with `curl`.
- Put the target route in `.env`.
- Run the load test.
Example:
```
curl http://127.0.0.1:3000/your-route
```

Then set:

```
TARGET_HOST=127.0.0.1
TARGET_PORT=3000
TARGET_PATH=/your-route
CONNECTIONS=25
DURATION_SECONDS=15
```

And run:

```
npm run attack
```

Use this for a first safe observation:
```
CONNECTIONS=10
DURATION_SECONDS=10
```

Use this to observe moderate stress:

```
CONNECTIONS=25
DURATION_SECONDS=15
```

Use this to observe stronger pressure:

```
CONNECTIONS=50
DURATION_SECONDS=15
```

Use this only if you want to find the breaking point of a local route you control:

```
CONNECTIONS=100
DURATION_SECONDS=15
```

The tool prints:
- latency percentiles
- request rate (`Req/Sec`)
- bytes read per second
- counts for `2xx`, `3xx`, `4xx`, and `5xx`
- client-side errors and timeouts
- If latency goes up but responses stay successful, the API is degrading but still serving traffic.
- If `5xx` responses start appearing, the application is failing under pressure.
- If timeouts appear, the route is no longer keeping up with incoming concurrency.
- If only one route breaks while others stay stable, that route is likely your hotspot.
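These rules of thumb can be expressed as a tiny classifier. Purely illustrative: the field names (`timeouts`, `count5xx`, `p99Ms`, `baselineP99Ms`) are assumptions, not the tool's real output schema:

```javascript
// Illustrative classifier for a run summary; all field names are hypothetical.
function interpret(summary) {
  if (summary.timeouts > 0) {
    return "route is no longer keeping up with concurrency";
  }
  if (summary.count5xx > 0) {
    return "application is failing under pressure";
  }
  if (summary.p99Ms > 2 * summary.baselineP99Ms) {
    // Latency has doubled but requests still succeed: degraded, not broken.
    return "degrading but still serving traffic";
  }
  return "healthy under this load";
}
```

The ordering matters: timeouts are the strongest signal, then server errors, then latency growth.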
Open separate terminals while the test is running:
```
docker logs -f <your-api-container>
docker stats <your-api-container> <your-db-container>
```

If your API uses PostgreSQL:
```
docker exec -it <your-db-container> psql -U <db-user> -d <db-name> -c "
select state, count(*)
from pg_stat_activity
where datname = '<db-name>'
group by state
order by state;
"
```

Before sending load, the tool checks that the target route is reachable.
If the API is down or the port/path is wrong, you get a clear message such as:
```
Preflight failed: http://127.0.0.1:3000/api is unreachable. Start the local API first or check TARGET_PORT/TARGET_PATH.
```
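A preflight like this is straightforward with the `fetch` built into Node.js 18+. A sketch only: the error message text comes from this README, everything else is an assumption:

```javascript
// Hypothetical preflight sketch using Node 18+ global fetch and AbortSignal.timeout.
function buildPreflightError(url) {
  return `Preflight failed: ${url} is unreachable. Start the local API first or check TARGET_PORT/TARGET_PATH.`;
}

async function preflight(url, timeoutMs = 3000) {
  try {
    // Any HTTP response at all (even 4xx/5xx) proves the target is reachable;
    // only a connection failure or timeout rejects here.
    await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
  } catch {
    throw new Error(buildPreflightError(url));
  }
}
```

Failing fast here means a wrong port or path is caught before any load is generated.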
- `GET` only
- no custom headers
- no request body support
- no authentication helper
- no scenario file
- no distributed workers
- localhost only by design
The tool only targets loopback addresses.
That is the core safety rule of the project and the reason for the name.
This project is released under the MIT License. See LICENSE.