Summary
Create a standalone CLI tool that generates synthetic JSON-RPC request traffic against a given RPC server to measure the server's capacity under realistic user-request volume conditions.
Related to: Epic for Client Load Testing: #2
Depends on: Spike to determine how much build vs oss - #8
This tool should follow a two-part approach using two commands:
- The `generate` command captures the necessary seed data from the target network.
- The `run` command generates synthetic request traffic from the seed data for the read-only JSON-RPC endpoints (excluding `sendTransaction` and `simulateTransaction`).
Read-Only Endpoints to Support
Based on analysis of the RPC server implementation, the following read-only endpoints should be supported:
- getHealth - Health check endpoint
- getNetwork - Network configuration info
- getVersionInfo - Version information
- getLatestLedger - Latest ledger metadata
- getLedgers - Paginated ledger retrieval
- getLedgerEntries - Ledger state queries by key
- getTransaction - Single transaction lookup by hash
- getTransactions - Paginated transaction retrieval
- getEvents - Contract event queries with filtering
- getFeeStats - Fee distribution statistics
Transactional Endpoints to Support
- sendTransaction - Transaction submission
- simulateTransaction - Simulate contract invocations
CLI Load Testing Tool Details:
- Use the existing `github.com/stellar/go-stellar-sdk/clients/rpcclient` client
- Implement a worker pool pattern for concurrent request execution
- Output metrics to the console during runtime and as JSON at the end of the test run
CLI Commands
Command 1: generate - First phase: collect data already ingested by the target RPC server and capture it in an extract file, to be used later as input for load-test request generation.
```
stellar-rpc-loadtest generate \
  --rpc-url <RPC_URL> \
  --network-passphrase <NETWORK_PASSPHRASE> \
  --output <PATH_TO_REQUEST_DATA_DAT> \
  [--ledger-window <NUM_LEDGERS>]
```

Collects:
- Recent transaction hashes (successful and failed)
- Valid ledger keys (accounts, contracts, contract data, trustlines)
- Active contract IDs
- Contract event topics
- Ledger sequences
Command 2: run - Execute load test requests
```
stellar-rpc-loadtest run \
  --rpc-url <RPC_URL> \
  --network-passphrase <NETWORK_PASSPHRASE> \
  --config <PATH_TO_CONFIG_TOML> \
  --duration <DURATION> \
  --ramp-up <RAMP_UP_DURATION> \
  [--bootstrap-data <PATH_TO_REQUEST_DATA_DAT>]
```

CLI Load Tool Request Configuration
Per-endpoint request settings are configured via TOML file. Example configuration:
```toml
# Endpoint-specific request rates (requests per second)
[endpoints]
getHealth.rps = 5.0
getNetwork.rps = 0.1
getVersionInfo.rps = 0.1
getLatestLedger.rps = 20.0
getFeeStats.rps = 1.0
getLedgers.rps = 2.0
getLedgerEntries.rps = 10.0
getTransaction.rps = 5.0
getTransactions.rps = 2.0
getEvents.rps = 8.0
sendTransaction.rps = 2.0
simulateTransaction.rps = 2.0
```

CLI Request Generation Strategy per Endpoint
For each endpoint: what data knowledge or state specifics are required before submitting a request?
1. getHealth
- None required
2. getNetwork
- None required
3. getVersionInfo
- None required
4. getLatestLedger
- None required
5. getFeeStats
- None required
6. getLedgers
- Request sequences with varied `startLedger` values from the ledger sequences in `bootstrap.dat`
- Use different pagination limits (1-100)
- Mix of cursor-based pagination
- Include both `format: "xdr"` and `format: "json"` requests
7. getLedgerEntries
- Use ledger keys from `bootstrap.dat` (accounts, contract data, contract code, trustlines)
- Vary key count (1-200 keys per request)
- Use realistic key distributions (80% single key, 15% 2-10 keys, 5% bulk 50-200)
- Support both XDR and JSON format requests
8. getTransaction
- Use transaction hashes from `bootstrap.dat`
- Mix of recent successful, recent failed, and random (not-found) hashes
- Ratio: 70% found, 30% not found
- Support both XDR and JSON format requests
9. getTransactions
- Query ranges based on ledger sequences from `bootstrap.dat`
- Use varied pagination parameters (limit 1-200)
- Cursor-based pagination through ledger ranges
- Support both XDR and JSON format requests
10. getEvents
- Use contract IDs and event topics from `bootstrap.dat`
- Generate varied filter combinations: single contract, multiple contracts, event type filters, topic filters
- Use realistic ledger ranges (recent 100-10000 ledgers)
- Vary pagination limits (1-100)
- Mix of cursor-based pagination
11. sendTransaction
- Construct and sign classic and Soroban transactions using configuration settings from `loadtest_config.toml`
- Doesn't require seeded request data from `bootstrap.dat` from the `generate` phase; the request payload can be built during `run`
- Support both transaction types based on the `classic_ratio` configuration
12. simulateTransaction
- Doesn't require seeded request data from `bootstrap.dat` from the `generate` phase; the request payload can be built during `run`
- Mix of valid and invalid transactions:
- 85% of requests should use well-formed valid transaction
- 10% of requests should use transaction with invalid parameters
- 5% of requests should use transaction with insufficient resources
- Support both XDR and JSON format requests
- At init time, pre-generate the XDR for three transactions: (1) well-formed, (2) invalid parameters, and (3) insufficient resources
- Transaction envelopes built with a single InvokeHostFunction op that invokes a contract's `transfer` function
- Can use zero contract ID since simulation doesn't validate on-chain existence
- Can use zero account ID for account to/from references
Acceptance Criteria
- CLI tool collects realistic seed data from live network and saves to bootstrap file
- CLI tool generates synthetic request load for all 10 read-only endpoints
- Real-time metrics data points rendered in-place on console output during load test execution
- Final metrics summary saved to JSON file at end of load test
Output Metrics
The following metrics will be emitted during and after the load test:
- `rpc_loadtest_requests_total{endpoint="<name>"}` - Total request count per endpoint
- `rpc_loadtest_requests_success{endpoint="<name>"}` - Successful request count per endpoint
- `rpc_loadtest_requests_errors{endpoint="<name>",error_type="<type>"}` - Error count by endpoint and error type
- `rpc_loadtest_response_time_seconds{endpoint="<name>",quantile="0.5"}` - Response time p50
- `rpc_loadtest_response_time_seconds{endpoint="<name>",quantile="0.9"}` - Response time p90
- `rpc_loadtest_response_time_seconds{endpoint="<name>",quantile="0.95"}` - Response time p95
- `rpc_loadtest_response_time_seconds{endpoint="<name>",quantile="0.99"}` - Response time p99
- `rpc_loadtest_throughput_rps{endpoint="<name>"}` - Actual achieved requests per second per endpoint
Parent Epic: #2
**Note:** The order of tasks is serial; it would need to be #8 -> #6 -> #3 -> #11 -> #12.
**Note:** AI tools were used to support the preparation of this analysis.