Changes from all commits
56 commits
80b8a36
Create scripts to batch single txs (#1134)
chrispalaskas Jan 14, 2026
c09f28c
Performance/mempool metrics (#1135)
ladamesny Jan 14, 2026
880f32e
benchmarking: fixed mempool script
ladamesny Jan 15, 2026
30f3f42
Remove in-memory caching for submitting; it's unnecessary
Jan 15, 2026
90a2ed9
Merge branch 'performance-benchmarking' of https://github.com/input-o…
Jan 15, 2026
52d41df
benchmarking: update download script to only download logs if they do…
ladamesny Jan 15, 2026
8f064d2
Signed-off-by: chrispalaskas <[email protected]>
Jan 15, 2026
711a179
Merge branch 'performance-benchmarking' of https://github.com/input-o…
Jan 15, 2026
fc43263
benchmarking: sort downloaded logs
ladamesny Jan 15, 2026
483492f
Add jupyter scripts to plot block creation times and block import times
Jan 15, 2026
8868233
Improve plotting
Jan 15, 2026
638f74f
Add a distinct tx counter for jupyter
Jan 15, 2026
d90b15e
Log # invalid txs not sent
Jan 16, 2026
fb7ba64
benchmarking: updated download script time range argument - reorganiz…
ladamesny Jan 20, 2026
17317ce
Add valid/invalid tx count
Jan 20, 2026
1ab27d3
benchmarking: convert est to utc
ladamesny Jan 20, 2026
b95d8e4
Update jupyter
Jan 20, 2026
1ed59c2
Unify jupyter scripts
Jan 20, 2026
b1d38ad
Update requirements.txt
Jan 20, 2026
a8af7b9
Generate report jupyter
Jan 20, 2026
44750a5
Cleanup: Restructure folders
Jan 21, 2026
d80c2c6
benchmarking: add mempool metrics to jupyter
ladamesny Jan 21, 2026
26d66db
benchmarking: renamed utils to scripts
ladamesny Jan 21, 2026
456b9ae
benchmarking: block size analysis script
ladamesny Jan 21, 2026
fbd9553
Parameterize the funding seeds for dust registration and funding
Jan 21, 2026
da8cfb2
Merge branch 'performance-benchmarking' of https://github.com/input-o…
Jan 21, 2026
1882fe2
Remove number of relays from max thread count
Jan 21, 2026
bbf89ed
Multi threaded for get_balances
Jan 21, 2026
80dca53
Make sure we watch the register and fund txs
Jan 21, 2026
f5455ad
benchmarking: add block size analysis to jupyter
ladamesny Jan 21, 2026
e90929e
Merge branch 'performance-benchmarking' of github.com:input-output-hk/p…
ladamesny Jan 21, 2026
239a788
Add verbose argument to register dust script
Jan 22, 2026
be2f16c
Fix fund_wallets for 1:1 seed/dest relationship
Jan 22, 2026
3607c9d
Funding amount cmd line arg
Jan 22, 2026
a584ae7
benchmarking: move the plotting/graphing of block sizes to the notebook
ladamesny Jan 22, 2026
d7ba18f
Merge branch 'performance-benchmarking' of github.com:input-output-hk/…
ladamesny Jan 22, 2026
9902333
benchmarking: updated the mempool benchmark analysis to include repo…
ladamesny Jan 22, 2026
43f59cd
benchmarking: updated the mempool benchmark analysis
ladamesny Jan 23, 2026
b612693
benchmarking: add section for forks
ladamesny Jan 23, 2026
15b5a24
benchmarking: add section for forks
ladamesny Jan 23, 2026
023326a
Add node url argument (for localhost)
Jan 23, 2026
612d9d6
Only max attempts to retry
Jan 23, 2026
cc176a7
Add failed seeds log
Jan 23, 2026
df9711f
Use only 90% of available threads for execution
Jan 23, 2026
81a3d41
add a time delay after each tx
Jan 23, 2026
f1ea49a
Save list of failed seeds
Jan 23, 2026
fef45f3
print 0 if empty
Jan 24, 2026
a2de4a7
Keep failed seeds log for register and fund
Jan 26, 2026
2340ca9
Fetch chain first, then generate txs
Jan 26, 2026
c62c20e
Randomize sleep time for send batch txs
Jan 26, 2026
554ec71
Add indices vs start/end for get balances scripts
Jan 26, 2026
70572f9
ferdie vs localhost for default
Jan 26, 2026
f85cda2
benchmarking: add setup guide for scripts
ladamesny Jan 26, 2026
a2acf64
Print empty balances list
Jan 26, 2026
5bdf7f9
Merge branch 'performance-benchmarking' of https://github.com/input-o…
Jan 26, 2026
d1c4841
benchmarking: enhanced block size analysis script
ladamesny Jan 27, 2026
6 changes: 4 additions & 2 deletions .gitignore
@@ -46,5 +46,7 @@ dev/local-environment-dynamic/configurations/partner-chains-nodes/*

ogmios_client.log

# e2e tests (Python)
venv
**/venv/

# Benchmark reports
e2e-tests/utils/benchmarks/block_size_benchmarks/block_size_analysis/block_size_analysis_*/
23 changes: 21 additions & 2 deletions e2e-tests/.gitignore
@@ -2,9 +2,28 @@
**/__pycache__/
.decrypted~*
*.decrypted
venv/
**/venv/
contractlog.json
reports
logs/
.vscode/settings.json
my_env/

# Toolkit database
toolkit.db

# Mempool benchmark results
utils/mempool_benchmarks/results/

# Block size benchmark logs
utils/block_size_benchmarks/logs/
utils/benchmarks/block_size_benchmarks/block_size_analysis/block_size_analysis_*

# Jupyter notebook
utils/benchmarks/jupyter/.ipynb_checkpoints
utils/benchmarks/jupyter/*.png
utils/benchmarks/jupyter/*.html
utils/benchmarks/jupyter/*.pdf



120 changes: 120 additions & 0 deletions e2e-tests/utils/benchmarks/README.md
@@ -0,0 +1,120 @@
# Partner Chains Benchmarking Tools

This directory contains tools for benchmarking and analyzing Partner Chains node performance.

## Directory Structure

```
benchmarks/
├── README.md # This file
├── download_logs.py # Shared utility for downloading logs from Grafana/Loki
├── block_size_benchmarks/ # Block propagation and size benchmarking
├── mempool_benchmarks/     # Mempool benchmarking
└── utils/ # Shared transaction utilities
```

## Overview

### download_logs.py

Shared utility script for downloading logs from Grafana/Loki. Used by all benchmark scripts.

**Usage:**
```bash
python3 download_logs.py \
--config ../../secrets/substrate/performance/performance.json \
--from-time "2026-01-07T10:00:00Z" \
--to-time "2026-01-07T10:10:00Z" \
--node alice --node bob
```

Logs are downloaded to `benchmarks/logs/from_YYYY-MM-DD_HH-MM-SS_to_YYYY-MM-DD_HH-MM-SS/`
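
For orientation, here is a minimal sketch of how that directory name can be derived from the `--from-time`/`--to-time` arguments (the helper name `log_dir_for_range` is hypothetical, not part of the script):

```python
from datetime import datetime
from pathlib import Path

def log_dir_for_range(from_time: str, to_time: str) -> Path:
    """Build the timestamped log directory name described above.

    Sketch only: assumes ISO-8601 inputs such as "2026-01-07T10:00:00Z".
    """
    fmt = "%Y-%m-%d_%H-%M-%S"
    start = datetime.fromisoformat(from_time.replace("Z", "+00:00"))
    end = datetime.fromisoformat(to_time.replace("Z", "+00:00"))
    return Path("benchmarks/logs") / f"from_{start.strftime(fmt)}_to_{end.strftime(fmt)}"

# benchmarks/logs/from_2026-01-07_10-00-00_to_2026-01-07_10-10-00
print(log_dir_for_range("2026-01-07T10:00:00Z", "2026-01-07T10:10:00Z"))
```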

### block_size_benchmarks/

Tools for measuring block propagation times and analyzing block creation performance across the network.

Key files:
- `run_benchmark.py` - Main automated workflow script
- `extractor.py` - Extract propagation data from logs
- `analyzer.py` - Generate statistics and analysis
- `README.md` - Detailed usage instructions

See [block_size_benchmarks/README.md](block_size_benchmarks/README.md) for details.

### mempool_benchmarks/

Tools for analyzing mempool metrics, including ready/future transactions, validation rates, and admission rates.

Key files:
- `run_mempool_benchmark.py` - Main automated workflow script
- `extractor.py` - Extract mempool metrics from logs
- `analyzer.py` - Generate statistics and graphs
- `README.md` - Detailed usage instructions

See [mempool_benchmarks/README.md](mempool_benchmarks/README.md) for details.

### utils/

Shared transaction utilities for creating, funding, and submitting transactions during benchmark runs.

Key scripts:
- `fund_wallets.py` - Fund test wallets with tokens
- `register_dust.py` - Register dust addresses
- `generate_txs_round_robin.py` - Generate round-robin transactions
- `send_batch_txs.py` - Submit transactions in batches
- `send_txs_round_robin.py` - Submit round-robin transactions
- `tx-counter.py` - Count validated transactions in logs

These utilities are used by benchmark scripts to set up test scenarios and analyze results.
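
As a rough illustration of the round-robin idea behind `generate_txs_round_robin.py`, each wallet sends to the next wallet in the list, wrapping around (a sketch only; the real script builds and signs actual extrinsics):

```python
from typing import List, Tuple

def round_robin_pairs(wallets: List[str]) -> List[Tuple[str, str]]:
    """Pair each wallet with its successor, wrapping at the end, so every
    wallet both sends and receives exactly once per round (sketch)."""
    return [(wallets[i], wallets[(i + 1) % len(wallets)])
            for i in range(len(wallets))]

# [('alice', 'bob'), ('bob', 'charlie'), ('charlie', 'alice')]
print(round_robin_pairs(["alice", "bob", "charlie"]))
```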

## Prerequisites

1. Install Python 3 and required packages:
```bash
pip install pandas requests matplotlib
```

2. Install `sops` for encrypted config files:
```bash
brew install sops
```

3. Set up Grafana access (see block_size_benchmarks/README.md for details)

## Quick Start

### Block Size Benchmarking
```bash
cd block_size_benchmarks
python3 run_benchmark.py \
--config ../../../secrets/substrate/performance/performance.json \
--from-time "2026-01-07T10:00:00Z" \
--to-time "2026-01-07T10:10:00Z" \
--node alice --node bob --node charlie
```

### Mempool Benchmarking
```bash
cd mempool_benchmarks
python3 run_mempool_benchmark.py \
--config ../../../secrets/substrate/performance/performance.json \
--from-time "2026-01-08T10:00:00Z" \
--to-time "2026-01-08T10:10:00Z" \
--window 1000
```

## Output

All benchmark results are saved in timestamped directories under `benchmarks/logs/` with:
- Downloaded node logs
- Extracted metrics and reports
- Statistical analysis
- Generated graphs (PNG)

## Notes

- Default node list includes 20 nodes: alice, bob, charlie, dave, eve, ferdie, george, henry, iris, jack, kate, leo, mike, nina, oliver, paul, quinn, rita, sam, tom
- Log files are automatically sorted by timestamp
- Existing log files are not re-downloaded (see the sketch after this list)
- All scripts support encrypted config files via sops
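
The re-download note can be illustrated with a minimal sketch of the skip-if-exists check (assumed behavior; `download` stands in for the real Grafana/Loki query):

```python
from pathlib import Path
from typing import Callable

def fetch_node_log(node: str, out_dir: Path,
                   download: Callable[[str], str]) -> Path:
    """Download a node's log only if it is not already on disk (sketch)."""
    target = out_dir / f"{node}.txt"
    if target.exists():
        print(f"{target} already exists; skipping download")
        return target
    out_dir.mkdir(parents=True, exist_ok=True)
    target.write_text(download(node))
    return target
```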
@@ -37,14 +37,26 @@ To download logs from Grafana, you need a service account token:
# Save and exit (sops will automatically re-encrypt)
```

## Transaction Utilities

Transaction creation and submission utilities have been moved to `../utils/` for shared use across benchmarks:
- `fund_wallets.py` - Fund test wallets
- `register_dust.py` - Register dust addresses
- `generate_txs_round_robin.py` - Generate round-robin transactions
- `send_batch_txs.py` - Submit transaction batches
- `send_txs_round_robin.py` - Submit round-robin transactions
- `tx-counter.py` - Count validated transactions in logs

Refer to individual script files for usage instructions.
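
For flavor, a minimal sketch of the batch-submission pattern with a randomized pause between batches (inferred from the commit history; the actual flags and defaults of `send_batch_txs.py` may differ):

```python
import random
import time
from typing import Callable, List

def send_in_batches(txs: List[str], submit: Callable[[str], None],
                    batch_size: int = 100,
                    min_delay: float = 0.1, max_delay: float = 0.5) -> None:
    """Submit transactions in fixed-size batches, sleeping a random
    interval between batches to avoid lockstep bursts (sketch)."""
    for i in range(0, len(txs), batch_size):
        for tx in txs[i:i + batch_size]:
            submit(tx)  # e.g. an RPC call that submits one signed tx
        time.sleep(random.uniform(min_delay, max_delay))
```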

### Running the Benchmark

The `run_benchmark.py` script automates the entire workflow: downloading logs, extracting data, and generating analysis.

**Using encrypted config file (recommended):**
```bash
python3 run_benchmark.py \
--config ../../secrets/substrate/performance/performance.json \
--config ../../../secrets/substrate/performance/performance.json \
--from-time "2026-01-07T10:00:00Z" \
--to-time "2026-01-07T10:10:00Z" \
--node alice --node bob --node charlie
@@ -30,6 +30,7 @@ def __init__(self, nodes: List[str]):
raise ValueError("At least one node must be specified")
self.all_nodes = [node.lower() for node in nodes]
self.blocks: List[Block] = []
self.active_nodes: List[str] = [] # Will be populated after parsing

def parse_file(self, filename: str) -> None:
try:
@@ -60,6 +61,9 @@ def _parse_content(self, content: str) -> None:
current_block.add_import(node, delay)
elif 'Creator unknown' in line and current_block:
current_block.creator = 'unknown'

# Detect which nodes are actually active
self._detect_active_nodes()

def _parse_block_header(self, line: str) -> Optional[Block]:
block_match = re.search(r'Block #(\d+)', line)
@@ -87,10 +91,29 @@ def _parse_import(self, line: str) -> Tuple[Optional[str], float]:
delay = float(delay_str) if delay_str else 0.0
return node, delay
return None, 0.0

def _detect_active_nodes(self) -> None:
"""Detect which nodes are actually active based on parsed data."""
active_set = set()
for block in self.blocks:
if block.creator and block.creator != 'unknown':
active_set.add(block.creator)
active_set.update(block.imports.keys())

# Keep only nodes from all_nodes that are actually active
self.active_nodes = [node for node in self.all_nodes if node in active_set]

if not self.active_nodes:
self.active_nodes = self.all_nodes

inactive_nodes = set(self.all_nodes) - active_set
if inactive_nodes:
print(f"Note: The following nodes appear to be offline: {', '.join(sorted(inactive_nodes))}")

def get_complete_blocks(self) -> List[Block]:
"""Get blocks that have data from all active nodes."""
return [block for block in self.blocks
if block.is_complete(self.all_nodes)]
if block.is_complete(self.active_nodes)]

def _format_table_row(self, values: List[str], widths: List[int]) -> str:
formatted_values = []
@@ -107,7 +130,7 @@ def generate_summary_statistics(self, complete_blocks: List[Block]) -> str:
lines.append("")

stats = {}
for node in self.all_nodes:
for node in self.active_nodes:
blocks_created = len([block for block in complete_blocks if block.creator == node])

import_times = [
@@ -131,7 +154,7 @@
lines.append(header)
lines.append(separator)

for node in self.all_nodes:
for node in self.active_nodes:
s = stats[node]
row = (f"| {node.capitalize():<7} | {s['blocks_created']:<14} | "
f"{s['blocks_imported']:<15} | {s['min_import']:<15.0f} | "
@@ -146,16 +169,17 @@ def run(self, input_filename: str, output_filename: str) -> None:
print(f"Parsing file: {input_filename}")
self.parse_file(input_filename)
print(f"Total blocks parsed: {len(self.blocks)}")
print(f"Active nodes detected: {', '.join(self.active_nodes)}")
complete_blocks = self.get_complete_blocks()
print(f"Complete blocks: {len(complete_blocks)}")
print(f"Complete blocks (with all active nodes): {len(complete_blocks)}")
if not complete_blocks:
print("No complete blocks found. Exiting.")
sys.exit(1)
stats_table = self.generate_summary_statistics(complete_blocks)
try:
with open(output_filename, 'w', encoding='utf-8') as file:
file.write("# Block Propagation Analysis\n\n")
nodes = ', '.join(node.capitalize() for node in self.all_nodes)
nodes = ', '.join(node.capitalize() for node in self.active_nodes)
file.write(f"**Nodes analyzed:** {nodes}")
file.write("\n\n")
file.write(stats_table)
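For orientation, the diff above implies a `Block` record shaped roughly like the following (inferred from usage; the actual class is defined elsewhere in the file and may differ):

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Block:
    """Inferred shape of the Block record used by the analyzer (sketch)."""
    number: int
    creator: Optional[str] = None  # node that authored the block
    imports: Dict[str, float] = field(default_factory=dict)  # node -> import delay

    def add_import(self, node: str, delay: float) -> None:
        self.imports[node] = delay

    def is_complete(self, nodes: List[str]) -> bool:
        # Complete when every listed node either created the block
        # or recorded an import delay for it.
        return all(node == self.creator or node in self.imports
                   for node in nodes)
```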
@@ -0,0 +1,112 @@
# Quick Reference: Time-Based Block Size Analysis

## TL;DR

Analyze block sizes for a specific time period with a single command:

```bash
python3 analyze_block_sizes.py \
--url ws://your-node:9944 \
--time-range '{"from":"2026-01-26 16:20:00","to":"2026-01-26 16:40:00"}'
```

The script automatically:
- Finds existing logs or downloads them if needed
- Extracts block numbers from the time range
- Fetches block data and generates visualizations

## Common Commands

### Use specific node
```bash
python3 analyze_block_sizes.py \
--url ws://your-node:9944 \
--time-range '{"from":"2026-01-26 16:20:00","to":"2026-01-26 16:40:00"}' \
--node alice
```

### Use separate time parameters
```bash
python3 analyze_block_sizes.py \
--url ws://your-node:9944 \
--from-time "2026-01-26 16:20:00" \
--to-time "2026-01-26 16:40:00"
```

### Specify custom log directory
```bash
python3 analyze_block_sizes.py \
--url ws://your-node:9944 \
--log-dir /path/to/custom/logs \
--time-range '{"from":"2026-01-26 16:20:00","to":"2026-01-26 16:40:00"}'
```

### Extract block range only (preview)
```bash
python3 extract_block_range_from_logs.py \
--time-range '{"from":"2026-01-26 16:20:00","to":"2026-01-26 16:40:00"}'
```

### Get JSON output (for scripting)
```bash
python3 extract_block_range_from_logs.py \
--time-range '{"from":"2026-01-26 16:20:00","to":"2026-01-26 16:40:00"}' \
--json
```
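
For scripting, the JSON output can be consumed directly from another process; a sketch (the shape of the returned object is not assumed here):

```python
import json
import subprocess

result = subprocess.run(
    ["python3", "extract_block_range_from_logs.py",
     "--time-range", '{"from":"2026-01-26 16:20:00","to":"2026-01-26 16:40:00"}',
     "--json"],
    capture_output=True, text=True, check=True)
block_range = json.loads(result.stdout)
print(block_range)
```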

## Time Format Notes

- `"2026-01-26 16:20:00"` → Treated as **EST**, converted to UTC
- `"2026-01-26T16:20:00Z"` → Treated as **UTC**
- `"2026-01-26T11:20:00-05:00"` → Explicit **EST** timezone

## Files

- **analyze_block_sizes.py** - Main analysis script (updated with time-based support)
- **extract_block_range_from_logs.py** - Extract block range from logs (new)
- **example_time_based_analysis.sh** - Example workflow script (new)
- **TIME_BASED_ANALYSIS.md** - Detailed documentation (new)
- **README_BLOCKSIZE.md** - General documentation (updated)

## Example Output

```
Extracting block range from logs...
------------------------------------------------------------
Converted EST time '2026-01-26T11:01:00' to UTC: 2026-01-26 16:01:00
Converted EST time '2026-01-26T11:05:41' to UTC: 2026-01-26 16:05:41
Scanning 20 log file(s)...
alice.txt: found 37 blocks
bob.txt: found 36 blocks
...
Found block range: 41973 to 42009

STEP 1: Fetching block size data...
------------------------------------------------------------
Connecting to node at ws://127.0.0.1:9944...
Connected successfully!
Fetching blocks 41973 to 42009 (37 blocks)...
Progress: 100.0% - Block #42009: 12.34 KB (5 extrinsics)

STEP 2: Generating visualizations...
------------------------------------------------------------
...

ANALYSIS COMPLETE
All outputs saved to: block_size_analysis_41973_to_42009/
```

## Troubleshooting

| Issue | Solution |
|-------|----------|
| "No blocks found" | Check time range, try widening it |
| Wrong timezone | Times without TZ are EST by default |
| Can't find logs | Verify `--log-dir` path is correct |
| Missing download_logs.py | Navigate to `../../` directory |

## See Also

- `TIME_BASED_ANALYSIS.md` - Full documentation
- `README_BLOCKSIZE.md` - General block size analysis docs
- `./example_time_based_analysis.sh` - Complete workflow example