Quick reference for running liblpm benchmarks locally and remotely.
- Local: CMake, GCC/Clang
- DPDK: Docker with `--privileged` access
- Remote: SSH key authentication
```shell
# Build and run all algorithms
./scripts/run_algorithm_benchmarks.sh

# Run specific algorithm
./scripts/run_algorithm_benchmarks.sh -a dir24

# Run only batch lookups on CPU core 2
./scripts/run_algorithm_benchmarks.sh -t batch -c 2
```

Results: `benchmarks/data/algorithm_comparison/<cpu>_<ip>_<type>/`
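Result directories follow the `<cpu>_<ip>_<type>` naming pattern; a minimal sketch of how such a path is assembled (the `my_cpu` value is a hypothetical placeholder):

```python
from pathlib import Path

# Hypothetical components; the scripts name result directories <cpu>_<ip>_<type>.
cpu, ip_ver, lookup = "my_cpu", "ipv4", "batch"
result_dir = Path("benchmarks/data/algorithm_comparison") / f"{cpu}_{ip_ver}_{lookup}"
print(result_dir)  # benchmarks/data/algorithm_comparison/my_cpu_ipv4_batch
```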
```shell
# Run locally
./scripts/run_dpdk_algorithm_scaling.sh

# Force rebuild Docker image
./scripts/run_dpdk_algorithm_scaling.sh --rebuild

# Custom output directory
./scripts/run_dpdk_algorithm_scaling.sh -o /tmp/results
```

Results include DPDK comparison data.
```shell
# Upload and run
./scripts/upload_and_benchmark.sh user@server.com

# With specific options
./scripts/upload_and_benchmark.sh user@server.com -a dir24 -t single
```

Automatically organizes results for CPU comparison.
```shell
# Upload pre-built Docker image and run
./scripts/run_dpdk_benchmark.sh root@192.168.0.13
```

Faster than building on the remote server.
```shell
./scripts/generate_algorithm_charts.sh
```

Compares different algorithms on the same CPU.
Output: `docs/images/<cpu>_<ip>_<type>.png`
```shell
./scripts/generate_cpu_comparison_charts.sh
```

Compares the same algorithm across different CPUs.
Output: `docs/images/<algo>_<ip>_<type>_cpu_comparison.png`
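The two generators encode their inputs in the output filename; a sketch of the naming conventions (the helper functions are illustrative, not part of the scripts):

```python
# Illustrative helpers mirroring the chart-naming conventions above.
def algorithm_chart(cpu: str, ip_ver: str, lookup: str) -> str:
    # One chart per CPU, comparing algorithms on that CPU
    return f"docs/images/{cpu}_{ip_ver}_{lookup}.png"

def cpu_comparison_chart(algo: str, ip_ver: str, lookup: str) -> str:
    # One chart per algorithm, comparing CPUs running it
    return f"docs/images/{algo}_{ip_ver}_{lookup}_cpu_comparison.png"

print(algorithm_chart("my_cpu", "ipv4", "single"))
print(cpu_comparison_chart("dir24", "ipv4", "single"))
```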
```shell
# Custom chart
python scripts/plot_lpm_benchmark.py \
  benchmarks/data/algorithm_comparison/my_cpu_ipv4_single/*.csv \
  --output docs/images/my_chart.png \
  --title "Custom Title"

# CPU comparison
python scripts/plot_lpm_benchmark.py \
  benchmarks/data/cpu_comparison/dir24_ipv4_single/*.csv \
  --output docs/images/dir24_cpu_comparison.png
```
```shell
# 1. Run benchmarks
./scripts/run_algorithm_benchmarks.sh

# 2. Generate charts
./scripts/generate_algorithm_charts.sh
```
```shell
# 1. Run on first machine
./scripts/run_algorithm_benchmarks.sh

# 2. Run on additional machines
./scripts/upload_and_benchmark.sh user@server1.com
./scripts/upload_and_benchmark.sh user@server2.com

# 3. Generate comparison charts
./scripts/generate_cpu_comparison_charts.sh
```
```shell
# 1. Local
./scripts/run_dpdk_algorithm_scaling.sh

# 2. Remote
./scripts/run_dpdk_benchmark.sh user@server.com

# 3. Organize CPU comparison data
python3 << 'EOF'
from pathlib import Path

algo_comp = Path("benchmarks/data/algorithm_comparison")
cpu_comp = Path("benchmarks/data/cpu_comparison")
cpu_comp.mkdir(exist_ok=True)

for cpu_dir in algo_comp.iterdir():
    if not cpu_dir.is_dir():
        continue
    parts = cpu_dir.name.rsplit("_", 2)
    if len(parts) == 3:
        cpu_name, ip_ver, lookup = parts
        for csv in cpu_dir.glob("*.csv"):
            target = cpu_comp / f"{csv.stem}_{ip_ver}_{lookup}"
            target.mkdir(exist_ok=True)
            (target / f"{cpu_name}.csv").write_bytes(csv.read_bytes())
EOF

# 4. Generate all charts
./scripts/generate_algorithm_charts.sh
./scripts/generate_cpu_comparison_charts.sh
```
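Step 3 relies on `rsplit("_", 2)` (splitting from the right) so that CPU names containing underscores stay intact; a minimal sketch with a hypothetical CPU name:

```python
# Parse a result-directory name of the form <cpu>_<ip>_<type>.
# The CPU name itself may contain underscores, hence rsplit from the right.
name = "amd_ryzen_9_ipv4_single"  # hypothetical directory name
cpu_name, ip_ver, lookup = name.rsplit("_", 2)
print(cpu_name)          # amd_ryzen_9
print(ip_ver, lookup)    # ipv4 single
```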
```
benchmarks/data/
├── algorithm_comparison/        # Per-CPU results
│   ├── <cpu>_ipv4_single/
│   │   ├── dir24.csv
│   │   ├── 4stride8.csv
│   │   └── dpdk.csv
│   └── <cpu>_ipv6_batch/...
│
└── cpu_comparison/              # Per-algorithm results
    ├── dir24_ipv4_single/
    │   ├── <cpu1>.csv
    │   └── <cpu2>.csv
    └── dpdk_ipv4_batch/...
```
```
docs/images/
├── <cpu>_ipv4_single.png                  # Algorithm comparison
└── dir24_ipv4_single_cpu_comparison.png   # CPU comparison
```
- Pin to CPU: use the `-c` flag to pin the benchmark to a specific core
- Static binary: `bench_algorithm_scaling` is statically linked for easy remote deployment
- DPDK Docker: pre-build the image locally and upload it (faster than building on the remote server)
- Batch runs: use loops to benchmark multiple remote servers
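The batch-runs tip can be sketched as command construction (the hosts are hypothetical; `subprocess.run(cmd, check=True)` would execute each command in turn):

```python
# Build one upload_and_benchmark.sh invocation per remote host.
hosts = ["user@server1.com", "user@server2.com"]  # hypothetical hosts
cmds = [["./scripts/upload_and_benchmark.sh", host] for host in hosts]
for cmd in cmds:
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run each benchmark
```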
"Benchmark binary not found"
mkdir -p build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make bench_algorithm_scaling"Docker privilege error"
- DPDK requires
--privilegedflag (automatically added by scripts)
"SSH connection failed"
ssh-copy-id user@server
ssh user@server 'echo OK'- Algorithm details: See
README.md - Docker setup: See
docs/DOCKER.md - CI/CD integration: See
docs/CICD.md