> *The user brings the algorithm. LAMBKIN handles the rest.*

LAMBKIN is a Python SDK for building SLAM evaluation pipelines that are reproducible and structured by design.
## Philosophy

Most benchmarking systems are built around a specific algorithm, dataset format, or middleware stack. Adapting them to a new setup means working around assumptions that were never designed to be removed. Reproducing a run means knowing which constants changed and when. Adding a new algorithm or dataset variant means touching plumbing that was never meant to be touched.

LAMBKIN separates the orchestration machinery from the benchmark definition. The algorithm runs as an external process — LAMBKIN does not need to know what is inside it. Parameter sweeps, process lifecycle, I/O, and metric collection are all handled by the SDK, so your script stays focused on the benchmark logic.
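The lifecycle described above — launch, supervise, terminate — can be sketched with the Python standard library alone. This is not LAMBKIN's API; the stand-in command and the 30-second timeout are illustrative assumptions:

```python
import subprocess
import sys

# Launch the algorithm under test as an external process; the
# orchestrator needs no knowledge of what runs inside it.
# The command below is a stand-in for a real SLAM binary.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('iteration complete')"],
    stdout=subprocess.PIPE,
    text=True,
)
try:
    # Supervise: wait for completion, bounded by a timeout.
    output, _ = proc.communicate(timeout=30)
except subprocess.TimeoutExpired:
    # Terminate a runaway process instead of hanging the benchmark.
    proc.kill()
    output, _ = proc.communicate()

print(output.strip())
```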
## Capabilities

| Feature | Description |
|---|---|
| **Parameter sweeps** | Declare combinations of algorithms, datasets, and parameters. LAMBKIN runs each combination as an independent iteration. |
| **Process lifecycle** | Launch, supervise, and terminate external processes automatically across benchmark iterations. |
| **Pipeline stages** | Structure your benchmark into ingestion, execution, and egression stages, each independently customizable. |
| **Context passing** | Carry configuration, paths, and state through the pipeline without coupling stages to each other. |
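The parameter-sweep row above can be illustrated with a small self-contained sketch of how a declaration expands into independent iterations. The dictionary keys and values below are hypothetical, not LAMBKIN's actual configuration schema:

```python
from itertools import product

# Hypothetical sweep declaration: algorithms, datasets, and a
# parameter to vary. These names are illustrative only.
sweep = {
    "algorithm": ["amcl", "beluga_amcl"],
    "dataset": ["hallway", "warehouse"],
    "particles": [500, 2000],
}

# Expand the declaration into one independent iteration per
# combination: 2 x 2 x 2 = 8 runs in this example.
iterations = [
    dict(zip(sweep, combo)) for combo in product(*sweep.values())
]

print(len(iterations))  # 8
print(iterations[0])    # {'algorithm': 'amcl', 'dataset': 'hallway', 'particles': 500}
```

Each iteration is a plain mapping, so it can be scheduled, logged, and written to its own output directory without any shared state.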
To understand how LAMBKIN works under the hood, see the [SDK documentation](src/lambkin/README.md).
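The staged-pipeline and context-passing rows in the table can be sketched as plain functions threaded over a mapping. Nothing here is LAMBKIN's real API; the stage names follow the table, and all paths and values are hypothetical:

```python
from pathlib import Path
from typing import Callable

# Context is a plain mapping threaded through the pipeline; stages
# read and write keys without referencing each other directly.
Context = dict
Stage = Callable[[Context], Context]

def ingest(ctx: Context) -> Context:
    # Ingestion: resolve inputs (this path is illustrative).
    ctx["dataset"] = Path("datasets/hallway.bag")
    return ctx

def execute(ctx: Context) -> Context:
    # Execution: stand-in for running the algorithm on the dataset.
    ctx["trajectory"] = f"estimated from {ctx['dataset'].name}"
    return ctx

def egress(ctx: Context) -> Context:
    # Egression: decide where results for this run would be written.
    ctx["output_dir"] = Path("results") / ctx["dataset"].stem
    return ctx

def run_pipeline(stages: list[Stage], ctx: Context) -> Context:
    for stage in stages:
        ctx = stage(ctx)
    return ctx

result = run_pipeline([ingest, execute, egress], {})
print(result["output_dir"].as_posix())  # results/hallway
```

Because stages communicate only through the context, any stage can be swapped or customized without touching the others.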
## Examples
The [`examples/`](examples/) directory contains ready-to-run setups, each packaging a specific system with its own Docker environment, ROS 2 package, and documentation. Each integration is self-contained and optional — the SDK works independently of any of them.

Current examples:

* [examples/beluga/](examples/beluga/README.md) — Beluga AMCL localization, with a worked benchmark script and Docker setup.