153 changes: 57 additions & 96 deletions VisionPilot/Production_Releases/0.9/README.md
@@ -1,116 +1,77 @@
# VisionPilot 0.9 - L2+ Highway Pilot Production Release
# VisionPilot 0.9 – Lateral + Longitudinal Release

This release enables autonomous steering, using the EgoLanes and AutoSteer neural networks to detect lane lines, determine the steering angle, and navigate roads at a predetermined, desired speed.
This release runs **lateral control** (EgoLanes + AutoSteer + PID) and **longitudinal tracking** (AutoSpeed + ObjectFinder + SpeedPlanner + longitudinal PID) in parallel, and publishes all outputs via POSIX shared memory for external consumers.

This includes autonomous lane keeping with cruise control.
## 1. Build

## C++ Inference Pipeline

Multi-threaded lane detection inference system with ONNX Runtime backend.

### Quick Start

[Download](https://github.com/microsoft/onnxruntime/releases) ONNX Runtime for the appropriate CUDA version and OS.

**Set ONNX Runtime path**

Unpack the ONNX Runtime archive and set `ONNXRUNTIME_ROOT` to point to that directory, for example:
From `Production_Releases/0.9`:

```bash
export ONNXRUNTIME_ROOT=/path/to/onnxruntime-linux-x64-gpu-1.22.0
```

_Note_: For Jetson AGX, download the appropriate ONNX Runtime build from [Jetson Zoo](https://elinux.org/Jetson_Zoo#ONNX_Runtime).

**Build**

[Download](https://github.com/autowarefoundation/autoware.privately-owned-vehicles.git) VisionPilot source code.
Navigate to the `VisionPilot/Production_Releases/0.5` subdirectory, which looks like:

```
0.5/
├── src/
│   ├── inference/              # Pure inference backend (no visualization)
│   │   ├── onnxruntime_session.cpp/hpp
│   │   ├── onnxruntime_engine.cpp/hpp
│   │   └── README.md
│   └── visualization/          # Visualization module (separate)
│       └── draw_lanes.cpp/hpp
├── scripts/                    # Python utilities
├── main.cpp                    # Multi-threaded pipeline
├── CMakeLists.txt              # Build configuration
└── run.sh                      # Runner script
mkdir -p build
cd build
cmake .. # ONNX Runtime + TensorRT (uses $ONNXRUNTIME_ROOT)
make -j$(nproc)
cd ..
```

and create `build` subdirectory:
Ensure:
- `ONNXRUNTIME_ROOT` points to your ONNX Runtime GPU install.

- TensorRT/CUDA are installed.

## 2. Configure (`visionpilot.conf`)

Edit `visionpilot.conf` in this directory:

- **Mode & source**
  - `mode=video` or `mode=camera`
  - `source.video.path=/path/to/video.mp4`
- **Models**
  - `models.egolanes.path=.../Egolanes_fp32.onnx`
  - `models.autosteer.path=.../AutoSteer_FP32.onnx`
  - `models.autospeed.path=.../AutoSpeed_n.onnx`
  - `models.homography_yaml.path=.../homography_2.yaml`
- **Timing**
  - `pipeline.target_fps=10.0`
- **Lateral PID**
  - `steering_control.Kp/Ki/Kd/Ks`
- **Longitudinal**
  - `longitudinal.autospeed.conf_thresh`
  - `longitudinal.autospeed.iou_thresh`
  - `longitudinal.ego_speed_default_ms` (used when CAN is disabled/invalid)
  - `longitudinal.pid.Kp/Ki/Kd`
- **CAN**
  - `can_interface.enabled=true/false`
  - `can_interface.interface_name=can0`
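Taken together, the keys above form a minimal `visionpilot.conf`. The sketch below uses placeholder paths; the numeric values mirror the defaults shipped in `VisionPilot.conf.example`:

```
mode=video
source.video.path=/path/to/video.mp4

models.egolanes.path=/path/to/Egolanes_fp32.onnx
models.autosteer.path=/path/to/AutoSteer_FP32.onnx
models.autospeed.path=/path/to/AutoSpeed_n.onnx
models.homography_yaml.path=/path/to/homography_2.yaml

pipeline.target_fps=10.0

longitudinal.autospeed.conf_thresh=0.5
longitudinal.autospeed.iou_thresh=0.5
longitudinal.ego_speed_default_ms=10.0
longitudinal.pid.Kp=0.5
longitudinal.pid.Ki=0.1
longitudinal.pid.Kd=0.05

can_interface.enabled=false
can_interface.interface_name=can0
```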

## 3. Run

```bash
mkdir -p build && cd build
./run_final.sh # uses /usr/share/visionpilot/visionpilot.conf if present
./run_final.sh ./visionpilot.conf # explicit config path
```

**Build Options**

The pipeline supports two inference backends:
You should see:
- EgoLanes + AutoSteer lateral pipeline initialization
- AutoSpeed + ObjectFinder longitudinal initialization
- “Lateral and Longitudinal pipelines running in PARALLEL…”

1. **ONNX Runtime (default)**: Uses ONNX Runtime with TensorRT execution provider
```bash
cmake -DSKIP_ORT=OFF ../
make -j$(nproc)
```
Requires: `ONNXRUNTIME_ROOT` environment variable set
## 4. Shared Memory Outputs

2. **TensorRT Direct (SKIP_ORT=ON)**: Uses TensorRT directly, bypassing ONNX Runtime
```bash
cmake -DSKIP_ORT=ON ../
make -j$(nproc)
```
Requires: CUDA and TensorRT installed (searches common locations or set `TENSORRT_ROOT`)

**Use this option when:**
- Building on Jetson where ONNX Runtime GPU builds are problematic
- You want to avoid ONNX Runtime dependency
- You only need TensorRT inference
The process publishes a single shared-memory segment with all outputs:

**Default Build (ONNX Runtime)**
- Name: `/visionpilot_state`

- Struct: `VisionPilotState` (see `include/publisher/visionpilot_shared_state.hpp`)
- Lateral: steering angles, PathFinder CTE/yaw/curvature, lane departure flag
- Longitudinal: CIPO distance/velocity, RSS safe distance, ideal speed, FCW/AEB flags, longitudinal control effort
- CAN/ego: speed, steering angle, validity

```bash
cmake ../
make -j$(nproc)
cd ..
```
### Quick test reader

**Configure and Run**
From `0.9`:

```bash
# Edit run.sh to set paths and options
./run.sh
./tools/shm_reader # live view while visionpilot is running
./tools/shm_reader --once # single snapshot
```

### Configuration (run.sh)

- `VIDEO_PATH`: Input video file
- `MODEL_PATH`: ONNX model (.onnx)
- `PROVIDER`: cpu or tensorrt (ignored when `SKIP_ORT=ON`, always uses TensorRT)
- `PRECISION`: fp32 or fp16 (TensorRT only)
- `DEVICE_ID`: GPU device ID
- `CACHE_DIR`: TensorRT engine cache directory
- `THRESHOLD`: Segmentation threshold (default: 0.0)
- `MEASURE_LATENCY`: Enable performance metrics
- `ENABLE_VIZ`: Enable visualization window
- `SAVE_VIDEO`: Save annotated output video
- `OUTPUT_VIDEO`: Output video path

**Note**: When building with `SKIP_ORT=ON`, the `PROVIDER` argument is ignored and TensorRT is always used directly.

### Performance

- **CPU**: 20-40ms per frame
- **TensorRT FP16**: 2-5ms per frame (200-500 FPS capable)

### Model Output

3-channel lane segmentation (320x640):
- Channel 0: Ego left lane (blue)
- Channel 1: Ego right lane (magenta)
- Channel 2: Other lanes (green)
If VisionPilot is running correctly you will see frame IDs increasing and CIPO / steering values updating. When VisionPilot stops, `shm_reader` will show the last published frame until the segment is unlinked.
21 changes: 21 additions & 0 deletions VisionPilot/Production_Releases/0.9/VisionPilot.conf.example
@@ -38,6 +38,12 @@ models.autosteer.path=/path/to/AutoSteer_FP32.onnx
models.autospeed.path=/path/to/AutoSpeed.onnx
models.homography_yaml.path = /path/to/homography_yaml.yaml

# ============================================
# Pipeline Timing
# ============================================
# Target processing rate for capture thread (Hz)
pipeline.target_fps=10.0

# ============================================
# Steering Control Parameters
# ============================================
@@ -51,6 +57,21 @@ steering_control.Ki=0.01
steering_control.Kd=-0.40
steering_control.Ks=-0.3

# ============================================
# Longitudinal Control (AutoSpeed + ObjectFinder + SpeedPlanner + PID)
# ============================================
# AutoSpeed detection thresholds
longitudinal.autospeed.conf_thresh=0.5
longitudinal.autospeed.iou_thresh=0.5

# Fallback ego speed (m/s) when CAN is disabled or invalid
longitudinal.ego_speed_default_ms=10.0

# Longitudinal PID gains for ideal_speed tracking
longitudinal.pid.Kp=0.5
longitudinal.pid.Ki=0.1
longitudinal.pid.Kd=0.05

# ============================================
# Output Configuration
# ============================================
@@ -50,6 +50,19 @@ struct Config {
bool enabled;
std::string interface_name;
} can_interface;

// Longitudinal & pipeline tuning
struct {
float autospeed_conf_thresh; // Detection confidence threshold
float autospeed_iou_thresh; // NMS IoU threshold
double ego_speed_default_ms; // Fallback ego speed when CAN is unavailable
double pid_Kp; // Longitudinal PID gains
double pid_Ki;
double pid_Kd;
} longitudinal;

// Global capture frame rate (Hz)
double capture_fps;
};

class ConfigReader {
22 changes: 11 additions & 11 deletions VisionPilot/Production_Releases/0.9/main.cpp
@@ -1769,8 +1769,8 @@ int main(int argc, char** argv)
// Load longitudinal config (using same provider/precision as lateral for now)
std::string autospeed_model_path = config.models.autospeed_path;
std::string homography_yaml_path = config.models.homography_yaml_path;
float autospeed_conf_thresh = 0.5f; // TODO: Add to config
float autospeed_iou_thresh = 0.5f; // TODO: Add to config
float autospeed_conf_thresh = config.longitudinal.autospeed_conf_thresh;
float autospeed_iou_thresh = config.longitudinal.autospeed_iou_thresh;

std::cout << "\n========================================" << std::endl;
std::cout << "LONGITUDINAL PIPELINE INITIALIZATION" << std::endl;
@@ -1891,7 +1891,8 @@
// Launch threads - single capture broadcasts to both pipelines via double buffer
std::thread t_capture(captureThread, source, is_camera,
std::ref(shared_frame_buffer),
std::ref(metrics), std::ref(running), can_interface.get(), 10.0); // 10 FPS
std::ref(metrics), std::ref(running),
can_interface.get(), config.capture_fps);

// Lateral pipeline (reads from shared buffer)
std::thread t_lateral_inference(lateralInferenceThread, std::ref(engine),
@@ -1902,14 +1903,13 @@
autosteer_engine.get());

// Longitudinal pipeline (reads from shared buffer, parallel execution)
// TODO: replace 10.0 with can_interface->getState().speed_kmph (convert to m/s)
// once CAN bus speed is validated.
constexpr double kStaticEgoSpeedMs = 10.0;
// Longitudinal PID controller gains
constexpr double kLongitudinalKp = 0.5; // Proportional gain
constexpr double kLongitudinalKi = 0.1; // Integral gain
constexpr double kLongitudinalKd = 0.05; // Derivative gain

// NOTE: ego_speed_default_ms is only used when CAN speed is unavailable.
const double kStaticEgoSpeedMs = config.longitudinal.ego_speed_default_ms;
// Longitudinal PID controller gains (configurable)
const double kLongitudinalKp = config.longitudinal.pid_Kp;
const double kLongitudinalKi = config.longitudinal.pid_Ki;
const double kLongitudinalKd = config.longitudinal.pid_Kd;

std::thread t_longitudinal_inference(longitudinalInferenceThread,
std::ref(*autospeed_engine),
std::ref(*object_finder),
33 changes: 32 additions & 1 deletion VisionPilot/Production_Releases/0.9/src/config/config_reader.cpp
@@ -53,7 +53,38 @@ Config ConfigReader::loadFromFile(const std::string& config_path) {

config.can_interface.enabled = parseBool(props["can_interface.enabled"]);
config.can_interface.interface_name = props["can_interface.interface_name"];


// Longitudinal & pipeline tuning (with sensible defaults if keys are missing)
config.longitudinal.autospeed_conf_thresh =
props.find("longitudinal.autospeed.conf_thresh") != props.end()
? parseFloat(props["longitudinal.autospeed.conf_thresh"])
: 0.5f;
config.longitudinal.autospeed_iou_thresh =
props.find("longitudinal.autospeed.iou_thresh") != props.end()
? parseFloat(props["longitudinal.autospeed.iou_thresh"])
: 0.5f;
config.longitudinal.ego_speed_default_ms =
props.find("longitudinal.ego_speed_default_ms") != props.end()
? parseDouble(props["longitudinal.ego_speed_default_ms"])
: 10.0;
config.longitudinal.pid_Kp =
props.find("longitudinal.pid.Kp") != props.end()
? parseDouble(props["longitudinal.pid.Kp"])
: 0.5;
config.longitudinal.pid_Ki =
props.find("longitudinal.pid.Ki") != props.end()
? parseDouble(props["longitudinal.pid.Ki"])
: 0.1;
config.longitudinal.pid_Kd =
props.find("longitudinal.pid.Kd") != props.end()
? parseDouble(props["longitudinal.pid.Kd"])
: 0.05;

config.capture_fps =
props.find("pipeline.target_fps") != props.end()
? parseDouble(props["pipeline.target_fps"])
: 10.0;

return config;
}
