Open-source lane detection with two complementary approaches:
- A classical OpenCV pipeline (fast, lightweight).
- A deep-learning U-Net segmentation pipeline (accurate once trained).
Contents:

- Quick start
- How the OpenCV pipeline works
- How the U-Net pipeline works
- Repository layout
- Tips and troubleshooting
- License
- Authors
Sample outputs:

- OpenCV:
- U-Net:
## Quick start

Install dependencies:

```bash
git clone https://github.com/zimbakovtech/LaneDetectionCV.git
cd LaneDetectionCV
python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate
pip install -r requirements.txt
```

Run the OpenCV pipeline on all videos in `data/raw/` and write outputs to `results/`:

```bash
python src/opencv_pipeline/main.py
```

### U-Net pipeline (training/inference)
The U-Net code and scripts are under `src/u_net_pipeline/`. See the script help for exact arguments.

```bash
# Prepare dataset and visualize samples
python main.py prepare --fps 5 --img-width 512 --img-height 256

# Train U-Net model
python main.py train --epochs 10 --batch-size 4 --model-out models/best_model.h5

# Run U-Net inference
python main.py infer --input data/raw/road_video_5.mp4 --model models/best_model.h5 --output results/output_video.mp4
```

## How the OpenCV pipeline works

Entry points:

- `src/opencv_pipeline/detect.py` — per-frame lane detection and drawing.
- `src/opencv_pipeline/main.py` — batch process all videos in `data/raw/`.
Processing steps in `detect.py`:
1. Grayscale conversion — `cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)` produces a single-channel image that is less sensitive to color changes and faster to process.

2. Gaussian blur — `cv2.GaussianBlur(gray, (5, 5), 0)` reduces noise and small textures that create spurious edges.

3. Canny edge detection — `cv2.Canny(blur, 100, 200)` highlights strong intensity gradients (lane paint boundaries). Thresholds can be tuned for your camera/lighting.

4. Region of interest (ROI) — a polygon mask (a roadway triangle) focuses processing on the road area and ignores sky/hood.

5. Probabilistic Hough transform — `cv2.HoughLinesP(...)` extracts short line segments from the edge map. Its parameters (`rho`, `theta`, `threshold`, `minLineLength`, `maxLineGap`) control sensitivity and segment continuity.

6. Left/right separation and slope filtering — segments with |slope| < 0.5 are rejected (nearly horizontal). Negative slope → left, positive slope → right, in the camera reference frame.

7. Robust line fitting — a single line per side is fit with `np.polyfit(y, x, 1)` to combine many segments into one stable lane line. Drawing is intentionally limited to a vertical band of the image (`DRAW_Y_TOP_RATIO`, `DRAW_Y_BOTTOM_RATIO`) so lines don't extend too far off-road.

8. Handling dashed/broken lines (gap filling) — a small stateful imputer predicts missing lane endpoints over short gaps using recent history (linear extrapolation over time). This reduces blinking when dashed lines appear/disappear across frames.

9. Rendering — final lines are drawn in red on the original frame. Output can be previewed live and written to video.
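Steps 6–7 (slope filtering and per-side fitting) can be sketched in plain NumPy. This is an illustrative sketch, not the repository's actual code; the function name and return shape are assumptions:

```python
import numpy as np

def fit_lane_lines(segments, y_top, y_bottom):
    """Split Hough segments into left/right by slope, then fit one line
    per side. segments: iterable of (x1, y1, x2, y2). Returns a dict
    mapping 'left'/'right' to the fitted x at y_top and y_bottom
    (image y grows downward)."""
    sides = {"left": [], "right": []}
    for x1, y1, x2, y2 in segments:
        if x2 == x1:
            continue  # vertical segment: infinite slope, skipped for simplicity
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.5:
            continue  # reject nearly horizontal segments
        side = "left" if slope < 0 else "right"
        sides[side].extend([(x1, y1), (x2, y2)])

    fitted = {}
    for side, pts in sides.items():
        if len(pts) < 2:
            continue  # not enough evidence for this side in this frame
        xs = np.array([p[0] for p in pts], dtype=float)
        ys = np.array([p[1] for p in pts], dtype=float)
        # Fit x = m*y + b, so near-vertical lanes stay well conditioned
        m, b = np.polyfit(ys, xs, 1)
        fitted[side] = (int(round(m * y_top + b)), int(round(m * y_bottom + b)))
    return fitted
```

Fitting x as a function of y (rather than y of x) is the reason for `np.polyfit(y, x, 1)`: lane lines are close to vertical in image space, where a y-of-x fit would be ill-conditioned.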
Key files and helpers (may be used by the pipeline):

- `src/opencv_pipeline/detect.py` — main logic described above (ROI, Canny, Hough, fit, impute, draw).
- `src/opencv_pipeline/functions/region_of_interest.py` and `draw_lines.py` — modular ROI/drawing helpers.
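The stateful gap-filling imputer described in step 8 can be sketched as follows. Class and parameter names here are illustrative, not the repository's actual API:

```python
from collections import deque

class EndpointImputer:
    """Keep a short history of one lane endpoint and linearly
    extrapolate it for a few frames when detection drops out, so
    dashed lines don't blink on and off."""

    def __init__(self, history=5, max_gap=3):
        self.values = deque(maxlen=history)  # recent observed positions
        self.max_gap = max_gap               # max consecutive frames to fill
        self.missed = 0                      # consecutive missing frames

    def update(self, value):
        """Return the observed value, an extrapolated one, or None."""
        if value is not None:                # detection succeeded this frame
            self.values.append(float(value))
            self.missed = 0
            return value
        self.missed += 1
        if self.missed > self.max_gap or len(self.values) < 2:
            return None                      # gap too long, or no history yet
        # Linear extrapolation: last value plus the average per-frame delta
        delta = (self.values[-1] - self.values[0]) / (len(self.values) - 1)
        return self.values[-1] + delta * self.missed
```

One imputer per tracked endpoint (e.g., left/right, top/bottom x-coordinates) is enough; feed it `None` on frames where that side was not detected.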
Tuning parameters in `src/opencv_pipeline/detect.py`:

- Canny thresholds: increase for fewer edges/noise, decrease to pick up faint paint.
- Hough parameters: raise `threshold`/`minLineLength` to reduce false positives; increase `maxLineGap` to bridge gaps.
- `slope` filter (currently 0.5): raise to ignore more shallow segments; lower to accept flatter lanes.
- `DRAW_Y_TOP_RATIO`, `DRAW_Y_BOTTOM_RATIO`: control how long lines are drawn (shorter to avoid leaving the road).
- Missing imputer window/gap: widen if your dashed lines are longer and you need more persistence.
## How the U-Net pipeline works

The U-Net approach performs per-pixel segmentation to classify lane markings, then post-processes the mask to visualize lanes. Typical steps:
1. Data preparation — extract frames and, if available, masks (ground truth) from `data/raw/` into a training set.

2. Model — a U-Net model implemented in Keras/TensorFlow under `src/u_net_pipeline/` (see the `src`, `scripts`, and `models` subfolders).

3. Training — use the provided training script(s) under `src/u_net_pipeline/scripts/` to train on your dataset. Check each script's `-h` for exact args.

4. Inference — load a trained model (e.g., `src/u_net_pipeline/models/best_model.h5`) and run inference on a video to produce a lane mask and overlay.
Notes:
- Deep models can outperform classical methods on difficult lighting and worn markings, but require annotated data and GPU time to train.
- Start with the OpenCV pipeline for quick results, then move to U-Net if you need higher robustness.
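The overlay step of inference can be sketched in plain NumPy. This is a minimal sketch assuming the model outputs a per-pixel probability mask; the function name and defaults are illustrative, not the repository's actual API:

```python
import numpy as np

def overlay_mask(frame, mask, color=(0, 0, 255), alpha=0.4):
    """Blend a lane mask onto a BGR frame.

    frame: (H, W, 3) uint8 image.
    mask:  (H, W) array of per-pixel lane probabilities in [0, 1].
    color: BGR overlay color (red by default, matching the OpenCV pipeline).
    alpha: overlay opacity.
    """
    out = frame.copy()
    lane = mask > 0.5  # binarize the probability mask
    # Alpha-blend the overlay color into the lane pixels only
    out[lane] = ((1 - alpha) * out[lane] + alpha * np.array(color)).astype(np.uint8)
    return out
```

Running this per frame and writing the results with the input video's FPS reproduces the overlay videos in `results/`.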
## Repository layout

```text
RoadLaneDetection/
├── data/
│ ├── raw/ # Input videos
│ └── processed/ # (optional) prepared frames/masks
├── src/
│ ├── opencv_pipeline/
│ │ ├── detect.py # OpenCV lane detection (Canny + Hough + fit + impute)
│ │ ├── main.py # Batch process all videos in data/raw
│ │ └── functions/
│ │ ├── region_of_interest.py
│ │ ├── draw_lines.py
│ │ └── preprocess.py
│ └── u_net_pipeline/
│ ├── main.py # Prepare dataset, train model & infer video
│ ├── models/
│ │ └── best_model.h5 # Example trained model (if present)
│ ├── scripts/ # Training & inference helpers
│ └── src/ # Model and utils
├── results/ # Outputs (videos, models, logs)
├── requirements.txt
├── .gitignore
├── LICENSE
└── README.md
```
## Tips and troubleshooting

- If output writes but playback looks choppy, ensure the output FPS matches the input FPS (the scripts handle this automatically where possible).
- If lanes are noisy: increase the Canny thresholds and the Hough `threshold`, or narrow the ROI polygon.
- If dashed lines blink: increase the imputer window/gap slightly.
- For different camera FOVs/heights, adjust the ROI triangle.
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Authors

Prepared by Damjan Zimbakov & Efimija Cuneva, July 2025.



