Description: OpenV2X is a modular cyber-physical framework designed for real-time environmental perception and interactive feedback in software-defined vehicles. The system integrates vision-based perception with V2X communication capabilities to enable comprehensive scene reconstruction and human-in-the-loop feedback mechanisms.
This framework addresses critical challenges in autonomous driving by providing:
- Modular architecture: Independent, swappable components for object detection, lane detection, and communication
- Real-time performance: Optimized for embedded edge deployment with minimal latency
- Dual lane detection approaches: Support for both classical computer vision pipelines and deep learning models (UFLD)
- V2X communication: MQTT-based vehicle-to-everything messaging for cooperative perception
- Interactive feedback system: User interface for anomaly reporting and continuous system improvement
The complete methodology and validation results are detailed in the accompanying IEEE paper (included in this repository).
- Object Detection Module: YOLOv5-based vehicle detection with integrated orientation classification (frontal, rear, lateral)
- Lane Detection Module: Dual implementation supporting both classical vision pipeline and Ultra Fast Lane Detection (UFLD)
- V2X Communication Layer: MQTT-enabled bidirectional messaging for infrastructure integration
- Environment Reconstruction: Real-time semantic scene representation combining perception and V2X data
- Anomaly Reporting Interface: GUI for driver feedback collection to improve perception models post-deployment
- Energy Profiling: CodeCarbon integration for sustainability assessment
- Embedded-Ready: Validated on Raspberry Pi for edge deployment scenarios
The framework operates in three functional layers:
- Perception Module: Processes monocular RGB camera input for object and lane detection
- Communication Module: Handles V2X message publishing/subscribing via MQTT broker
- Environment Reconstruction Module: Fuses perception and V2X data for semantic scene representation with user feedback capabilities
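The Communication Module's MQTT messaging can be pictured with a short sketch. The topic name and JSON schema below are illustrative assumptions, not the framework's actual wire format; publishing would typically use a client such as paho-mqtt.

```python
import json
import time

# Illustrative topic for cooperative-perception events (assumption,
# not the framework's real topic name).
V2X_TOPIC = "openv2x/events"

def build_v2x_event(event_type, lat, lon):
    """Serialize a cooperative-perception event as a JSON payload."""
    return json.dumps({
        "type": event_type,        # e.g. "hazard", "roadworks"
        "lat": lat,
        "lon": lon,
        "timestamp": time.time(),  # epoch seconds
    })

payload = build_v2x_event("hazard", 40.77, 14.79)

# Publishing would use an MQTT client such as paho-mqtt:
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client()
#   client.connect("broker.example.org", 1883)
#   client.publish(V2X_TOPIC, payload)
print(payload)
```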
The system uses pre-trained weights for vehicle orientation recognition based on the Vehicle Orientation Dataset.
The Ultra Fast Lane Detection (UFLD) approach requires pre-trained models for TuSimple and CULane datasets:
- Download tusimple_18.pth and place it in the models/ directory
- Download culane_18.pth and place it in the models/ directory

Note: Both models must be downloaded and placed in the models/ directory for the UFLD lane detection module to function properly.
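As a convenience, the check above can be automated before launching the framework. This is a minimal helper (not part of the framework itself) that reports which required UFLD weight files are missing from models/:

```python
from pathlib import Path
from typing import List

# File names match the downloads above; models/ is relative to the
# repository root.
REQUIRED_WEIGHTS = ("tusimple_18.pth", "culane_18.pth")

def missing_ufld_weights(models_dir: str = "models") -> List[str]:
    """Return the required UFLD weight files not found in models_dir."""
    root = Path(models_dir)
    return [name for name in REQUIRED_WEIGHTS if not (root / name).exists()]

missing = missing_ufld_weights()
if missing:
    print("Missing UFLD weights in models/: " + ", ".join(missing))
```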
The project uses the following libraries:
- PyTorch
- ultralytics (used for YOLO vehicle detection with the dataset)
- tkinter
- codecarbon
- matplotlib
- opencv-python
- numpy
Make sure you have the following software and libraries installed:
- Python 3.8.x
- Required Python packages (listed in requirements.txt)
- Clone the repository:
git clone https://github.com/self_driving_vision_and_reconstruction.git
cd self_driving_vision_and_reconstruction
pip install -r requirements.txt
Additionally, download the dataset and model weights:
- Download the dataset from this link.
- Download the file best.pt, rename it to yolov5_vehicle_oriented.pt, and place it in the yolo directory.

Then run the framework:

python Open_V2X_framework.py
An advanced system for 3D environment reconstruction for autonomous driving, similar to Tesla Vision.
The framework supports two lane detection approaches to demonstrate modularity:
- Classical Vision Pipeline (traditional computer vision with ROI, Sobel/HLS filters)
- Ultra Fast Lane Detection (UFLD) (deep learning-based approach)
To switch between methods, modify the pipeline_check variable in Open_V2X_framework.py:
pipeline_check = True # Use classical vision pipeline
pipeline_check = False # Use UFLD (deep learning)

By default, the system uses a YouTube video (line 213). To test different environmental conditions, modify line 228:
video_path = video_path[0] # YouTube video (default)
video_path = video_path[4] # Night conditions
video_path = video_path[5] # Rain conditions
video_path = video_path[6] # Daylight conditions

If using the classical vision pipeline (pipeline_check = True), you must specify the environmental condition matching your selected video:
img, lane, parameters = pipeline(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB),
mtx, dist, "Day") # Options: "Day", "Night", "Rain"

Ensure this parameter matches your video selection from step 2.
To add a custom video when using the pipeline approach:
- Camera Calibration: Perform calibration for your specific video source
- Configure ROI: In modules/Object_detection_module/lane_detection_pipeline.py:
  - Add your configuration to configurazione_base
  - Define the polygon for the Region of Interest (ROI) adapted to your video's perspective
- The pipeline requires manual adaptation of these parameters for each new video source
When using Ultra Fast Lane Detection (pipeline_check = False):
- No calibration required
- Simply select the appropriate video type as described in Configuration Options
- The deep learning model generalizes across different scenarios without manual tuning
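The pipeline_check toggle described above amounts to a simple dispatch between the two detectors. A simplified sketch of that control flow follows; classical_pipeline and ufld_detect are stubs standing in for the framework's real detectors, so the names and return values are illustrative only:

```python
# Stub for the classical vision pipeline (ROI + Sobel/HLS filters),
# which needs the environmental condition to pick its parameters.
def classical_pipeline(frame, condition):
    return "classical:" + condition

# Stub for Ultra Fast Lane Detection, which generalizes without
# per-video tuning.
def ufld_detect(frame):
    return "ufld"

def detect_lanes(frame, pipeline_check, condition="Day"):
    # Only the classical pipeline takes the condition argument.
    if pipeline_check:
        return classical_pipeline(frame, condition)
    return ufld_detect(frame)

print(detect_lanes(None, True, "Rain"))  # classical path
print(detect_lanes(None, False))         # UFLD path
```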
Based on validation results from the IEEE paper:
- Object Detection: 84.3% accuracy with orientation classification at 23 FPS
- Lane Detection (Pipeline): 0.87 IoU (daylight), 0.73 IoU (rain)
- Lane Detection (UFLD): 0.90 IoU (daylight), 0.60 IoU (rain)
- V2X Communication: <1ms average latency, zero message loss
- Energy Consumption: 0.002783 kWh per inference cycle
The framework provides real-time semantic reconstruction combining lane geometry, detected vehicles with orientation, distance estimation, and V2X event overlays.
The GUI allows users to report system anomalies by:
- Providing textual descriptions
- Capturing the current video frame
- Sending reports to OEMs for continuous model improvement
This work remains open to extensions and improvements, including:
- Integration of additional perception modules (traffic sign recognition, pedestrian detection)
- Enhanced V2X communication protocols (C-V2X, DSRC)
- Expanded anomaly reporting categories
- Additional sensor fusion capabilities
- ASIL-compliant safety-critical module development
Copyright 2024 LuigiPP
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
If you use OpenV2X in your research or projects, please cite the following paper:
Castiglione, A., Cimmino, L., Nappi, M., Sica, L. E.
OpenV2X: A Modular Cyber-Physical Framework for Vision-Driven Environmental Perception and Interactive Feedback in Software-Defined Vehicles
IEEE Transactions on Industrial Informatics, 2025.
DOI: https://doi.org/10.1109/TII.2025.3641528
@ARTICLE{11313331,
author={Castiglione, Aniello and Cimmino, Lucia and Nappi, Michele and Sica, Luigi Emanuele},
journal={IEEE Transactions on Industrial Informatics},
title={OpenV2X: A Modular Cyber-Physical Framework for Vision-Driven Environmental Perception and Interactive Feedback in Software-Defined Vehicles},
year={2025},
pages={1--11},
doi={10.1109/TII.2025.3641528}
}



