Autonomous vehicles require precise perception of their surroundings to navigate safely. A crucial component is 3D object detection, which utilizes sensors such as cameras to identify and locate nearby objects. Multi-modal data fusion, combining input from various sensors, addresses the limitations of single-sensor methods and thereby enhances detection accuracy. This paper reviews recent advancements in multi-modal fusion for 3D object detection, focusing on how different methods integrate sensor data to improve system reliability. Additionally, it evaluates the performance of the Autocare software on embedded platforms, assessing its capability for real-time autonomous driving.
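To make the idea of multi-modal fusion concrete, the sketch below shows one simple feature-level fusion scheme: per-object feature vectors from two sensor modalities are weighted and concatenated before being passed to a downstream detector head. This is an illustrative toy example, not the method used by the paper or by the Autocare software; the function name `fuse_features` and the weights are hypothetical.

```python
import numpy as np

def fuse_features(camera_feat, lidar_feat, w_cam=0.5, w_lidar=0.5):
    """Toy feature-level fusion (hypothetical example): scale each
    modality's per-object feature vector by a weight and concatenate,
    yielding a single fused vector for a downstream detection head."""
    camera_feat = np.asarray(camera_feat, dtype=float)
    lidar_feat = np.asarray(lidar_feat, dtype=float)
    return np.concatenate([w_cam * camera_feat, w_lidar * lidar_feat])

# Example: a 4-dim camera feature and a 3-dim LiDAR feature
# fuse into a single 7-dim vector.
fused = fuse_features([0.2, 0.8, 0.1, 0.4], [12.0, 3.5, 0.9])
print(fused.shape)  # (7,)
```

Real fusion pipelines differ mainly in *where* this combination happens (early fusion of raw data, deep fusion of intermediate features, or late fusion of per-sensor detections), which is the design axis the survey's categorization reflects.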