# 📡 Radar–Language Models Survey
Radar–Language Models (RLMs) integrate radar sensing with large language models (LLMs) to enable semantic understanding and reasoning over radar data. By combining the robustness of radar with language-driven inference, RLMs offer a promising approach to perception under challenging conditions such as low visibility, occlusion, and electromagnetic interference.
This repository accompanies a survey that reviews recent progress in Radar–Language Models, covering radar modalities (FMCW, mmWave, UWB), common radar representations (range–Doppler, micro-Doppler, point clouds), and multimodal alignment strategies that connect radar observations to linguistic concepts.
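As a concrete illustration of one radar representation mentioned above, a range–Doppler map is typically formed by applying a 2-D FFT over an FMCW data cube (fast-time samples × chirps). The sketch below is a minimal, self-contained example with a single simulated point target; all parameter values (sample counts, target bins) are illustrative assumptions, not taken from any particular paper in the survey.

```python
import numpy as np

# Illustrative frame dimensions: fast-time samples per chirp, chirps per frame.
n_samples, n_chirps = 64, 32

# Simulate the beat signal of a single point target: one complex tone along
# fast time (encodes range) and one along slow time (encodes Doppler).
range_bin, doppler_bin = 10, 5  # hypothetical target location
fast = np.arange(n_samples)
slow = np.arange(n_chirps)
cube = np.exp(2j * np.pi * (range_bin * fast[:, None] / n_samples
                            + doppler_bin * slow[None, :] / n_chirps))

# 2-D FFT: range FFT over fast time, Doppler FFT over slow time.
rd_map = np.abs(np.fft.fft2(cube))

# The target shows up as a peak at its (range, Doppler) bin.
peak = np.unravel_index(np.argmax(rd_map), rd_map.shape)
print(peak)  # -> (10, 5)
```

Representations like this map (or micro-Doppler spectrograms and point clouds derived from it) are what RLMs then align with linguistic concepts.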
## 🎯 Scope & Contributions
This survey and repository aim to:
- Systematically organize existing work on Radar–Language Models and related multimodal radar–LLM systems
- Categorize methods by radar modality, data representation, and language integration strategy
- Review model architectures, training paradigms, datasets, and evaluation protocols
- Summarize application domains, including human sensing, robotics, and autonomous systems
- Identify open challenges and future research directions toward interpretable and robust radar intelligence