
Route2Vec: Context-Preserving Encodings of Routes

Authors: Philipp Hallgarten, Thomas Kosch, Tobias Grosse-Puppendahl, Enkelejda Kasneci

This is the accompanying repository for the paper "Route2Vec: Enabling Efficient Use of Route Context through Contextualized Route Representations", published at Mensch und Computer 2025 (MuC '25).



Implementation based on:

Abstract

Understanding how vehicle occupants experience their journey is key to designing adaptive in-car systems. The environments they encounter, ranging from road types and traffic patterns to weather conditions, shape their mental and emotional states during a ride. Yet leveraging this contextual information remains a challenge due to its heterogeneous nature, spanning categorical, numerical, and boolean data types that vary in scale and structure. We introduce Route2Vec, an attention-based framework that encodes variable-length sequences of route context into compact, semantically meaningful embeddings using a self-supervised learning pipeline. These fixed-size representations allow for efficient comparisons between different driving situations using common similarity metrics such as the Euclidean distance. Through linear probing and qualitative analysis of the embedding space, we show that Route2Vec reliably captures salient, route-specific characteristics. Route2Vec simplifies context-aware in-vehicle interaction by enabling designers to rapidly prototype intelligent in-vehicle interfaces.
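The abstract notes that the fixed-size embeddings can be compared with common similarity metrics. A minimal sketch of such a comparison, assuming the embeddings are plain vectors (the random vectors and variable names below are illustrative, not produced by an actual Route2Vec model):

```python
import numpy as np

# Hypothetical fixed-size route embeddings (128-dim, the size of the "small" model).
rng = np.random.default_rng(0)
emb_a = rng.standard_normal(128)
emb_b = rng.standard_normal(128)

# Euclidean distance: smaller values indicate more similar driving situations.
euclidean = np.linalg.norm(emb_a - emb_b)

# Cosine similarity is another common choice for embedding spaces.
cosine = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))

print(f"euclidean={euclidean:.3f}, cosine={cosine:.3f}")
```

Because the representations have a fixed size, any off-the-shelf nearest-neighbour or clustering tool can consume them directly.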

Setup

We recommend creating a new environment when working with this repository. You can do so via
conda env create -n ENVNAME --file env.yml

This will also install all necessary dependencies.

Training

To train Route2Vec on a custom dataset, take the following steps:
  1. Add a dataset to the datasets folder
  2. Add a call to main_server.py in run.sh with all the necessary hyperparameters
  3. Start the training by running run.sh
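An entry in run.sh might look like the sketch below. The flag names and values are assumptions for illustration only; check main_server.py for the hyperparameters it actually accepts.

```shell
#!/usr/bin/env bash
# Illustrative run.sh entry (flag names are hypothetical).
python main_server.py \
    --dataset datasets/my_routes \
    --embedding_size 128 \
    --batch_size 64 \
    --epochs 100
```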

Use of Plug-And-Play Road Context Encoders

The trained encoder models can be found in the route2vec_models folder. We make four versions available:
  • small: embedding size = 128 / 2.4M params
  • medium: embedding size = 256 / 5.3M params
  • large: embedding size = 512 / 12.6M params
  • x-large: embedding size = 1024 / 33.6M params

For more information on how to load and use a pretrained PyTorch model, please refer to the official PyTorch documentation.
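The usual PyTorch pattern is to instantiate the model class and load a saved state dict. The round trip below uses a tiny stand-in module so it is self-contained; the class, file name, and architecture are assumptions, not the actual Route2Vec encoder (the real checkpoints live in route2vec_models):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Tiny stand-in for an encoder; the real Route2Vec architecture differs.
class TinyEncoder(nn.Module):
    def __init__(self, embedding_size: int = 128):
        super().__init__()
        self.proj = nn.Linear(16, embedding_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

# Typical save/load round trip via the state dict.
model = TinyEncoder()
ckpt_path = os.path.join(tempfile.mkdtemp(), "encoder.pt")
torch.save(model.state_dict(), ckpt_path)

restored = TinyEncoder()
restored.load_state_dict(torch.load(ckpt_path))
restored.eval()  # disable dropout/batch-norm updates before inference

with torch.no_grad():
    embedding = restored(torch.randn(1, 16))
print(embedding.shape)
```

Saving only the state dict (rather than the whole module) is the recommended PyTorch practice, since it decouples the checkpoint from the exact class definition used at save time.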

Citation

If you find this work useful, please cite:
@inproceedings{hallgarten2025route,
  author    = {Hallgarten, Philipp and Kosch, Thomas and Grosse-Puppendahl, Tobias and Kasneci, Enkelejda},
  title     = {Route2Vec: Enabling Efficient Use of Driving Context through Contextualized Route Representations},
  year      = {2025},
  isbn      = {9798400715822},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3743049.3743056},
  doi       = {10.1145/3743049.3743056},
  booktitle = {Proceedings of the Mensch Und Computer 2025},
  pages     = {322--332},
  numpages  = {11},
  keywords  = {Context-Aware Systems, Representation Learning},
  location  = {},
  series    = {MuC '25}
}
