
Reachy Mini Astromech Droid

Turn your Reachy Mini into R3-MNE, a Star Wars-like astromech droid! This application combines local AI processing, expressive movements, and classic droid sounds to bring your robot to life.

R3-MNE and R2-D2

Features

  • Astromech Persona: The robot communicates using pre-recorded "droid speak" audio files, mimicking the emotional beeps and whistles of an R2 unit.
  • Local Speech-to-Text: Uses faster-whisper for fast, private, and offline-capable speech recognition.
  • Sentiment Analysis: Analyzes your speech using vaderSentiment to understand your emotional tone (happy, sad, etc.) and responds with appropriate droid emotions.
  • Expressive Motion: A layered motion system blends dances, emotional gestures, and subtle "alive" movements like breathing and head wobble.
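As an illustration of the sentiment step, vaderSentiment produces a compound score in [-1, 1] that can be bucketed into droid emotions. The thresholds and emotion labels below are a hypothetical sketch, not the app's actual mapping:

```python
# Sketch: mapping a VADER sentiment score to a droid emotion.
# In practice the compound score would come from
# SentimentIntensityAnalyzer().polarity_scores(text)["compound"];
# the thresholds and emotion names here are illustrative assumptions.

def compound_to_emotion(compound: float) -> str:
    """Map a VADER compound score in [-1, 1] to an emotion label."""
    if compound >= 0.05:   # conventional VADER positive cutoff
        return "happy"
    if compound <= -0.05:  # conventional VADER negative cutoff
        return "sad"
    return "neutral"

print(compound_to_emotion(0.8))   # happy
print(compound_to_emotion(-0.6))  # sad
```

The emotion label would then select which pre-recorded beep-and-whistle clip and gesture the droid plays.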

Installation

Windows support is currently experimental and has not been extensively tested. Use with caution.

Using uv

You can set up the project quickly using uv:

uv venv --python 3.12.1  # Create a virtual environment with Python 3.12.1
source .venv/bin/activate
uv sync

Using pip

python -m venv .venv # Create a virtual environment
source .venv/bin/activate
pip install -e .

Running the app

If you have the wireless version, simply turn on the Reachy Mini and run:

r3-mne --gradio --no-camera

If you have the lite version, you will need two virtual environments: one for the Reachy Mini Daemon and one for the R3-MNE app. In one terminal, navigate to the project root, activate the daemon's virtual environment, and run the Reachy Mini Daemon:

reachy-mini-daemon

In a second terminal, navigate to the project root, activate the R3-MNE virtual environment, and run:

r3-mne --gradio --no-camera

By default, the app runs in console mode for direct audio interaction. The --gradio flag launches a web UI served locally at http://127.0.0.1:7860/. Note that --gradio is currently required in practice (see Open Issues: console mode cannot yet receive audio).

CLI options

| Option | Default | Description |
| --- | --- | --- |
| `--gradio` | `False` | Launch the Gradio web UI. Without this flag, runs in console mode. Required when running in simulation mode. |
| `--head-tracker` | `None` | Enable head tracking. Options: `yolo`, `none`. |
| `--no-camera` | `False` | Disable the camera pipeline. |
| `--debug` | `False` | Enable verbose logging for troubleshooting. |
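For reference, the options above can be modeled with a standard argparse parser. This is a sketch mirroring the table, not the actual r3-mne entry point, whose implementation may differ:

```python
# Sketch: an argparse parser matching the CLI options table.
# Flag names and defaults come from the table; everything else
# (prog name, help strings) is illustrative.
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser(prog="r3-mne")
    p.add_argument("--gradio", action="store_true",
                   help="Launch the Gradio web UI (otherwise console mode)")
    p.add_argument("--head-tracker", choices=["yolo", "none"], default=None,
                   help="Enable head tracking")
    p.add_argument("--no-camera", action="store_true",
                   help="Disable the camera pipeline")
    p.add_argument("--debug", action="store_true",
                   help="Enable verbose logging")
    return p

# Example: the wireless-version invocation from the Running section.
args = build_parser().parse_args(["--gradio", "--no-camera"])
print(args.gradio, args.no_camera)  # True True
```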

License

Apache 2.0

Open Issues

  • Windows Support: Currently experimental. Users may encounter issues with audio drivers or dependency installation.
  • Latency: While local processing is fast, the full pipeline (STT -> Sentiment -> Action) can introduce a noticeable delay on slower hardware.
  • Vision & Camera: Camera integration, face tracking, and vision models (local or cloud) are implemented but currently untested. Use at your own risk.
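When diagnosing the latency issue above, it helps to time each pipeline stage separately. A minimal sketch using time.perf_counter, with placeholder stage functions standing in for the real STT, sentiment, and action steps:

```python
# Sketch: per-stage latency instrumentation for an
# STT -> Sentiment -> Action pipeline. The stage functions here
# are placeholders, not the app's real implementations.
import time

def timed(stage, fn, *args):
    """Run fn(*args), print how long it took, and return its result."""
    t0 = time.perf_counter()
    out = fn(*args)
    print(f"{stage}: {(time.perf_counter() - t0) * 1000:.1f} ms")
    return out

# Placeholder stages; swap in the real transcribe/analyze/act calls.
text = timed("stt", lambda audio: "hello there", b"\x00" * 16000)
emotion = timed("sentiment", lambda t: "happy", text)
action = timed("action", lambda e: f"play_{e}_beeps", emotion)
```

Printing a per-stage breakdown like this makes it clear whether the delay comes from transcription, sentiment scoring, or motion dispatch.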

Next Steps

  • Expanded Droid Vocabulary: Adding more diverse audio samples for a wider range of emotions.
  • Interactive Games: Implementing simple games (like "Red Light, Green Light") using the vision system.
  • Improved Face Tracking: Optimizing the local YOLO/MediaPipe trackers for smoother head movements.
  • Custom Personalities: Easier configuration to switch between different "droid personalities" (e.g., sassy, helpful, timid).
  • Patch Console Mode: R3 currently cannot receive audio in console mode, so the --gradio flag is required; ideally Gradio would not be necessary.
