Welcome to the wirepod-llm8850-raspberrypi project wiki! This project documents how to set up a fully local, on-device large language model (LLM) for your Anki Vector robot using the LLM8850 AI accelerator module on a Raspberry Pi running WirePod.
Anki Vector is a small home robot that originally relied on cloud services (Anki/Digital Dream Labs) for its AI responses. WirePod is an open-source server replacement that lets Vector work without those cloud services.
This project extends WirePod by pairing it with the LLM8850, an affordable AI inference module attached to a Raspberry Pi, so Vector can answer questions and hold conversations entirely offline using a locally running LLM.
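The integration point is WirePod's knowledge-graph configuration, which is what forwards Vector's questions to an AI backend. As a rough, hypothetical sketch only (the field names below are illustrative, not WirePod's actual schema, and the endpoint URL is an assumption about what the LLM8850 runtime exposes; see the Configuring the LLM guide for the real values), pointing WirePod at a local inference endpoint could look something like:

```json
{
  "knowledge": {
    "provider": "custom",
    "endpoint": "http://127.0.0.1:8080/v1/chat/completions",
    "model": "local-llm",
    "api_key": ""
  }
}
```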
Follow these guides in order for a complete setup:
- Prerequisites — Hardware and software requirements before you begin
- Hardware Setup — Connecting the LLM8850 module to your Raspberry Pi
- Installing WirePod — Setting up the WirePod server on Raspberry Pi
- Configuring the LLM — Loading a model onto the LLM8850 and connecting it to WirePod
- Pairing Vector — Pointing your Anki Vector robot to your local WirePod server
- Troubleshooting — Common issues and how to resolve them
The main components and the role each one plays:

| Component | Role |
|---|---|
| Anki Vector | The robot that delivers LLM-generated responses |
| WirePod | Open-source cloud-server replacement for Vector |
| LLM8850 | AI accelerator module for efficient on-device LLM inference |
| Raspberry Pi | Host hardware that runs WirePod and interfaces with the LLM8850 |
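To make the flow between these components concrete, here is a minimal sketch of the round trip WirePod performs: wrap Vector's transcribed question in a chat request, send it to the model running on the LLM8850, and return the reply. This assumes the local runtime exposes an OpenAI-compatible chat API, which many local LLM runtimes do, but that is an assumption; the endpoint URL and model name are hypothetical placeholders.

```python
# Illustrative sketch of the WirePod -> LLM8850 question/answer hop.
# The endpoint, model name, and payload shape assume an OpenAI-compatible
# chat API; consult the Configuring the LLM guide for the real interface.
import json
from urllib import request

LLM_ENDPOINT = "http://127.0.0.1:8080/v1/chat/completions"  # hypothetical

def build_payload(question: str, model: str = "local-llm") -> dict:
    """Wrap Vector's transcribed question in a chat-completion request."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are Vector, a small home robot. Answer briefly."},
            {"role": "user", "content": question},
        ],
        "max_tokens": 128,
    }

def extract_answer(response_body: str) -> str:
    """Pull the assistant's reply out of an OpenAI-style response."""
    data = json.loads(response_body)
    return data["choices"][0]["message"]["content"]

def ask_local_llm(question: str) -> str:
    """Send the question to the local endpoint and return the model's reply."""
    req = request.Request(
        LLM_ENDPOINT,
        data=json.dumps(build_payload(question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return extract_answer(resp.read().decode())
```

Because nothing here leaves the Raspberry Pi, the whole exchange works with no internet connection at all.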
Running the LLM locally brings several benefits:
- Privacy — Conversations never leave your local network
- Offline operation — Vector keeps working without an internet connection
- No subscription fees — No dependency on paid cloud AI services
- Customization — Choose and tune the language model to your liking
Found an error or want to add a guide? Contributions are welcome!
- Open an issue to report problems or suggest new content
- Submit a pull request with improvements
Back to the repository