<div align="center">
  <img src="docs/media/logo.png" alt="UnityNeuroSpeech logo">
</div>

# UnityNeuroSpeech

> **Make your Unity characters hear, think, and talk — using real voice AI. Locally. No cloud.**

---

UnityNeuroSpeech is a lightweight and open-source framework for creating **fully voice-interactive AI agents** inside Unity.
It connects:

- 🧠 **Whisper** (STT) – converts your speech into text
- 💬 **Ollama** (LLM) – generates smart responses
- 🗣️ **XTTS** (TTS) – speaks back with *custom voice + emotions*

All locally. All offline.
No subscriptions, no accounts, no OpenAI API keys.
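
Under the hood, each exchange with an agent is one round trip through those three pieces. Here is a minimal sketch of that loop; the class name and delegate fields are placeholders for illustration, not UnityNeuroSpeech's actual API:

```csharp
using UnityEngine;

// Placeholder sketch of the STT -> LLM -> TTS loop the framework wires up.
// The delegates stand in for Whisper, Ollama, and XTTS; none of these
// names are part of UnityNeuroSpeech's real API.
public class VoiceLoopSketch : MonoBehaviour
{
    public System.Func<AudioClip, string> SpeechToText;    // Whisper (STT)
    public System.Func<string, string> AskLocalModel;      // Ollama (LLM)
    public System.Func<string, AudioClip> SynthesizeVoice; // XTTS (TTS)
    public AudioSource audioSource;

    // One full exchange: your voice in, the agent's voice out.
    public void RunOneExchange(AudioClip recordedClip)
    {
        string userText = SpeechToText(recordedClip);    // speech -> text
        string reply    = AskLocalModel(userText);       // text -> reply
        audioSource.PlayOneShot(SynthesizeVoice(reply)); // reply -> audio
    }
}
```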

---

## 🚀 What can you build with UnityNeuroSpeech?

- 🎮 AI characters that understand your voice and reply in real time
- 🗿 NPCs with personality and memory
- 🧪 Experiments in AI conversation and narrative design
- 🕹️ Voice-driven gameplay mechanics
- 🤖 Interactive bots with humanlike voice responses

---

## ✨ Core Features

| Feature | Description |
|---------|-------------|
| 🎙️ **Voice Input** | Uses [whisper.unity](https://github.com/Macoron/whisper.unity) for accurate speech-to-text |
| 🧠 **AI Brain (LLM)** | Easily connect to any local model via [Ollama](https://ollama.com) |
| 🗣️ **Custom TTS** | Supports any voice with [Coqui XTTS](https://github.com/coqui-ai/TTS) |
| 😄 **Emotions** | Emotion tags (`<happy>`, `<sad>`, etc.) parsed automatically from the LLM's reply (see the sketch below) |
| 🎛️ **Agent API** | Subscribe to events like `BeforeTTS()` or access `AgentState` directly |
| 🛠️ **Editor Tools** | Create, manage, and customize agents inside the Unity Editor |
| 🧱 **No cloud** | All models and voices run locally on your machine |
| 🌐 **Multilingual** | Works with **15+ languages**, including English, Russian, and Chinese |
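
For the **Emotions** row above: the tags are plain markers inside the model's text reply. The helper below only illustrates how such a leading tag could be split off before synthesis; the tag format comes from this README, but the parsing code is not the framework's actual implementation:

```csharp
using System.Text.RegularExpressions;

// Illustration only: splits a leading emotion tag such as "<happy>" off
// an LLM reply before the text is handed to TTS.
public static class EmotionTagSketch
{
    private static readonly Regex TagPattern = new Regex(@"^<(\w+)>\s*");

    // "<happy> Hello there!" -> ("happy", "Hello there!")
    public static (string Emotion, string Text) Parse(string llmReply)
    {
        var match = TagPattern.Match(llmReply);
        return match.Success
            ? (match.Groups[1].Value, llmReply.Substring(match.Length))
            : ("neutral", llmReply);
    }
}
```

So `Parse("<sad> I miss you.")` yields `("sad", "I miss you.")`: the emotion can drive the voice while the clean text goes to synthesis.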

---

## 🧪 Built with

- 🧠 [`Microsoft.Extensions.AI`](https://learn.microsoft.com/en-us/dotnet/ai/) (Ollama)
- 🎤 [`whisper.unity`](https://github.com/Macoron/whisper.unity)
- 🐍 [Python Flask server](server/) (for TTS; see the sketch below)
- 🧊 [Coqui XTTS model](https://github.com/coqui-ai/TTS)
- 🤖 Unity 6
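
The Flask server is what exposes XTTS to Unity over local HTTP. As a rough sketch of what such a call can look like from C# (the port, the `/tts` route, and the JSON payload here are assumptions for illustration, not the server's documented API):

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical client sketch for a local TTS server. The endpoint and
// payload shape are assumed; check the server/ directory for the real API.
public class TtsClientSketch : MonoBehaviour
{
    [SerializeField] private string serverUrl = "http://127.0.0.1:5000/tts"; // assumed endpoint

    public IEnumerator Speak(string text)
    {
        string json = JsonUtility.ToJson(new TtsRequest { text = text });
        using (var request = new UnityWebRequest(serverUrl, "POST"))
        {
            request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
            request.downloadHandler = new DownloadHandlerBuffer();
            request.SetRequestHeader("Content-Type", "application/json");

            yield return request.SendWebRequest();

            if (request.result == UnityWebRequest.Result.Success)
                Debug.Log($"TTS returned {request.downloadHandler.data.Length} bytes of audio");
            else
                Debug.LogError($"TTS request failed: {request.error}");
        }
    }

    [System.Serializable]
    private class TtsRequest { public string text; }
}
```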

---

## 📚 Get Started

See the [UnityNeuroSpeech official website](https://hardcodedev777.github.io/unityneurospeech).

---

## 😎 Who made this?

UnityNeuroSpeech was created by [HardCodeDev](https://github.com/HardCodeDev777) —
an indie dev from Russia who just wanted to make AI talk in Unity.

---

## 🗒️ License

UnityNeuroSpeech is licensed under the **MIT License**.
For third-party licenses, see [Licenses](docs/other/licenses.md).