
Commit a22d5fb

2 parents 380060f + 93add23

File tree: 1 file changed (+82, -0)

README.md

Lines changed: 82 additions & 0 deletions
@@ -0,0 +1,82 @@
![Unity](https://img.shields.io/badge/Unity-unity?logo=Unity&color=%23000000)
![C#](https://img.shields.io/badge/C%23-%23512BD4?logo=.NET)
![Ollama](https://img.shields.io/badge/Ollama-%23000000?logo=Ollama)
![License](https://img.shields.io/github/license/HardCodeDev777/UnityNeuroSpeech?color=%2305991d)
![Last commit](https://img.shields.io/github/last-commit/HardCodeDev777/UnityNeuroSpeech?color=%2305991d)
![Tag](https://img.shields.io/github/v/tag/HardCodeDev777/UnityNeuroSpeech)
![Top lang](https://img.shields.io/github/languages/top/HardCodeDev777/UnityNeuroSpeech)
<div align="center">
<img src="docs/media/logo.png">
</div>

#

> **Make your Unity characters hear, think, and talk — using real voice AI. Locally. No cloud.**

---
UnityNeuroSpeech is a lightweight and open-source framework for creating **fully voice-interactive AI agents** inside Unity.
It connects:

- 🧠 **Whisper** (STT) – converts your speech into text
- 💬 **Ollama** (LLM) – generates smart responses
- 🗣️ **XTTS** (TTS) – speaks back with *custom voice + emotions*

All locally. All offline.
No subscriptions, no accounts, no OpenAI API keys.

---
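The Whisper, Ollama, and XTTS hop can be sketched as a single interaction turn. This is an illustrative Python sketch (the framework itself is C#), and every function name in it is a hypothetical placeholder, not part of the real API:

```python
def run_turn(audio: bytes, transcribe, generate, speak) -> bytes:
    """One voice-interaction turn: speech in, synthesized speech out.

    transcribe: STT (e.g. Whisper), generate: LLM (e.g. via Ollama),
    speak: TTS (e.g. XTTS). All three are injected as callables so the
    loop itself stays small and testable.
    """
    text = transcribe(audio)   # 🧠 speech -> text
    reply = generate(text)     # 💬 text -> response
    return speak(reply)        # 🗣️ response -> audio
```

In a real agent, `run_turn` would be called once per recorded utterance, e.g. `run_turn(mic_audio, whisper_stt, ollama_llm, xtts_speak)`.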
## 🚀 What can you build with UnityNeuroSpeech?

- 🎮 AI characters that understand your voice and reply in real time
- 🗿 NPCs with personality and memory
- 🧪 Experiments in AI conversation and narrative design
- 🕹️ Voice-driven gameplay mechanics
- 🤖 Interactive bots with humanlike voice responses

---
## ✨ Core Features

| Feature | Description |
|---------|-------------|
| 🎙️ **Voice Input** | Uses [whisper.unity](https://github.com/Macoron/whisper.unity) for accurate speech-to-text |
| 🧠 **AI Brain (LLM)** | Easily connects to any local model via [Ollama](https://ollama.com) |
| 🗣️ **Custom TTS** | Supports any voice with [Coqui XTTS](https://github.com/coqui-ai/TTS) |
| 😄 **Emotions** | Emotion tags (`<happy>`, `<sad>`, etc.) parsed automatically from the LLM reply |
| 🎛️ **Agent API** | Subscribe to events like `BeforeTTS()` or access `AgentState` directly |
| 🛠️ **Editor Tools** | Create, manage, and customize agents inside the Unity Editor |
| 🧱 **No cloud** | All models and voices run locally on your machine |
| 🌐 **Multilingual** | Works with **15+ languages**, including English, Russian, and Chinese |

---
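The emotion-tag handling can be illustrated with a minimal parser. This Python sketch is for illustration only (the framework's real parser is C#), and the tag set is an assumed example:

```python
import re

# Example set of emotion tags the LLM might prepend to a reply (assumed).
EMOTIONS = {"happy", "sad", "angry", "neutral"}

def parse_emotion(reply: str) -> tuple[str, str]:
    """Split a reply like '<happy> Hello!' into (emotion, spoken_text).

    Falls back to 'neutral' when no recognized tag is present, so the
    TTS step always receives a valid emotion.
    """
    match = re.match(r"^\s*<(\w+)>\s*", reply)
    if match and match.group(1).lower() in EMOTIONS:
        return match.group(1).lower(), reply[match.end():]
    return "neutral", reply.strip()

print(parse_emotion("<happy> Nice to see you!"))  # ('happy', 'Nice to see you!')
```

Stripping the tag before synthesis matters: the TTS engine should voice only the reply text, while the tag drives the emotion parameter.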
## 🧪 Built with:

- 🧠 [`Microsoft.Extensions.AI`](https://learn.microsoft.com/en-us/dotnet/ai/) (Ollama)
- 🎤 [`whisper.unity`](https://github.com/Macoron/whisper.unity)
- 🐍 [Python Flask server](server/) (for TTS)
- 🧊 [Coqui XTTS model](https://github.com/coqui-ai/TTS)
- 🤖 Unity 6

---
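Ollama serves local models over HTTP on its default port 11434. Below is a minimal sketch of calling its `/api/generate` endpoint, the kind of request a client in this stack might make; the endpoint and request fields are Ollama's documented API, but the model name is an example and this is not the framework's actual code:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_llm(model: str, prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = request.Request(OLLAMA_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires a running Ollama instance
        return json.loads(resp.read())["response"]
```

With `stream` set to `False`, Ollama returns one JSON object whose `response` field holds the full reply, which is convenient for a turn-based agent.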
## 📚 Get Started

See the [UnityNeuroSpeech official website](https://hardcodedev777.github.io/unityneurospeech).

---
## 😎 Who made this?

UnityNeuroSpeech was created by [HardCodeDev](https://github.com/HardCodeDev777), an indie dev from Russia who just wanted to make AI talk in Unity.

---
## 🗒️ License

UnityNeuroSpeech is licensed under the **MIT License**.
For third-party licenses, see [Licenses](docs/other/licenses.md).
