🎉 Finally... UnityNeuroSpeech v2.0.0!
There are a lot of changes — but I’ll only mention the ones that really matter:
### TTS
- No more Python! No more local servers! No more pain! UnityNeuroSpeech now uses the amazing and actively maintained Coqui XTTS fork by Idiap — and everything runs directly through the CLI. Yes, you'll need to install `uv`, but it's so much better than before.
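As a rough sketch of what the CLI-based flow looks like — note that the exact commands, package name, and model name below are assumptions based on the upstream Coqui TTS command-line client, not taken from the project's own setup script:

```shell
# Install uv (one-time setup; on Windows this is handled by the setup script)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install the Idiap-maintained Coqui TTS fork as a CLI tool (assumed package name)
uv tool install coqui-tts

# Synthesize speech with XTTS; flags follow the standard Coqui `tts` CLI
tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
    --text "Hello from UnityNeuroSpeech!" \
    --speaker_wav reference_voice.wav \
    --language_idx en \
    --out_path output.wav
```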
### Unity
- Added full support for multiple voices, languages, and agents
- Added saving of the dialog history between the player and the LLM (with or without AES encryption)
- Whisper models are now loaded from `StreamingAssets/`, meaning full Mono support. Unfortunately, IL2CPP is not supported yet; I'm planning to add support for it in v2.1.0.
- Removed the `AgentState` struct and added the `SetJsonDialogHistory` method
- Added two powerful new Editor tools: Prompts Test and Decode Encoded
- Improved code comments, logs, and internal validation
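To picture how encrypted dialog-history saving can work, here is a short conceptual sketch in Python. This is an illustration only: the actual implementation is Unity C# (presumably using .NET's AES APIs), and the JSON shape, function names, and use of the `cryptography` library's `Fernet` wrapper (AES under the hood) are my own assumptions, not the project's API.

```python
import json
from cryptography.fernet import Fernet  # AES-based symmetric encryption

# Hypothetical dialog-history shape: a list of role/content turns
history = [
    {"role": "user", "content": "Hello, who are you?"},
    {"role": "assistant", "content": "I'm your in-game agent."},
]

def save_history(history, path, key=None):
    """Serialize the history to JSON; encrypt it if a key is given."""
    data = json.dumps(history).encode("utf-8")
    if key is not None:
        data = Fernet(key).encrypt(data)
    with open(path, "wb") as f:
        f.write(data)

def load_history(path, key=None):
    """Read the history back, decrypting first if a key is given."""
    with open(path, "rb") as f:
        data = f.read()
    if key is not None:
        data = Fernet(key).decrypt(data)
    return json.loads(data.decode("utf-8"))

key = Fernet.generate_key()
save_history(history, "dialog.bin", key)
restored = load_history("dialog.bin", key)
```

The same save/load pair also covers the unencrypted case: just pass `key=None`.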
### Docs
- Simplified and cleaned up documentation — removed duplicate info already shown in Unity tooltips
### Setup
- One beautiful `setup.bat` file handles everything — it installs all required dependencies automatically 😎 No more 3 ZIPs or manual setup — just a `.unitypackage` and one script to run.
This version of UnityNeuroSpeech is ready for real, production-level games.
I spent a lot of time testing it — but let’s still hope there are no bugs! 🤠