
Commit a73fc8a

Update docs
- Delete Google Analytics
- Bring information up to date
- Fix "site_url" in mkdocs.yml
- Fix some visual issues
Parent: d393279

5 files changed: 50 additions, 43 deletions

docs/quickstart.md

Lines changed: 16 additions & 21 deletions
@@ -27,37 +27,32 @@
 
 ## ⚙️ Base Setup
 
-1. Go to the [UnityNeuroSpeech GitHub repository](https://github.com/HardCodeDev777/UnityNeuroSpeech) and download the following **three `.rar` files** from the latest Release:
-   - `UnityNeuroSpeech.X.X.X.rar` – main framework files
-   - `default.venv.rar` – Python environment for the TTS server
-   - `TTSModel.rar` – pretrained XTTS model
-
-2. Extract all archives to the same directory. After extraction:
-   - Inside `UnityNeuroSpeech/` you’ll find:
-     - a `.unitypackage`
-     - a `Server/` folder
-     - a `run_server.bat` file
-   ⚠️ **Do not import the `Server` folder or `.bat` file into Unity. Keep them outside the project directory.**
-
-3. Place your `.wav` voice files into `Server/Voices`.
-   Each file must follow the naming pattern: `en_voice.wav`, `ru_voice.wav`, etc.
+1. Go to the [UnityNeuroSpeech GitHub repository](https://github.com/HardCodeDev777/UnityNeuroSpeech) and download the following **four files** from the latest Release:
+
+   - `UnityNeuroSpeech.X.X.X.rar` – main framework files
+   - `default.venv.rar` – Python environment for the TTS server
+   - `TTSModel.rar` – pretrained XTTS model
+   - `Setup/` – folder with files for quick automatic setup
 
-4. Move the extracted `.venv` folder (from `default.venv.rar`) into the `Server/` folder.
+2. Create a new empty folder anywhere on your computer (name it however you like).
+3. Drag all the following into that folder:
 
-5. Move the extracted `TTSModel/` folder (from `TTSModel.rar`) into the `Server/` folder as well.
+   - The entire `Setup/` folder (contents only)
+   - The three `.rar` archives mentioned above
 
-6. Import the `.unitypackage` into your Unity project.
+4. Run `RunPowershell.bat`.
+
+5. After setup finishes, you’ll see a new folder `UnityNeuroSpeech X.X.X`. Open it and drag the `.unitypackage` into your Unity project. **Do not move or import the other files. Keep them outside the Unity project folder.**
+6. Place your `.wav` voice files into `Server/Voices`.
+   Each file must follow the naming pattern: `en_voice.wav`, `ru_voice.wav`, etc.
 
 7. In the `UnityNeuroSpeech` folder, you’ll see an empty `Whisper/` folder. Drop your Whisper `.bin` model file into it.
 
+
 > Some folders (like `Whisper/`) may contain `.txt` placeholder files.
 > These are only used to ensure Unity exports the folder. You can safely delete them after setup.
 
 ---
 
-> You can also manually install your own Python environment and download the XTTS model separately.
-> But if you want everything to "just work" **without fighting with pip, PATH, or broken dependencies** — use the provided `.venv` and `TTSModel`.
-
----
 
 **Done! You’re ready to build your first talking AI agent.**

docs/unity/agent-api.md

Lines changed: 17 additions & 6 deletions
@@ -8,7 +8,7 @@ Easily trigger events based on your agent’s emotion, response count, or messag
 
 ## 🕹️ Handle Agent State
 
-The Agent API is simple and elegant — **just 4 methods and 2 classes**.
+The Agent API is simple and elegant — **just 5 methods and 2 classes**.
 
 To use `UnityNeuroSpeech Agent API` you need to:
 1. Create a new MonoBehaviour script.
@@ -19,6 +19,7 @@ Once you do that, you will need to implement three abstract methods:
 - Start
 - BeforeTTS
 - AfterTTS
+- AfterSTT
 
 Also, you need to create a field with your `YourAgentNameController` type (in this example, `AlexController`). Your code will look like this:
 
@@ -33,6 +34,8 @@ public class AlexBehaviour : AgentBehaviour
     public override void AfterTTS() {}
 
     public override void BeforeTTS(AgentState state) {}
+
+    public override void AfterSTT() {}
 
     public override void Start() {}
 }
@@ -44,6 +47,7 @@ public class AlexBehaviour : AgentBehaviour
 
 - **AfterTTS** - Called after the audio is played.
 - **BeforeTTS** - Called before sending text to the TTS model.
+- **AfterSTT** - Called after the STT model has transcribed the microphone input.
 - **Start** - Same as MonoBehaviour’s Start(), but required. Use it to bind your behaviour to an agent:
 
 ```csharp
@@ -66,6 +70,7 @@ public override void Start() => AgentManager.SetBehaviourToAgent(_alexAgentContr
 ```csharp
 [HideInInspector] public Action<AgentState> BeforeTTS { get; set; }
 [HideInInspector] public Action AfterTTS { get; set; }
+[HideInInspector] public Action AfterSTT { get; set; }
 ```
 
 This lets UnityNeuroSpeech know when to call your methods at the right moments.
@@ -98,6 +103,8 @@ public class AlexBehaviour : AgentBehaviour
             else if (state.emotion == "sad") Debug.Log("AI is not happy...");
         }
     }
+
+    public override void AfterSTT() {}
 
     public override void Start() => AgentManager.SetBehaviourToAgent(_alexAgentController, this);
 }
@@ -127,14 +134,12 @@ namespace UnityNeuroSpeech.Runtime
     {
         public Action<AgentState> BeforeTTS { get; set; }
         public Action AfterTTS { get; set; }
+        public Action AfterSTT { get; set; }
     }
 
     /// <summary>
     /// Base class to define agent behavior
     /// </summary>
-    // For now it only supports pre/post-TTS hooks,
-    // since I don't see much use for anything else (yet).
-    // But future expansion is possible.
     public abstract class AgentBehaviour : MonoBehaviour
     {
         /// <summary>
@@ -150,9 +155,14 @@ namespace UnityNeuroSpeech.Runtime
         public abstract void BeforeTTS(AgentState state);
 
         /// <summary>
-        /// Called after receiving and playing the TTS response
+        /// Called after receiving and playing the Text-To-Speech response
         /// </summary>
         public abstract void AfterTTS();
+
+        /// <summary>
+        /// Called after Speech-To-Text transcription
+        /// </summary>
+        public abstract void AfterSTT();
     }
 
     /// <summary>
@@ -199,8 +209,9 @@ public static class AgentManager
     /// <param name="beh">Behaviour to attach</param>
     public static void SetBehaviourToAgent<T>(T agent, AgentBehaviour beh) where T: MonoBehaviour, IAgent
     {
-        agent.AfterTTS += beh.AfterTTS;
         agent.BeforeTTS += beh.BeforeTTS;
+        agent.AfterTTS += beh.AfterTTS;
+        agent.AfterSTT += beh.AfterSTT;
     }
 }
 ```
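
Taken together, these changes mean a complete behaviour now overrides five methods. Below is a minimal sketch of what such a behaviour could look like after this commit, based only on the API shown in this diff; the class and field names (`AlexBehaviour`, `AlexController`, `_alexAgentController`), the `[SerializeField]` declaration, and the log messages are illustrative, not prescribed by the framework.

```csharp
using UnityEngine;
using UnityNeuroSpeech.Runtime;

// Sketch: a behaviour that overrides all five hooks, including the new AfterSTT.
public class AlexBehaviour : AgentBehaviour
{
    // Reference to the generated agent controller (assumed to be assigned in the Inspector).
    [SerializeField] private AlexController _alexAgentController;

    // Called after the STT model has transcribed the microphone input.
    public override void AfterSTT() => Debug.Log("Player speech transcribed.");

    // Called before the agent's reply is sent to the TTS model.
    public override void BeforeTTS(AgentState state)
    {
        if (state.emotion == "happy") Debug.Log("AI is happy!");
        else if (state.emotion == "sad") Debug.Log("AI is not happy...");
    }

    // Called after the synthesized audio has been played.
    public override void AfterTTS() => Debug.Log("Agent finished speaking.");

    // Required: bind this behaviour to the agent, as shown in the docs above.
    public override void Start() => AgentManager.SetBehaviourToAgent(_alexAgentController, this);
}
```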

docs/unity/configure-settings.md

Lines changed: 6 additions & 5 deletions
@@ -14,12 +14,13 @@ You’ll see the window with these settings:
 
 #### 🧩 General Settings
 
-| Setting | Description |
-|-------------------------------|--------------------------------------------------------------------------------------------------|
-| **Logging type** | Controls how much info you want to see in the Unity console. |
-| **Emotions** | Add at least one emotion. These are passed to the LLM. |
-| **Not in Assets folder** | Check this if you moved the framework folder outside the default location. |
+| Setting | Description |
+|------------------------------|------------------------------------------------------------------------------------------------|
+| **Logging type** | Controls how much info you want to see in the Unity console. |
+| **Emotions** | Add at least one emotion. These are passed to the LLM. |
+| **Not in Assets folder** | Check this if you moved the framework folder outside the default location. |
 | **Directory name** *(if above is checked)* | For example: if your folder path is `Assets/MyImports/Frameworks`, enter `MyImports/Frameworks`. |
+| **Custom Ollama URI** | If left empty, the default Ollama URI `localhost:11434` is used. |
 
 ---
 

docs/unity/creating-agent.md

Lines changed: 10 additions & 7 deletions
@@ -29,18 +29,21 @@ You will see the window with these settings:
 2. Add a `Dropdown` with at least three options and a `Button`.
 3. Add an `AudioSource` to your scene.
 4. Create an empty `GameObject` and attach the following scripts:
-   - `WhisperManager`
-   - `MicrophoneRecord`
-   - `YourAgentNameController`
-   - `SetupWhisperPath`
+
+   - `WhisperManager`
+   - `MicrophoneRecord`
+   - `YourAgentNameController`
+   - `SetupWhisperPath`
+
 5. Configure the scripts:
 
 #### 🔧 `WhisperManager`
 - Leave `Model Path` empty.
 - Turn off:
-   - `Is Model Path In StreamingAssets`
-   - `Init On Awake`
-   - `Use VAD`
+
+   - `Is Model Path In StreamingAssets`
+   - `Init On Awake`
+   - `Use VAD`
 - Set `Language` to `auto`.
 
 #### 🎙 `MicrophoneRecord`

mkdocs.yml

Lines changed: 1 addition & 4 deletions
@@ -1,5 +1,5 @@
 site_name: UnityNeuroSpeech
-site_url: https://hardcodedev777.github.io/unityneurospeech
+site_url: https://hardcodedev777.github.io/UnityNeuroSpeech
 theme:
   name: material
   logo: media/logo.png
@@ -31,9 +31,6 @@ extra:
   social:
     - icon: fontawesome/brands/github
       link: https://github.com/HardCodeDev777/UnityNeuroSpeech
-  analytics:
-    provider: google
-    tracking_id: G-X9H2RHNQ7G
 markdown_extensions:
   - pymdownx.highlight
   - pymdownx.superfences
