Set Up Local AI Service: Rasa (NLP)
For local services on desktops and laptops, the VHToolkit uses local endpoints wrapped around AI models, which are often developed on Linux with Python. On Windows, these can run via the Windows Subsystem for Linux (WSL).
This tutorial shows how to set up a local NLP solution that uses Rasa. We run Rasa as a local Python endpoint server that the VHToolkit connects to. Note that this setup currently uses OpenAI ChatGPT as a cloud-based fallback LLM.
- GitHub account
- Rasa Pro license key, which you can request from https://rasa.com/rasa-pro-developer-edition-license-key-request/
- OpenAI API key, currently required for the fallback LLM; see Getting Started
- On Windows: Windows Subsystem for Linux (WSL)
See here for instructions on setting up WSL.
Open a command line (Windows key + R > type ‘cmd’) and type:
wsl ~
conda create -n nlp_rasa_env python=3.9
conda init
conda activate nlp_rasa_env
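The commands above create and activate a Conda environment named nlp_rasa_env. As a quick sanity check (a sketch; it assumes Conda placed its interpreter first on your PATH), confirm the interpreter version:

```shell
# With nlp_rasa_env active, `python` should report 3.9.x.
python --version || python3 --version
# The interpreter should resolve from inside the Conda env.
which python || which python3
```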
In the correct Conda environment ('conda activate nlp_rasa_env'), type:
git clone https://github.com/USC-ICT/rasa_vh
To set the keys for the current session only (e.g., for a one-time test), in the correct Conda environment ('conda activate nlp_rasa_env'), type:
export RASA_LICENSE=<your-rasa-key>
export OPENAI_API_KEY=<your-openai-key>
To make these environment variables persistent, add the export statements above to your shell startup file (e.g., 'nano ~/.bashrc').
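Persisting the keys can also be done from the command line by appending the export lines to ~/.bashrc (a sketch; it assumes bash is your login shell, and the <...> placeholders must be replaced with your real keys first):

```shell
# Append the export statements to ~/.bashrc so future WSL sessions have them.
# Replace the <...> placeholders with your actual keys before running.
echo 'export RASA_LICENSE=<your-rasa-key>' >> ~/.bashrc
echo 'export OPENAI_API_KEY=<your-openai-key>' >> ~/.bashrc
# Open a new terminal, or run `source ~/.bashrc`, for the change to take effect.
```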
In the correct Conda environment ('conda activate nlp_rasa_env'), type:
cd rasa_vh
pip install uv
uv pip install rasa-pro --extra-index-url=https://europe-west3-python.pkg.dev/rasa-releases/rasa-pro-python/simple/
rasa train
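'rasa train' writes a timestamped model archive into the models/ folder inside rasa_vh. A quick way to confirm training succeeded (a sketch; the path is Rasa's default output location):

```shell
# List the trained model archive(s); prints a notice if none exist yet.
ls models/*.tar.gz 2>/dev/null || echo "No trained model found in models/"
```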
In the correct Conda environment ('conda activate nlp_rasa_env') and in the correct folder ('cd rasa_vh'), type:
rasa run --enable-api --cors "*" --port 8080
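Once the server is up, you can smoke-test it from a second WSL terminal. This sketch uses Rasa's standard HTTP API version endpoint and REST channel; the URL assumes the port 8080 chosen above:

```shell
# Check that the Rasa HTTP API answers on port 8080.
RESP=$(curl -s http://localhost:8080/version || true)
if [ -n "$RESP" ]; then
  echo "Rasa endpoint is up: $RESP"
else
  echo "Rasa endpoint is not reachable on port 8080"
fi

# Once up, you can also send a test message through the default REST channel:
# curl -s -X POST http://localhost:8080/webhooks/rest/webhook \
#   -H "Content-Type: application/json" \
#   -d '{"sender": "test_user", "message": "hello"}'
```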
- Make sure the local Rasa endpoint server is running by following the instructions above
- In Unity, go to the Main debug menu
- Click Rasa to select the proper NLP system
- Talk to the character