Set Up Local AI Service: Rasa (NLP)

Arno Hartholt edited this page Dec 18, 2025 · 8 revisions

Purpose

For local services on desktops and laptops, the VHToolkit uses local endpoints that wrap AI models. These models are often developed on Linux with Python, but they can run on Windows through the Windows Subsystem for Linux (WSL).

This tutorial shows how to set up a local NLP solution based on Rasa. Rasa runs as a local Python endpoint server that the VHToolkit connects to. Note that this setup currently uses OpenAI ChatGPT as a cloud-based fallback LLM.

Requirements

  • Windows with WSL
  • Conda (within WSL)
  • A Rasa Pro license key
  • An OpenAI API key

Installation and Setup

Install WSL

See here for instructions on how to set up WSL.

Create a Conda Environment

Open a command line (Windows key + R > type ‘cmd’) and type:

wsl ~
conda create -n nlp_rasa_env python=3.9
conda init  
conda activate nlp_rasa_env
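The environment pins Python 3.9. As a quick sanity check (a minimal sketch, not part of the official setup), you can confirm from inside the activated environment that the interpreter matches the pinned version:

```python
import sys

def version_matches(required=(3, 9), info=sys.version_info):
    """True if the interpreter's major.minor version matches the pinned version."""
    return (info[0], info[1]) == required

if __name__ == "__main__":
    print("Python OK" if version_matches() else "Wrong Python version")
```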

Clone the Rasa VH Configuration Code

In the correct Conda environment ('conda activate nlp_rasa_env'), type:

git clone https://github.com/USC-ICT/rasa_vh

Set Up License Key Environment Variables

To test the keys once in the current shell session, and in the correct Conda environment ('conda activate nlp_rasa_env'), type:

export RASA_LICENSE=<your-rasa-key>
export OPENAI_API_KEY=<your-openai-key>

To set these up as persistent environment variables, add the above statements to your shell startup script (e.g., edit '~/.bashrc' with 'nano ~/.bashrc'), then reload it with 'source ~/.bashrc'.
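To confirm that both keys are visible to newly started processes, a small hypothetical helper like the one below (not part of rasa_vh) can list any required variable that is unset or empty:

```python
import os

REQUIRED_KEYS = ("RASA_LICENSE", "OPENAI_API_KEY")

def missing_keys(env=None):
    """Return the names of required keys that are unset or empty in the given mapping."""
    env = os.environ if env is None else env
    return [key for key in REQUIRED_KEYS if not env.get(key)]

if __name__ == "__main__":
    absent = missing_keys()
    print("All keys set" if not absent else "Missing: " + ", ".join(absent))
```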

Install Rasa

In the correct Conda environment ('conda activate nlp_rasa_env'), type:

cd rasa_vh
pip install uv
uv pip install rasa-pro --extra-index-url=https://europe-west3-python.pkg.dev/rasa-releases/rasa-pro-python/simple/
rasa train
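'rasa train' writes the trained model as a .tar.gz archive into the models/ folder of the project. A short sketch (assuming the default output location) to locate the newest trained model:

```python
from pathlib import Path

def latest_model(models_dir="models"):
    """Return the most recently modified .tar.gz model in the folder, or None if absent."""
    candidates = sorted(Path(models_dir).glob("*.tar.gz"),
                        key=lambda p: p.stat().st_mtime)
    return candidates[-1] if candidates else None
```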

Run Rasa

In the correct Conda environment ('conda activate nlp_rasa_env') and in the correct folder ('cd rasa_vh'), type:

rasa run --enable-api --cors "*" --port 8080
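With '--enable-api', the server exposes Rasa's standard REST input channel at /webhooks/rest/webhook. The sketch below is an assumption-laden illustration, not part of rasa_vh: the endpoint path and response shape follow Rasa's default REST channel, and the URL uses the port chosen above. It posts one user message and collects the bot's reply texts:

```python
import json
from urllib import request

RASA_URL = "http://localhost:8080/webhooks/rest/webhook"  # port set by 'rasa run' above

def build_payload(sender, message):
    """Encode the JSON body expected by Rasa's REST channel."""
    return json.dumps({"sender": sender, "message": message}).encode("utf-8")

def extract_texts(body):
    """Pull the 'text' fields out of a REST channel response (a JSON list of messages)."""
    return [msg["text"] for msg in json.loads(body) if "text" in msg]

def send_message(sender, message):
    """POST one user message to the running Rasa server and return the reply texts."""
    req = request.Request(RASA_URL, data=build_payload(sender, message),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return extract_texts(resp.read())
```

This is also a convenient way to verify the server is reachable before wiring up the VHToolkit.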

Test Rasa in VHToolkit Unity Sample Project

  • Make sure the local Rasa endpoint server is running, following the instructions above
  • In Unity, go to the Main debug menu
  • Click Rasa to select the proper NLP system
  • Talk to the character
