Natural Language Processing (NLP)
The NLP system serves as an abstraction layer that provides a unified interface for interacting with multiple NLP services, including large language models (LLMs). Its primary purpose is to simplify integration, enhance modularity, and allow seamless switching between, or concurrent use of, different NLP backends.
The RIDE Cognition package provides the following NLP systems under \Runtime\Nlp:
- NlpSystemAnthropic
- NlpSystemAWSLex
- NlpSystemChatGPT
- NlpSystemUnity
INlpSystem (Interface):
- Defines the core contract for all NLP system implementations.
- Declares shared methods for sending requests and receiving responses.
- Specifies common data structures used across different LLM providers.
- Ensures consistency across different NLP backends.
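As an illustration, a minimal contract of this kind might look like the following sketch. Only SetSystemPrompt and Request mirror calls shown in the usage example later on this page; all other member names and the data structure fields are assumptions, not the actual RIDE declarations.

```csharp
using System;

// Illustrative sketch only; the actual RIDE interface may declare
// different members. SetSystemPrompt and Request mirror the calls
// shown in the usage example later on this page.
public interface INlpSystem
{
    void SetSystemPrompt(string prompt);                              // shared LLM configuration
    void Request(NlpRequest request, Action<NlpResponse> onResponse); // send input, receive reply
}

// Minimal stand-ins for the shared data structures.
public class NlpRequest  { public string text; public NlpRequest(string text) { this.text = text; } }
public class NlpResponse { public string text; }
```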
NlpSystemUnity (Abstract Base Class):
- Implements the INlpSystem interface.
- Serves as a foundational class for all NLP service integrations.
- Provides shared logic such as:
- Request formatting and dispatch
- Logging and debugging utilities
- Basic response parsing
- Designed to reduce redundancy and promote reuse among derived classes.
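A sketch of how such a base class can factor out shared logic (illustrative only; member names beyond SetSystemPrompt and Request are assumptions, not actual RIDE code):

```csharp
using System;

// Illustrative sketch only; not the actual RIDE implementation.
public abstract class NlpSystemUnity : INlpSystem
{
    protected string m_systemPrompt;

    // Shared configuration storage reused by every derived system.
    public virtual void SetSystemPrompt(string prompt) { m_systemPrompt = prompt; }

    // Provider-specific communication is left to the derived classes.
    public abstract void Request(NlpRequest request, Action<NlpResponse> onResponse);

    // Shared logging/debugging utility.
    protected void Log(string message) { UnityEngine.Debug.Log($"[NLP] {message}"); }
}
```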
Implementations (e.g., NlpSystemChatGPT, NlpSystemAWSLex, NlpSystemAnthropic):
- Inherit from NlpSystemUnity.
- Implement provider-specific communication with external LLM services.
- Override and extend base methods to handle:
- Authentication and API configuration
- Service-specific request/response formats
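In this pattern, a provider-specific class might look roughly like the sketch below. The class name, fields, and endpoint are hypothetical; it illustrates the override pattern, not actual RIDE code.

```csharp
using System;

// Hypothetical provider integration; names and URL are placeholders.
public class NlpSystemExampleProvider : NlpSystemUnity
{
    string m_apiKey;                                   // provider credential, e.g. loaded from ride.json
    string m_endpoint = "https://api.example.com/v1";  // placeholder service URL

    public override void Request(NlpRequest request, Action<NlpResponse> onResponse)
    {
        Log($"Sending request to {m_endpoint}");
        // 1. Format the request body in the provider's own schema.
        // 2. Attach authentication headers using m_apiKey.
        // 3. Dispatch the web request and parse the provider-specific
        //    JSON reply into a common NlpResponse passed to onResponse.
    }
}
```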
Project Setup:
- The provided NLP systems are in the 'Nlp' part of the RideSystemsCognition game object / prefab
- System configurations can be changed through the Debug menu > Main > NLP
Character Setup:
- Individual character configurations (e.g., LLM prompt, voice) are set per character in the VHCharacterProfile script
- Character configurations can be changed through the Debug menu > Main > NLP
Code: Each implementation of a specific NLP technology (e.g., ChatGPT) has its own RIDE system (e.g., NlpSystemChatGPT). RIDE systems need to be initialized and configured; after that, the most common actions are sending user input and receiving system responses for further processing. The NLP data structures for these are variations on NlpRequest and NlpResponse.
```csharp
NlpSystemChatGPT m_chatGPTSystem;
m_chatGPTSystem = Systems.Get<NlpSystemChatGPT>();
m_chatGPTSystem.SetSystemPrompt(profile.llmPrompt);
m_chatGPTSystem.Request(new NlpRequest(q), QuestionResponse); // Send user input and process the response in QuestionResponse
```
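The QuestionResponse callback referenced above could then consume the reply; the NlpResponse field name here is an assumption, not the actual RIDE member.

```csharp
// Hypothetical response handler; the actual NlpResponse members may differ.
void QuestionResponse(NlpResponse response)
{
    // For example, forward the LLM reply to the character's TTS or dialogue UI.
    UnityEngine.Debug.Log(response.text);
}
```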
Most provided NLP implementations are cloud services that require an API key:
- OpenAI ChatGPT. In the Organization Settings of the API Platform, select the API Keys menu to create your key.
- [Anthropic Claude](https://www.claude.com/platform/api). In the Claude Console, select the API Keys menu to create your key.
- AWS Lex V2. Follow the Getting Started documentation.
To add your own API keys to the VHToolkit, in Unity:
- Use the Debug menu at the top left and go to the Config sub menu
- Click Open Folder Location and open the ride.json file
- Add your keys to the appropriate sections:
- AzureSpeech
- OpenAIChatGPT
- AWSPolly
- Save the file
- Restart the Unity project
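The relevant portion of ride.json might then resemble the structure below. The field names inside each section are assumptions; keep whatever placeholders the file already contains and only fill in the key values.

```json
{
    "AzureSpeech":   { "key": "<your-azure-speech-key>" },
    "OpenAIChatGPT": { "key": "<your-openai-api-key>" },
    "AWSPolly":      { "key": "<your-aws-key>" }
}
```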
For local services on desktops and laptops, the VHToolkit uses a local endpoint wrapped around an AI model. Such models are often developed on Linux with Python and can run on Windows through the Windows Subsystem for Linux (WSL). See here for instructions on how to set up Rasa as a local NLP service.
WebGL requires a proxy (e.g., AWS Lambda) to mitigate CORS issues.