This example shows you how to build a chat application with persistent memory using GenSX. We'll use OpenAI's GPT-4o-mini model and store chat history in GenSX Cloud blob storage.
The `ChatMemoryWorkflow` takes a `threadId` and a message as inputs. Each chat thread maintains its own conversation history, enabling context-aware responses across multiple interactions.
Here's what happens when you run the workflow:
- The system loads any existing chat history for your specified thread
- Your message and chat history are processed using GPT-4o-mini
- The updated conversation history is saved
- The assistant's response is displayed
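The loop above can be sketched in TypeScript. This is a minimal, self-contained illustration of the pattern only, not the real GenSX APIs: `blobStore`, `chatComplete`, and the message shape are hypothetical stand-ins for `@gensx/storage`, `@gensx/openai`, and the actual workflow code in `src/index.tsx`.

```typescript
// Minimal sketch of the per-thread memory pattern.
// All names here are hypothetical stand-ins, not the real @gensx/* APIs.
type Message = { role: "system" | "user" | "assistant"; content: string };

// In-memory stand-in for GenSX Cloud blob storage, keyed by threadId.
const blobStore = new Map<string, Message[]>();

// Stand-in for the GPT-4o-mini call; the real workflow uses @gensx/openai.
async function chatComplete(history: Message[]): Promise<string> {
  return `echo: ${history[history.length - 1].content}`;
}

async function chatMemoryWorkflow(
  threadId: string,
  userInput: string,
): Promise<string> {
  // 1. Load any existing history for this thread.
  const history = blobStore.get(threadId) ?? [];
  history.push({ role: "user", content: userInput });

  // 2. Process the new message together with the prior history.
  const reply = await chatComplete(history);
  history.push({ role: "assistant", content: reply });

  // 3. Persist the updated history so the next call sees full context.
  blobStore.set(threadId, history);

  // 4. Return the assistant's response.
  return reply;
}
```

Calling `chatMemoryWorkflow` twice with the same `threadId` accumulates history across calls, while a new `threadId` starts with an empty conversation.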
The workflow uses:
- `@gensx/core` for workflow management
- `@gensx/openai` for OpenAI integration
- `@gensx/storage` for persistent chat history storage
1. Log in to GenSX (if you haven't already):

   ```bash
   npx gensx login
   ```

2. Install the required dependencies:

   ```bash
   pnpm install
   ```

3. Set up your environment variables:

   ```bash
   export OPENAI_API_KEY=your_api_key_here
   ```
To run the workflow in GenSX Cloud:
1. Deploy your workflow:

   ```bash
   pnpm run deploy
   ```

2. Start a conversation by calling the workflow:

   ```bash
   gensx run ChatMemoryWorkflow --input '{"threadId": "thread-1", "userInput": "What is the capital of France?"}'
   ```

3. Continue the conversation by using the same `threadId`:

   ```bash
   gensx run ChatMemoryWorkflow --input '{"threadId": "thread-1", "userInput": "Tell me more about its history"}'
   ```
Once deployed, you can go to the GenSX console to see your workflow, test it, analyze traces, and get code snippets.
You can run the workflow directly using the `src/index.tsx` file:

```bash
pnpm dev thread-1 "What is the capital of France?"
```

You can also test the workflow through a local API server:

```bash
pnpm start
```

This will start a local API server, and you can call the workflow APIs via curl or any HTTP client:
```bash
curl -X POST http://localhost:1337/workflows/ChatMemoryWorkflow \
  -H "Content-Type: application/json" \
  -d '{
    "threadId": "thread-1",
    "message": "Hello, how are you?"
  }'
```

A Swagger UI will also be available at http://localhost:1337/swagger-ui to view the API details and test the workflow.