This example demonstrates how to build an AI Healthcare Support Bot using PowerMem for intelligent memory management and LangChain for conversation handling, with OceanBase as the database backend.
- 🏥 Patient Memory Management: Persistent storage of patient information, symptoms, and medical history
- 🧠 Intelligent Fact Extraction: Automatic extraction of medical facts from conversations
- 💬 Context-Aware Responses: Personalized responses based on patient history
- 🔄 Multi-Turn Conversations: Support for continuous dialogue with context preservation
- 🔒 Privacy Protection: Patient data isolation through user_id
- 🚀 Scalable Storage: OceanBase database backend for enterprise-scale deployments
┌─────────────────┐
│ LangChain 1.1+ │ Conversation handling using LCEL (LangChain Expression Language)
│ (Runnable API) │ - ChatPromptTemplate
│ │ - Runnable chains
│ │ - Message management
└────────┬────────┘
│
▼
┌─────────────────┐
│ PowerMem │ Intelligent memory management
│ (Memory Layer) │ - Fact extraction
│ │ - Semantic search
│ │ - Context retrieval
└────────┬────────┘
│
▼
┌─────────────────┐
│ OceanBase │ Vector database for scalable storage
│ (Database) │ - Patient memories
│ │ - Medical history
│ │ - Symptom tracking
└─────────────────┘
- Python 3.11+
- OceanBase Database (configured and running)
- API Keys:
- LLM API key (OpenAI, Qwen, etc.)
- Embedding API key (if different from LLM)
Option 1: Install from requirements.txt (Recommended)
cd examples/langchain
pip install -r requirements.txt

Option 2: Install manually
# Core dependencies
pip install powermem python-dotenv
# LangChain dependencies
pip install langchain>=1.1.0 langchain-core>=1.1.0 langchain-openai>=1.1.0 langchain-community>=0.4.1
# OceanBase dependencies (if not already installed)
pip install pyobvector sqlalchemy

Option 3: Install all at once
pip install powermem python-dotenv langchain>=1.1.0 langchain-core>=1.1.0 langchain-openai>=1.1.0 langchain-community>=0.4.1 pyobvector sqlalchemy

Copy the configuration template and edit it:
# From project root
cp .env.example .env

Edit .env and configure:
# Database Configuration
DATABASE_PROVIDER=oceanbase
OCEANBASE_HOST=127.0.0.1
OCEANBASE_PORT=2881
OCEANBASE_USER=root@sys
OCEANBASE_PASSWORD=password
OCEANBASE_DATABASE=powermem
OCEANBASE_COLLECTION=healthcare_memories
# LLM Configuration
LLM_PROVIDER=qwen # or openai
LLM_API_KEY=your_llm_api_key
LLM_MODEL=qwen-plus # or gpt-3.5-turbo
# Embedding Configuration
EMBEDDING_PROVIDER=qwen # or openai
EMBEDDING_API_KEY=your_embedding_api_key
EMBEDDING_MODEL=text-embedding-v4
EMBEDDING_DIMS=1536

Ensure your OceanBase instance is running and accessible:
# Test connection (adjust host/port as needed)
mysql -h 127.0.0.1 -P 2881 -u root -p

Run a predefined conversation demonstration:
cd examples/langchain
python healthcare_support_bot.py --mode demo

This will:
- Initialize the bot with OceanBase
- Run through a sample patient conversation
- Demonstrate memory storage and retrieval
- Show patient information summary
Run the bot in interactive mode for real-time conversations:
cd examples/langchain
python healthcare_support_bot.py --mode interactive

Interactive Commands:
- Type your message to chat with the bot
- Type `summary` to see the patient information summary
- Type `quit` or `exit` to end the conversation
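A minimal sketch of how such a command loop can dispatch these inputs. The `StubBot` class and `handle_command` helper below are invented for illustration; the real bot's interface in healthcare_support_bot.py may differ:

```python
# Toy dispatcher for the interactive commands listed above.
# StubBot stands in for the real healthcare bot, whose chat() would invoke
# the LangChain chain and whose summary would come from PowerMem.
class StubBot:
    def get_patient_summary(self) -> dict:
        return {"total_memories": 0}

    def chat(self, message: str) -> str:
        return f"echo: {message}"


def handle_command(line: str, bot) -> str:
    cmd = line.strip().lower()
    if cmd in ("quit", "exit"):
        return "bye"
    if cmd == "summary":
        return str(bot.get_patient_summary())
    return bot.chat(line)  # anything else is a normal chat turn


print(handle_command("summary", StubBot()))
```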
Specify a patient ID for the conversation:
python healthcare_support_bot.py --mode interactive --patient-id patient_john_001

The HealthcarePowerMemMemory class integrates PowerMem with LangChain 1.1.0+ using the new API:
- Message Management: Manages conversation history as a list of `BaseMessage` objects
- Save Context: Automatically saves conversations to PowerMem with intelligent fact extraction
- Load Context: Retrieves relevant patient history based on the current query
- Privacy: Isolates patient data by `user_id`
The implementation uses LangChain's new Runnable API instead of the legacy ConversationBufferMemory class.
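A self-contained sketch of what such a memory wrapper can look like. `FakeMemoryStore` is a stand-in so the sketch runs without OceanBase or PowerMem, and the method names other than `user_id` are invented for illustration; the real class lives in healthcare_support_bot.py and uses the actual PowerMem client:

```python
from dataclasses import dataclass, field

# Stand-in for a PowerMem client so this sketch runs without external
# services. The real bot uses PowerMem's client backed by OceanBase.
@dataclass
class FakeMemoryStore:
    records: list = field(default_factory=list)

    def add(self, text: str, user_id: str) -> None:
        self.records.append({"text": text, "user_id": user_id})

    def search(self, query: str, user_id: str) -> list:
        # Naive keyword match in place of PowerMem's semantic search.
        return [r["text"] for r in self.records
                if r["user_id"] == user_id
                and any(w in r["text"].lower() for w in query.lower().split())]


class HealthcareMemory:
    """Sketch of a LangChain-style memory layer over a memory store."""

    def __init__(self, store, user_id: str):
        self.store = store
        self.user_id = user_id  # isolates each patient's data

    def save_context(self, user_msg: str, bot_msg: str) -> None:
        self.store.add(f"Patient: {user_msg}\nBot: {bot_msg}", self.user_id)

    def load_context(self, query: str) -> str:
        hits = self.store.search(query, self.user_id)
        return "\n".join(f"- {h}" for h in hits)


memory = HealthcareMemory(FakeMemoryStore(), user_id="patient_alice_001")
memory.save_context("I've been having headaches.", "Sorry to hear that.")
print(memory.load_context("headaches"))
```

Note how every read and write carries `user_id`: that single key is what keeps one patient's memories invisible to another patient's session.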
PowerMem automatically extracts medical facts from conversations:
- Symptoms: Headaches, pain, discomfort, etc.
- Medications: Current prescriptions, dosages
- Medical History: Past conditions, diagnoses
- Patient Information: Demographics, preferences
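PowerMem's extraction is LLM-driven; purely to illustrate the kind of categorized facts it produces, here is a toy keyword-based categorizer. The keyword lists and the `categorize_fact` function are invented for this sketch:

```python
# Toy categorizer illustrating the fact categories above. PowerMem itself
# uses an LLM for extraction; these keyword lists are invented for the sketch.
CATEGORY_KEYWORDS = {
    "symptom": ["headache", "pain", "nausea", "discomfort"],
    "medication": ["ibuprofen", "mg", "prescription", "dose"],
    "history": ["history of", "diagnosed", "past condition"],
}


def categorize_fact(fact: str) -> str:
    text = fact.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return category
    return "patient_info"  # fallback: demographics, preferences, etc.


print(categorize_fact("taking ibuprofen 200mg twice daily"))
```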
The bot uses retrieved patient context to provide personalized responses. The implementation uses LangChain's ChatPromptTemplate and RunnableLambda to dynamically inject patient context:
# Patient context is automatically retrieved and included in prompts
Patient Context (from previous conversations):
- Patient Alice has been experiencing headaches
- Headaches occur in the afternoon, moderate intensity
- Currently taking ibuprofen 200mg twice daily
- Has history of migraines

The conversation chain is built using LangChain Expression Language (LCEL):
chain = (
RunnableLambda(format_messages) # Retrieve patient context
| ChatPromptTemplate # Format prompt with context
| ChatOpenAI # Generate response
)

All patient memories are stored in OceanBase with:
- Vector Embeddings: For semantic search
- Metadata: Category, timestamps, importance scores
- Scalability: Handles large-scale patient data
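As an illustration only, a stored memory can be pictured as a row like the dict below; the field names are assumptions made for this sketch, not PowerMem's actual OceanBase schema:

```python
# Illustrative shape of a stored memory row; the actual column names and
# types are defined by PowerMem's OceanBase integration, not by this sketch.
memory_record = {
    "id": "mem_0001",
    "user_id": "patient_alice_001",       # privacy isolation key
    "text": "Patient Alice experiences afternoon headaches",
    "embedding": [0.012, -0.087, 0.441],  # truncated; real length = EMBEDDING_DIMS
    "metadata": {
        "category": "symptom",
        "importance": 0.8,
        "created_at": "2025-01-15T14:30:00Z",
    },
}


# Semantic search compares the query embedding against stored embeddings,
# restricted to rows whose user_id matches the current patient.
def visible_to(records: list, user_id: str) -> list:
    return [r for r in records if r["user_id"] == user_id]


print(len(visible_to([memory_record], "patient_alice_001")))
```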
Patient: Hello, I'm Alice. I've been experiencing headaches for the past few days.
Bot: Hello Alice, I'm sorry to hear about your headaches. I'm here to help provide
general guidance. Can you tell me more about when these headaches typically occur
and their intensity?
Patient: The headaches usually happen in the afternoon, and they're moderate in intensity.
Bot: Thank you for that information. Afternoon headaches can have various causes.
Are you experiencing any other symptoms along with the headaches?
Patient: I'm currently taking ibuprofen 200mg twice daily for the pain.
Bot: I understand you're managing the pain with ibuprofen. It's important to follow
the recommended dosage. Have you consulted with a healthcare professional about
these headaches?
[Memory stored: Patient Alice experiences afternoon headaches, moderate intensity,
taking ibuprofen 200mg twice daily]
The bot can provide a summary of stored patient information:
summary = bot.get_patient_summary()
# Returns:
# {
# "total_memories": 15,
# "symptom_mentions": 8,
# "medication_mentions": 3,
# "history_mentions": 4,
# "recent_memories": [...]
# }

- `DATABASE_PROVIDER`: Set to `oceanbase`
- `OCEANBASE_HOST`: OceanBase server hostname
- `OCEANBASE_PORT`: OceanBase port (default: 2881)
- `OCEANBASE_DATABASE`: Database name
- `OCEANBASE_COLLECTION`: Collection/table name for memories
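The per-category counts returned by `get_patient_summary()` earlier can be derived by tallying stored memories. The sketch below is self-contained and uses invented record fields (`category`, `text`) rather than PowerMem's real schema:

```python
from collections import Counter


def summarize(memories: list[dict]) -> dict:
    """Aggregate stored memories into a get_patient_summary()-style dict."""
    counts = Counter(m.get("category", "other") for m in memories)
    return {
        "total_memories": len(memories),
        "symptom_mentions": counts["symptom"],
        "medication_mentions": counts["medication"],
        "history_mentions": counts["history"],
        "recent_memories": memories[-3:],  # newest-last ordering assumed
    }


memories = [
    {"category": "symptom", "text": "afternoon headaches"},
    {"category": "medication", "text": "ibuprofen 200mg twice daily"},
    {"category": "history", "text": "history of migraines"},
    {"category": "symptom", "text": "moderate intensity"},
]
print(summarize(memories)["total_memories"])
```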
- `LLM_PROVIDER`: `qwen`, `openai`, or other supported providers
- `LLM_MODEL`: Model name (e.g., `qwen-plus`, `gpt-3.5-turbo`)
- `LLM_TEMPERATURE`: Response creativity (0.0-1.0)
- `EMBEDDING_PROVIDER`: Embedding model provider
- `EMBEDDING_MODEL`: Embedding model name
- `EMBEDDING_DIMS`: Vector dimensions (must match the model)
Problem: Cannot connect to OceanBase
Solution:
- Verify OceanBase is running: `mysql -h 127.0.0.1 -P 2881 -u root -p`
- Check the configuration in `.env`
- Verify network connectivity and firewall settings
Problem: ModuleNotFoundError: No module named 'langchain'
Solution:
pip install langchain>=1.1.0 langchain-core>=1.1.0 langchain-openai>=1.1.0 langchain-community>=0.4.1

Problem: LLM or embedding API errors
Solution:
- Verify the API keys in `.env`
- Check API key validity and quotas
- Ensure correct provider is configured
Problem: Conversations not being stored
Solution:
- Check OceanBase connection
- Verify that `infer=True` is set in the `save_to_powermem` method
- Check database permissions
- Review error messages in console
- Patient Privacy: Always use a unique `user_id` for each patient
- Data Security: Encrypt sensitive medical information
- Regular Backups: Backup OceanBase database regularly
- Monitoring: Monitor memory usage and database performance
- Compliance: Ensure compliance with healthcare data regulations (HIPAA, etc.)
- The bot provides general health information only
- Always recommend consulting healthcare professionals
- Never diagnose medical conditions
- Not a replacement for professional medical advice
- Basic Usage - Simple memory operations
- Agent Memory - Multi-agent memory management
- Intelligent Memory - Advanced memory features
For issues or questions:
- Check the main README
- Review PowerMem documentation
- Open an issue on GitHub