This guide will walk you through deploying the AI-Powered Ride Management System using Docker.
Before you begin, ensure you have:
- Docker installed on your system
  - Install Docker Desktop (Windows/Mac)
  - Install Docker Engine (Linux)
  - Verify installation:

    ```
    docker --version
    ```
- API Keys (all free tier available):
  - Groq API Key (Required) - Get it from console.groq.com
  - LangSmith API Key (Optional) - For monitoring/tracing: smith.langchain.com
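If you plan to run the agent outside Docker, a quick pre-flight check can confirm the keys are visible to the process. A minimal sketch (the `check_env` helper is illustrative, not part of the project):

```python
import os

def check_env():
    """Illustrative pre-flight check: GROQ_API_KEY is required,
    the LangSmith key is optional."""
    if not os.environ.get("GROQ_API_KEY"):
        raise RuntimeError("GROQ_API_KEY is not set")
    if not os.environ.get("LANGSMITH_API_KEY"):
        print("Note: LANGSMITH_API_KEY not set; tracing will be disabled.")
```

Running this before startup surfaces a missing key as a clear error instead of a failed LLM call later.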
Pull the prebuilt image:

```
docker pull your-dockerhub-username/ride-agent:latest
```

**Option A: Pass API Keys Directly (Quick Start)**
```
docker run --rm -it \
  -e GROQ_API_KEY=your_groq_api_key_here \
  -e LANGSMITH_API_KEY=your_langsmith_key_here \
  your-dockerhub-username/ride-agent:latest
```

**Option B: Use Environment File (Recommended for Production)**
- Create a `.env` file in your working directory:

  ```
  # .env file
  GROQ_API_KEY=gsk_xxxxxxxxxxxxxxxxxxxxx
  LANGSMITH_API_KEY=ls_xxxxxxxxxxxxxxxxxxxxx
  LANGSMITH_TRACING_V2=true
  LANGSMITH_PROJECT=ride-agent-production
  ```

- Run with the env file:
  ```
  docker run --rm -it \
    --env-file .env \
    your-dockerhub-username/ride-agent:latest
  ```

Once the container starts, you'll see a welcome message. You can now interact with the chatbot:
```
🚗 Welcome to Ride Management Agent!
Type 'exit' to quit.

You: I need a ride from Times Square to JFK Airport
Agent: I'll help you book that ride...
```
```
git clone https://github.com/your-username/ride-agent.git
cd ride-agent
```

Copy the example environment file and add your keys:
```
cp .env.example .env
nano .env  # or use any text editor
```

Update the .env file:

```
GROQ_API_KEY=your_groq_key_here
LANGSMITH_API_KEY=your_langsmith_key_here
LANGSMITH_TRACING_V2=true
LANGSMITH_PROJECT=my-ride-agent
```

The project includes multiple implementations:
- `agenticV1/` - ReAct agent (recommended for learning)
- `agenticV2/` - Instruction-based agent
- `agenticV3/` - Enhanced V1 with improvements (recommended for production)
- `workflow/` - Deterministic workflow system
Navigate to your chosen version:

```
cd agenticV3  # or agenticV1, agenticV2, workflow
```

Build the image:

```
docker build -t ride-agent:local .
```

Run it with your env file:

```
docker run --rm -it \
  --env-file .env \
  ride-agent:local
```

For development and testing without Docker:
```
# Create virtual environment
python -m venv venv

# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On Mac/Linux:
source venv/bin/activate

cd agenticV3  # or your chosen version
pip install -r requirements.txt

cp .env.example .env
# Edit .env with your API keys

python main.py
```

| Variable | Required | Description | Default |
|---|---|---|---|
| `GROQ_API_KEY` | ✅ Yes | Groq LLM API key | - |
| `LANGSMITH_API_KEY` | ❌ No | LangSmith tracing key | - |
| `LANGSMITH_TRACING_V2` | ❌ No | Enable LangSmith tracing | `false` |
| `LANGSMITH_PROJECT` | ❌ No | LangSmith project name | `default` |
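These variables can be read into one place at startup. A sketch of a config loader mirroring the defaults in the table (the `Config` dataclass and `load_config` are hypothetical helpers, not the project's actual code):

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class Config:
    groq_api_key: str
    langsmith_api_key: Optional[str] = None
    langsmith_tracing_v2: bool = False     # table default: false
    langsmith_project: str = "default"     # table default: "default"

def load_config() -> Config:
    # GROQ_API_KEY is required, so indexing raises KeyError when it is
    # missing; the optional variables fall back to the table's defaults.
    return Config(
        groq_api_key=os.environ["GROQ_API_KEY"],
        langsmith_api_key=os.environ.get("LANGSMITH_API_KEY"),
        langsmith_tracing_v2=os.environ.get("LANGSMITH_TRACING_V2", "false").lower() == "true",
        langsmith_project=os.environ.get("LANGSMITH_PROJECT", "default"),
    )
```

Centralizing the lookups keeps the required/optional distinction in one spot instead of scattered `os.environ` calls.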
You can specify different Groq models by modifying the configuration:
```python
# In graph.py or main.py
model = "llama-3.3-70b-versatile"  # Default
# or
model = "mixtral-8x7b-32768"  # Alternative
```

Available models:

- `llama-3.3-70b-versatile` (recommended)
- `llama-3.1-70b-versatile`
- `mixtral-8x7b-32768`
- `gemma-7b-it`
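If you want to switch models without editing source, one option is to read the choice from an environment variable and validate it against the list above. A sketch (`GROQ_MODEL` and `resolve_model` are hypothetical names, not something the project defines):

```python
import os

SUPPORTED_MODELS = {
    "llama-3.3-70b-versatile",
    "llama-3.1-70b-versatile",
    "mixtral-8x7b-32768",
    "gemma-7b-it",
}

def resolve_model(default: str = "llama-3.3-70b-versatile") -> str:
    # Read the model name from a (hypothetical) GROQ_MODEL variable,
    # fall back to the recommended default, and reject typos early.
    model = os.environ.get("GROQ_MODEL", default)
    if model not in SUPPORTED_MODELS:
        raise ValueError(f"Unsupported Groq model: {model}")
    return model
```

Failing fast on an unknown model name is friendlier than a cryptic API error on the first request.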
```
You: Book a ride from Central Park to Brooklyn Bridge
```

Expected output: booking confirmation with details.

```
You: Cancel booking BK123456
```

Expected output: cancellation processed with fee information.

```
You: What's your cancellation policy?
```

Expected output: detailed policy from the knowledge base.

```
You: Show my active bookings
```

Expected output: a list of current rides (if any).
**Solution:** Verify your .env file or environment variables are correctly set:

```
# Check if variable is set
echo $GROQ_API_KEY

# If using Docker, verify the env file path
docker run --rm -it --env-file /full/path/to/.env ride-agent:latest
```

**Solution:** Ensure all dependencies are installed:

```
pip install -r requirements.txt --upgrade
```

**Solution:** The RAG system needs to build the vector index on first run:

```
cd RAG
python indexing.py
```

**Solution:** Run in interactive mode with the -it flags and check the logs:

```
docker run --rm -it your-dockerhub-username/ride-agent:latest
# or
docker logs container_id
```

**Solution:** Slow responses are usually due to:

- Cold start of the Groq API (the first request is slower)
- Network latency
- Large document retrieval in RAG

Consider using a local embedding model or caching for faster responses.
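As one example of the caching idea, repeated knowledge-base questions can be memoized so identical queries skip the slow retrieval step. A sketch with a stand-in retrieval function (`slow_retrieve` and `cached_retrieve` are illustrative; the project's real RAG lookup lives elsewhere):

```python
from functools import lru_cache

def slow_retrieve(query: str) -> list:
    # Stand-in for the real vector-store search in the RAG system.
    return [f"document matching {query!r}"]

@lru_cache(maxsize=128)
def cached_retrieve(query: str) -> tuple:
    # Results are returned as a tuple so the cached value is immutable.
    return tuple(slow_retrieve(query))
```

With this in place, a repeated question like the cancellation-policy query above is served from memory on the second ask.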
To enable detailed tracing and monitoring:
- Sign up for LangSmith
- Create a new project
- Get your API key from settings
- Add to your `.env`:

  ```
  LANGSMITH_API_KEY=ls_xxxxxxxxxxxxxxxxxxxxx
  LANGSMITH_TRACING_V2=true
  LANGSMITH_PROJECT=my-ride-agent
  ```

- View traces at smith.langchain.com
For visual debugging of agent flows:
- Install LangGraph Studio (separate application)
- Open your project in the studio
- View real-time graph execution
- Set breakpoints and inspect state
Pull the latest image:

```
docker pull your-dockerhub-username/ride-agent:latest
```

Or update a local clone:

```
git pull origin main
pip install -r requirements.txt --upgrade
```

To clean up Docker images:

```
# List images
docker images | grep ride-agent

# Remove specific image
docker rmi ride-agent:latest

# Remove all unused images
docker image prune -a
```

To remove a local virtual environment:

```
deactivate  # if activated
rm -rf venv
```

If you encounter issues not covered here:
- Check the Issues page
- Review ARCHITECTURE.md for technical details
- Open a new issue with:
  - Error message
  - Steps to reproduce
  - Your environment (OS, Docker version, Python version)
After successful setup:
- ✅ Read ARCHITECTURE.md to understand the system design
- ✅ Explore different agent versions (V1, V2, V3, workflow)
- ✅ Customize the system prompts in `system_prompts.txt`
- ✅ Train your own cancellation clustering model with custom data
- ✅ Extend the RAG knowledge base with your own documents

🎉 Setup Complete! Ready to explore AI-powered ride management! 🚗