A conversational recommendation system powered by a three-agent AI crew that collaborates in real-time to deliver personalised film suggestions. Built with CrewAI, GroqCloud, Redis, and Streamlit.
- Overview
- Architecture
- Agent Pipeline
- Tech Stack
- Project Structure
- Prerequisites
- Installation
- Configuration
- Running the App
- Using the Notebook
- How It Works
- Dataset
## Overview

The MultiAgent Movie Recommender accepts a natural language query (e.g. "feel-good comedies", "mind-bending sci-fi") and passes it through a sequential pipeline of three specialised AI agents. Each agent has a distinct role (analysing preferences, matching candidate films, and generating a personalised reply) before the final recommendation is streamed back to the user through a polished chat interface.
Chat context is persisted in Redis so the system remembers earlier messages within a session, enabling follow-up queries like "something similar but more recent".
## Architecture

```
User Input (Streamlit Chat)
             │
             ▼
┌──────────────────────────────────────────────────┐
│            Workflow Graph (LangGraph)            │
│                                                  │
│  ┌────────────────────────────────────────────┐  │
│  │              Node: "run_crew"              │  │
│  │                                            │  │
│  │   ┌────────────────────────────────────┐   │  │
│  │   │    Agent Orchestration (CrewAI)    │   │  │
│  │   │                                    │   │  │
│  │   │  ┌────────────┐    ┌────────────┐  │   │  │
│  │   │  │ Preference │───▶│   Movie    │  │   │  │
│  │   │  │  Analyst   │    │  Matcher   │  │   │  │
│  │   │  │ (Agent 1)  │    │ (Agent 2)  │  │   │  │
│  │   │  └────────────┘    └─────┬──────┘  │   │  │
│  │   │                          │         │   │  │
│  │   │            ┌─────────────▼────┐    │   │  │
│  │   │            │  Recommendation  │    │   │  │
│  │   │            │    Generator     │    │   │  │
│  │   │            │    (Agent 3)     │    │   │  │
│  │   │            └─────────┬────────┘    │   │  │
│  │   └──────────────────────┼─────────────┘   │  │
│  └──────────────────────────┼─────────────────┘  │
│                             │ result             │
│                            END                   │
└─────────────────────────────┼────────────────────┘
                              │
             ┌────────────────▼────────────────┐
             │      Redis (Cloud / Local)      │
             │  • Vector DB (embeddings)       │
             │  • LLM Cache                    │
             │  • Chat Message History         │
             └─────────────────────────────────┘
```
## Agent Pipeline
| # | Agent | Role | Key Responsibility |
|---|---|---|---|
| 1 | Preference Analyst | Understands the user | Parses the query + chat history to build a detailed taste profile (genres, themes, mood) |
| 2 | Movie Matcher | Searches the catalogue | Uses semantic similarity search against the Redis vector store to surface the best candidate films |
| 3 | Recommendation Generator | Crafts the reply | Ranks candidates by relevance and writes a personalised, conversational recommendation with reasons |
All three agents share the **Movie Database Lookup** tool, a CrewAI `@tool` that performs vector similarity search over the 3,000-title MovieLens index stored in Redis.
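Under the hood, that tool is a nearest-neighbour search over title embeddings. Here is a self-contained sketch of the idea, with toy three-dimensional vectors standing in for the real MiniLM embeddings and the Redis index (all names and data below are illustrative, not the app's code):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy stand-ins for the MiniLM embeddings stored in Redis.
movie_index = {
    "Toy Story (1995)":    [0.9, 0.1, 0.0],
    "Alien (1979)":        [0.0, 0.8, 0.6],
    "Blade Runner (1982)": [0.1, 0.7, 0.7],
}

def movie_database_lookup(query_vector, k=2):
    """Return the k titles whose embeddings are closest to the query."""
    ranked = sorted(
        movie_index.items(),
        key=lambda item: cosine_similarity(query_vector, item[1]),
        reverse=True,
    )
    return [title for title, _ in ranked[:k]]

print(movie_database_lookup([0.0, 0.75, 0.65]))
# → ['Alien (1979)', 'Blade Runner (1982)']
```

The real tool does the same ranking, but against 384-dimensional MiniLM vectors using Redis's index rather than a Python loop.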
## Tech Stack

| Layer | Technology |
|---|---|
| LLM | Groq (`llama-3.3-70b-versatile`) |
| Agent orchestration | CrewAI |
| Workflow graph | LangGraph (StateGraph) |
| Embeddings | HuggingFace Inference API (`sentence-transformers/all-MiniLM-L6-v2`) |
| Vector DB | Redis (via langchain-redis) |
| Chat memory | RedisChatMessageHistory (langchain-redis) |
| UI | Streamlit |
| LLM proxy | litellm (CrewAI dependency) |
| Dataset | MovieLens ml-latest-small |
## Project Structure

```
multiagent/
├── streamlit_app.py     # Streamlit web application (main entry point)
├── multiagent.ipynb     # Jupyter notebook (exploration & prototyping)
├── credentials.py       # API keys and Redis connection details
├── requirements.txt     # Python dependencies
└── ml-latest-small/     # MovieLens dataset
    └── movies.csv
```
## Prerequisites
- Python 3.10+
- A free Groq API key
- A free HuggingFace account & token
- A Redis instance (e.g. a free Redis Cloud database)
## Installation

```bash
# 1. Clone / download the project
cd multiagent

# 2. Create a virtual environment (recommended)
python -m venv .venv
.venv\Scripts\activate       # Windows
# source .venv/bin/activate  # macOS / Linux

# 3. Install dependencies
pip install -r requirements.txt
```

## Configuration

Create a file called `credentials.py` in the `multiagent/` directory:
```python
# credentials.py
GROQ_API_KEY   = "gsk_..."          # Groq API key
HF_TOKEN       = "hf_..."           # HuggingFace token
REDIS_HOST     = "your-redis-host"  # e.g. redis-12345.c1.us-east-1-1.ec2.cloud.redislabs.com
REDIS_PORT     = 12345              # your Redis port (integer)
REDIS_PASSWORD = "your-password"    # Redis password
REDIS_URL      = f"redis://default:{REDIS_PASSWORD}@{REDIS_HOST}:{REDIS_PORT}"
```

## Running the App

```bash
streamlit run streamlit_app.py
```

The app will open at http://localhost:8501.
On first run it will:
- Connect to Redis and verify the connection.
- Download the `sentence-transformers/all-MiniLM-L6-v2` embedding model via the HuggingFace Inference API.
- Read `ml-latest-small/movies.csv`, embed the first 3,000 titles, and index them in Redis.
- Initialise the three CrewAI agents and the Groq LLM.
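The app itself reads the CSV with pandas and slices it with `head(3000)`, but the "read the first 3,000 titles" step can be sketched with the standard library alone (hypothetical helper, not code from the app):

```python
import csv
from itertools import islice

def load_titles(path, limit=3000):
    """Read up to `limit` (title, genres) pairs from a MovieLens movies.csv."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)  # columns: movieId,title,genres
        return [(row["title"], row["genres"]) for row in islice(reader, limit)]
```

Each returned pair is what gets embedded and written into the Redis index.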
Subsequent runs re-use the cached resources (Streamlit `@st.cache_resource`), so startup is much faster.
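Conceptually, `@st.cache_resource` memoises an expensive constructor so the setup runs once per process and every rerun reuses the same object. A toy stand-in (not Streamlit's implementation, just the behaviour it provides):

```python
def cache_resource(fn):
    """Toy stand-in for @st.cache_resource: build once, reuse afterwards."""
    cached = {}
    def wrapper(*args):
        if args not in cached:
            cached[args] = fn(*args)  # expensive setup runs only on first call
        return cached[args]
    return wrapper

calls = []

@cache_resource
def connect(host):
    calls.append(host)  # pretend this opens a Redis connection
    return f"connection-to-{host}"

connect("localhost")
connect("localhost")
print(len(calls))  # → 1  (the second call reuses the cached resource)
```

In the app this pattern wraps the Redis connection, the vector index build, and the agent/LLM initialisation.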
## Using the Notebook

Open `multiagent.ipynb` in VS Code or JupyterLab to explore the system interactively:
```bash
jupyter lab multiagent.ipynb
```

The notebook walks through:
- Installing / verifying package versions
- Connecting to Redis
- Loading and embedding the MovieLens dataset
- Defining the three agents and their tasks
- Assembling the `Crew`
- Wrapping the crew in a LangGraph `StateGraph`
- Running an interactive terminal-based recommendation loop
## How It Works

The first 3,000 movie titles from `movies.csv` are embedded with `sentence-transformers/all-MiniLM-L6-v2` (via the HuggingFace Inference API, so no local GPU is required) and stored in a Redis `SearchIndex` under the key `movie_recommendations`.
The user types a free-form request in the Streamlit chat input, or clicks one of eight pre-built suggestion chips (e.g. "Sci-fi adventures").
The message is handed to the workflow via:

```python
kickoff(inputs={"user_input": "...", "chat_history": [...]})
```
CrewAI runs the three agents sequentially:
- Preference Analyst examines the query and recent chat turns to extract genres, themes, and mood signals.
- Movie Matcher invokes the `Movie Database Lookup` tool (Redis similarity search) and surfaces the top candidates.
- Recommendation Generator ranks those candidates and returns a conversational reply with a reason for each pick.
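Stripped of the CrewAI machinery, this sequential hand-off amounts to each agent's output feeding the next. A plain-function sketch (the functions and data are illustrative stand-ins only; the real agents are LLM-driven):

```python
def preference_analyst(user_input, chat_history):
    """Agent 1 stand-in: distil the query into a taste profile."""
    return {"query": user_input, "mood": "light", "genres": ["Comedy"]}

def movie_matcher(profile):
    """Agent 2 stand-in: pretend to run the Redis similarity search."""
    catalogue = {"Comedy": ["Groundhog Day (1993)", "Paddington 2 (2017)"]}
    return [m for g in profile["genres"] for m in catalogue.get(g, [])]

def recommendation_generator(profile, candidates):
    """Agent 3 stand-in: turn candidates into a conversational reply."""
    top = candidates[0]
    return f"For a {profile['mood']} watch, try {top}."

profile = preference_analyst("feel-good comedies", chat_history=[])
candidates = movie_matcher(profile)
print(recommendation_generator(profile, candidates))
# → For a light watch, try Groundhog Day (1993).
```

CrewAI's sequential process wires the real agents together in exactly this order, passing each task's output forward as context.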
While the crew is running, the Streamlit UI shows an inline live progress view with:
- A green block for each completed agent task (first 400 characters of output).
- A blue block for the current agent step / tool call (first 300 characters).
Once the crew finishes, the live cards are replaced by a collapsed "Agent reasoning" expander showing the full output of every completed task.
Every user message and AI reply is appended to `RedisChatMessageHistory`. On the next query the full history is serialised and sent to the crew so the agents can maintain conversational context.
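The serialisation step can be pictured as flattening the stored turns into the `chat_history` input the crew receives. The format below is purely illustrative (`RedisChatMessageHistory` actually stores LangChain message objects):

```python
def serialise_history(messages):
    """Flatten stored (role, content) turns into one context string for the crew."""
    return "\n".join(f"{role}: {content}" for role, content in messages)

# Example session enabling a follow-up like "something similar but more recent":
history = [
    ("human", "Recommend a mind-bending sci-fi film."),
    ("ai", "You might enjoy Inception (2010)."),
    ("human", "Something similar but more recent?"),
]
print(serialise_history(history))
```

Because the whole history travels with each query, the Preference Analyst can resolve "similar" and "more recent" against the earlier turns.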
## Dataset

MovieLens Small by GroupLens Research:
| File | Description |
|---|---|
| `movies.csv` | 9,742 movies with title and genres |
| `ratings.csv` | 100,836 ratings from 610 users |
| `tags.csv` | User-applied tags |
| `links.csv` | TMDb / IMDb identifiers |
Only the first 3,000 rows of `movies.csv` are indexed into the vector store to stay within Redis free-tier memory limits. You can adjust this in your code:

```python
sample_df = movies_df.head(3000)  # ← change to index more titles if your Redis plan allows
```