A hands-on workshop exploring the fundamentals of AI agents, from simple LLM calls to intelligent systems that know when to use tools.
Note
If you attended the workshop, we'd love to hear your feedback. Please fill out this form.
- Duration: 60 minutes
- Instructor: JP Hwang (GitHub | LinkedIn) - Senior Developer Educator at Weaviate
- Level: Intermediate
- Format: Live coding demo with follow-along exercises
By the end of this workshop, you'll understand:
- The difference between workflows and agentic systems
- When agents should (and shouldn't) use tools
- How to build production-ready agents with Weaviate integration
- Basic Python knowledge
- GitHub account (for Codespaces)
- Familiarity with APIs and async/await patterns
- Click the green "Code" button on this repo
- Select "Codespaces" → "Create codespace on main"
- Wait ~1-2 minutes for automatic environment setup
- Add your API keys to the `.env` file (see below)
- Run `python setup_check.py` to verify everything works
- You're ready to go!
```shell
git clone https://github.com/weaviate-tutorials/weaviate-pydantic-ai-workshop.git
cd weaviate-pydantic-ai-workshop
uv sync
source .venv/bin/activate
cp .env.example .env
# Add your API keys to .env (see below)
python setup_check.py  # Verify setup
```

Add these to your `.env` file:
| Service | Get Key From | Required For |
|---|---|---|
| Weaviate Cloud | console.weaviate.cloud (free tier) | Steps 4-6 |
| Anthropic API | console.anthropic.com | All steps |
| Cohere API | dashboard.cohere.com (free tier) | Steps 4-6 |
During the workshop, temporary API keys and a Weaviate Cloud instance will be provided.
Tip
If prompted by VSCode for the kernel / environment, select the .venv environment at (.venv/bin/python).
From LLM calls to basic agents
- `step1_llm_call.ipynb` - Simple LLM interaction
- `step2_basic_agent.ipynb` - Adding a tool (weather lookup)
- Key concept: Tools extend LLM capabilities
Demo: Ask "What's the weather in San Francisco?" and see the agent use the tool
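The idea behind step 2 can be shown without any framework. Here is a library-free sketch (function and field names are illustrative, not from the repo's code): a tool is just a Python function paired with a description the LLM reads when deciding whether to call it.

```python
# Minimal sketch of a tool an agent can call (names are illustrative;
# the workshop registers tools via Pydantic AI instead of a plain dict).

def get_weather(city: str) -> str:
    """Look up current weather for a city (stubbed data for this sketch)."""
    fake_data = {"San Francisco": "62°F, foggy", "Tokyo": "75°F, clear"}
    return fake_data.get(city, "No data for this city")

# A tool definition pairs the function with a description the model uses
# to decide whether the tool is relevant to the user's question.
weather_tool = {
    "name": "get_weather",
    "description": "Returns the current weather for a given city.",
    "fn": get_weather,
}

print(weather_tool["fn"]("San Francisco"))  # 62°F, foggy
```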
Agentic Systems: Knowing When (and When NOT) to Use Tools
- `step3_tool_choice.ipynb` - Agent with multiple tools
- Demo: Three prompts showing selective tool use
- Weather question → Uses weather tool only
- News question → Uses news tool only
- Geography question → Uses NO tools (LLM knowledge)
Key insight: Good agents use tools when needed, not reflexively
Search the Weaviate docs
- `step4_weaviate_tools.py` - Integrate Weaviate tools into an agent
- Agent chooses to use the tools when needed
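A docs-search tool along these lines might wrap a Weaviate query. This is a sketch, not the repo's implementation: the collection name `Docs`, the property names, and the environment variable names are assumptions, and the query requires a live cluster plus a Cohere key for vectorization.

```python
# Sketch of a Weaviate-backed docs-search tool (collection, property, and
# env-var names are assumptions, not taken from the workshop code).

def format_hits(hits: list[dict]) -> str:
    """Turn retrieved objects into a context string for the LLM."""
    return "\n\n".join(
        f"[{i}] {h['title']}\n{h['body']}" for i, h in enumerate(hits, 1)
    )

def search_docs(query: str, limit: int = 3) -> str:
    """Semantic search over a 'Docs' collection; needs a live Weaviate cluster."""
    import os
    import weaviate
    from weaviate.classes.init import Auth

    client = weaviate.connect_to_weaviate_cloud(
        cluster_url=os.environ["WEAVIATE_URL"],
        auth_credentials=Auth.api_key(os.environ["WEAVIATE_API_KEY"]),
        headers={"X-Cohere-Api-Key": os.environ["COHERE_API_KEY"]},  # for near_text
    )
    try:
        docs = client.collections.get("Docs")
        res = docs.query.near_text(query=query, limit=limit)
        hits = [
            {"title": o.properties.get("title", ""), "body": o.properties.get("body", "")}
            for o in res.objects
        ]
        return format_hits(hits)
    finally:
        client.close()
```

The agent would then register `search_docs` as a tool, calling it only when a question actually needs documentation lookup.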
Chatbot
- `step5_final_chatbot.py` - Putting it all together
- Agent that can:
- Answer questions from Weaviate documentation
- Make intelligent decisions about when to escalate further (e.g. contact human support)
Demo: Run the complete chatbot end-to-end
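The escalation decision can be modeled as structured output. A minimal sketch, assuming nothing about the repo's actual schema (the class and field names here are invented for illustration):

```python
# Toy sketch of the escalation decision. In a real agent this shape would
# come back as structured output; the heuristic below is purely illustrative.
from dataclasses import dataclass

@dataclass
class ChatReply:
    answer: str
    escalate: bool      # True -> hand off to human support
    reason: str = ""

def decide(found_docs: bool, user_is_frustrated: bool) -> ChatReply:
    if not found_docs or user_is_frustrated:
        return ChatReply("Let me connect you with support.", True, "low confidence")
    return ChatReply("Here is what the docs say…", False)

assert decide(found_docs=True, user_is_frustrated=False).escalate is False
assert decide(found_docs=False, user_is_frustrated=False).escalate is True
```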
Next Steps & Resources
- Extending the agent (more tools, MCPs, streaming responses)
- Weaviate Query Agent (https://docs.weaviate.io/agents/query)
```
weaviate-pydantic-ai-workshop/
├── .devcontainer/
│   ├── devcontainer.json       # GitHub Codespaces configuration
│   └── setup.sh                # Automatic setup script
├── step1_llm_call.ipynb        # Basic LLM interaction
├── step2_basic_agent.ipynb     # Agent with tool
├── step3_tool_choice.ipynb     # Demonstrate agent tool choice
├── step4_weaviate_tools.py     # Show how to integrate Weaviate tools into an agent
├── step5_final_chatbot.py      # Complete system
├── tools.py                    # Tools used in the workshop
├── setup_check.py              # Verify your environment setup
├── .env.example                # Template for API keys
├── pyproject.toml              # Python dependencies
└── README.md
```
An agent is an LLM that can:
- Reason about a problem
- Decide which tools (if any) to use
- Execute actions based on its reasoning
- Iterate until the task is complete
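The four bullets above form a loop. Here is a library-free sketch of it (all names are illustrative; frameworks like Pydantic AI run this loop for you):

```python
# Library-free sketch of the reason -> act -> iterate loop.

def run_agent(task: str, tools: dict, llm_decide, max_steps: int = 5):
    """llm_decide(task, history) returns ('tool', name, args) or ('final', answer)."""
    history = []
    for _ in range(max_steps):
        kind, *rest = llm_decide(task, history)
        if kind == "final":
            return rest[0]           # task complete
        name, args = rest
        history.append((name, tools[name](**args)))  # execute the chosen tool
    return "Gave up after max_steps"

# Stub 'LLM': calls the calculator once, then answers from the result.
def fake_llm(task, history):
    if not history:
        return ("tool", "add", {"a": 2, "b": 3})
    return ("final", f"The sum is {history[-1][1]}")

print(run_agent("What is 2+3?", {"add": lambda a, b: a + b}, fake_llm))  # The sum is 5
```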
| Workflow | Agent |
|---|---|
| Fixed sequence of steps | Dynamic decision-making |
| Always executes all tools | Uses tools only when needed |
| Predictable, rigid | Adaptive, flexible |
| Good for: Known processes | Good for: Varied user needs / complex tasks |
Use workflows when:
- Steps are always required
- Compliance/audit requirements
- Predictable inputs and outputs
- Example: Data processing pipelines
Use agents when:
- User intent varies widely
- Tools should be used conditionally
- Natural language interaction
- Example: Customer support, research assistants
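The contrast in the table and lists above can be sketched in a few lines (a toy example with made-up steps, not production code): a workflow always runs every step in order, while an agent runs steps conditionally.

```python
# Toy contrast: fixed pipeline vs. conditional step selection.

def workflow(doc: str) -> str:
    # Workflow: every step always runs, in a fixed order.
    doc = doc.strip()
    doc = doc.lower()
    return doc.replace("  ", " ")

def agent_style(doc: str, needs: set[str]) -> str:
    # Agent-style: steps run only when the decision (here hard-coded,
    # in a real agent made by the LLM) says they are needed.
    if "strip" in needs:
        doc = doc.strip()
    if "lower" in needs:
        doc = doc.lower()
    return doc

assert workflow("  Hello  World  ") == "hello world"
assert agent_style("  Hello  ", {"strip"}) == "Hello"
```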
License: MIT
Questions during the workshop? Drop them in chat - we'll address them as we go or in the Q&A at the end.