My incremental journey learning to create agentic workflows with LangChain, LangGraph, and FastMCP.
- LangChain: LangChain provides the building blocks for large language model (LLM) applications. It's a library of modular components — prompt templates, output parsers, memory systems, and tool integrations — that can be composed together.
- LangGraph: LangGraph is a framework built on top of LangChain for creating stateful, agentic workflows. Workflows are defined as a graph of nodes and edges: nodes represent steps or agents, and edges represent the flow between them. LangGraph is useful for building complex LLM applications that need to maintain state, handle cycles, support human-in-the-loop interactions, or coordinate multiple agents working together.
- FastMCP: The Model Context Protocol (MCP) is an open standard for connecting LLMs to external data sources, tools, and services through a unified interface. FastMCP is a high-level Python framework that simplifies MCP server creation with decorator-based syntax.
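To make the decorator idea concrete, here is a small stand-in sketch of decorator-based tool registration in plain Python. It mimics the pattern FastMCP uses (a server object whose `tool` decorator registers functions) but it is *not* FastMCP's actual API — all names here are illustrative:

```python
from typing import Callable


class ToolRegistry:
    """Toy stand-in for a FastMCP-style server: tools register via a decorator."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.tools: dict[str, Callable] = {}

    def tool(self, fn: Callable) -> Callable:
        # Register the function under its own name, then return it unchanged
        # so it can still be called directly.
        self.tools[fn.__name__] = fn
        return fn


server = ToolRegistry("demo")


@server.tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


print(server.tools["add"](2, 3))  # 5
```

The appeal of the pattern is that defining a tool and exposing it to the server is a single step — FastMCP applies the same idea to MCP servers.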
- Getting started
- Prompts and responses
- Intro to chains and LangChain Expression Language (LCEL)
- A simple LangGraph graph
- Create an agent
- Mock SciFi Writer Agent + LangSmith Studio example
- Locally running (private) agent with Ollama
- Agent tools
- ReAct agent in LangGraph
- Caching tool calls locally
- Handling expected errors during tool calls
- Limiting number of tool calls with recursion limits
- Dynamic tool filtering based on context
- SQL toolkit for agent
- Polars DataFrame toolkit for agent
- Short-term memory for chat conversations & agents
- ReAct agent with memory in LangGraph
- Agent context and state
- Managing long conversations
Using uv for Python environment management:

```shell
# Clone repository
git clone git@github.com:libertininick/chain-reaction.git
cd chain-reaction

# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install dependencies
uv sync

# Configure API keys
cp template.env .env
# Edit .env and add your API keys

# Set up pre-commit hooks
uv run pre-commit install
```

This project requires an Anthropic or OpenAI API key:
- Get your API key from Anthropic or OpenAI
- Configure your environment: copy `template.env` to `.env` and add your key: `cp template.env .env`
- Edit `.env`: replace `<your api key here>` with your actual API key
The .env file is gitignored to keep your API keys secure.
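The keys in `.env` are plain `KEY="value"` lines. Libraries like python-dotenv load them for you, but as a sketch of what that involves, a minimal stdlib-only loader (the function name and behavior here are illustrative, not part of this project) looks like:

```python
import os
from pathlib import Path


def load_env(path: str = ".env") -> None:
    """Load KEY="value" lines from a dotenv-style file into os.environ."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        # setdefault: an already-exported variable wins over the file
        os.environ.setdefault(key.strip(), value.strip().strip('"'))
```

Real dotenv libraries also handle quoting edge cases and variable interpolation, but the core idea is just this.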
After completing the setup steps, run the getting-started.ipynb notebook to verify:
- Dependencies are installed correctly
- API key(s) are configured properly
- You can successfully generate responses from a chat model
The notebook demonstrates a simple LangChain workflow using ChatAnthropic to generate a poem.
- Create a LangSmith API key and add it to the `.env` file:

  ```shell
  LANGSMITH_API_KEY="<your api key here>"
  ```

- Create an agent file, e.g. `some_agent.py`:

  ```python
  from langchain.agents import create_agent
  from langchain.chat_models import init_chat_model

  # Initialize model
  chat_model = init_chat_model(...)

  # Create an agent
  agent = create_agent(
      model=chat_model,
      ...
  )
  ```

- Configure `langgraph.json`:

  ```json
  {
    "dependencies": ["."],
    "graphs": {
      "<some agent>": "path/to/some_agent.py:agent"
    },
    "env": "./.env"
  }
  ```

- Run LangSmith Studio:

  ```shell
  uv run langgraph dev
  ```
This project uses pre-commit hooks managed via uv to maintain code quality:
- ruff: Linting and formatting
- pydoclint: Docstring validation
- nbstripout: Strip Jupyter notebook outputs
- ty: Type checking
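These hooks would typically be declared in a `.pre-commit-config.yaml`. The sketch below shows the general shape; the repo URLs and `rev` versions are illustrative and may not match this project's actual config:

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.0          # illustrative version
    hooks:
      - id: ruff         # linting
      - id: ruff-format  # formatting
  - repo: https://github.com/jsh9/pydoclint
    rev: 0.5.0           # illustrative version
    hooks:
      - id: pydoclint
  - repo: https://github.com/kynan/nbstripout
    rev: 0.7.1           # illustrative version
    hooks:
      - id: nbstripout
```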
All tools run automatically on commit. To run manually:
```shell
uv run pre-commit run --all-files
```

To upgrade a single package in the lockfile and inspect it:

```shell
uv lock --upgrade-package <package name>
uv pip show <package name>
```

To refresh the whole toolchain and environment:

- Update the `uv` tool
- Upgrade the `Python` version installed by `uv`
- Upgrade all dependencies in the `uv.lock` file
- Sync the virtual environment with the updated dependencies
- Prune the `uv` cache to remove dependencies that are no longer needed

```shell
uv self update \
  && uv python upgrade \
  && uv lock --upgrade \
  && uv sync \
  && uv cache prune
```

To refresh the pre-commit hooks:

```shell
uv run pre-commit install-hooks \
  && uv run pre-commit autoupdate
```

Learning in public, one chain at a time. 🔗