This repository demonstrates how to build an A2A Protocol-compatible application using LangGraph with Model Context Protocol (MCP) capabilities.
```mermaid
graph TD
    A2AClient[A2A Client] -->|A2A Protocol| A2AAdapter[LangGraph A2A Adapter]
    A2AAdapter -->|LangGraph Server API| Graph[LangGraph Agent Graph]
    Graph -->|MCP| Tavily[Tavily Search]
    subgraph "This Repository"
        Graph
        Tavily
    end
    style A2AClient fill:#e1e8ed,stroke:#333,stroke-width:1.5px
    style A2AAdapter fill:#9aadc2,stroke:#333,stroke-width:1.5px
    style Graph fill:#c2dcf2,stroke:#333,stroke-width:1.5px
    style Tavily fill:#d3e0ea,stroke:#333,stroke-width:1.5px
```
LangGraph is a library for building stateful, multi-actor applications with LLMs. This example shows how to create a LangGraph application that is compatible with the A2A (Agent-to-Agent) Protocol, enabling it to communicate with any A2A-compatible client.
Key features:
- A2A Protocol Support: Seamless integration with A2A-compatible clients through the adapter
- Model Context Protocol (MCP): Structured tool access between AI systems and external services, via the LangChain MCP Adapters library
- Stateful Conversation: Built-in support for persistent state, checkpoints, and multi-step interactions
- Human-in-the-Loop: Capability for both autonomous operation and human collaboration
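The "stateful conversation" feature is the core idea: state is checkpointed per conversation thread, so a multi-step interaction can resume where it left off. The sketch below illustrates the concept in plain Python — it is not the LangGraph API (LangGraph ships real checkpointer classes for this), and all names are illustrative:

```python
# Conceptual sketch (plain Python, NOT the LangGraph API): how a runtime can
# persist per-thread state so multi-step conversations survive across calls.

class InMemoryCheckpointer:
    """Stores the latest state snapshot for each conversation thread."""

    def __init__(self):
        self._store = {}

    def save(self, thread_id, state):
        self._store[thread_id] = dict(state)  # snapshot, not a live reference

    def load(self, thread_id):
        return dict(self._store.get(thread_id, {"messages": []}))


def run_turn(checkpointer, thread_id, user_message):
    """One 'graph invocation': load state, append a turn, save a checkpoint."""
    state = checkpointer.load(thread_id)
    state["messages"].append({"role": "user", "content": user_message})
    # A real agent node would call an LLM (and MCP tools) here; we just echo.
    state["messages"].append({"role": "assistant", "content": f"You said: {user_message}"})
    checkpointer.save(thread_id, state)
    return state


cp = InMemoryCheckpointer()
run_turn(cp, "thread-1", "Hello")
state = run_turn(cp, "thread-1", "What's the weather?")
print(len(state["messages"]))  # 4: two turns, each with a user + assistant message
```

In the real application, LangGraph's checkpointing plays this role, and the A2A adapter maps client tasks onto conversation threads.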
You'll need to set up two components:
- This LangGraph application - The actual agent implementation
- LangGraph A2A Adapter - Translates between A2A protocol and LangGraph API (GitHub repo)
1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd langgraph-a2a-mcp-example
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r my_agent/requirements.txt
   ```

4. Create a `.env` file with your API keys:

   ```bash
   cp .env.example .env
   ```

   Then add your API keys for Anthropic, Tavily, and OpenAI.
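The variable names below are the conventional ones read by the Anthropic, Tavily, and OpenAI SDKs; confirm the exact names against `.env.example` in this repo:

```bash
# .env — illustrative values; check .env.example for the exact names used here
ANTHROPIC_API_KEY=sk-ant-...
TAVILY_API_KEY=tvly-...
OPENAI_API_KEY=sk-...
```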
1. Clone the adapter repository:

   ```bash
   git clone https://github.com/n-sviridenko/langgraph-a2a-adapter.git
   cd langgraph-a2a-adapter
   ```

2. Follow the installation instructions in the adapter's README.

3. Create a `.env` file with the following configuration:

   ```bash
   # LangGraph Connection
   LANGGRAPH_API_URL=http://localhost:2024

   # A2A Server Configuration
   A2A_PUBLIC_BASE_URL=http://localhost:8000
   A2A_PORT=8000

   # Agent Card Configuration
   AGENT_NAME="Weather Assistant"
   AGENT_DESCRIPTION="An AI assistant that provides weather information, forecasts, and related climate data."
   AGENT_VERSION=1.0.0
   AGENT_SKILLS='[{"id":"weather_info","name":"Weather Information","description":"Get current weather conditions for any location","examples":["What\'s the weather like in New York?","Is it raining in London right now?"]},{"id":"weather_forecast","name":"Weather Forecast","description":"Get weather forecasts for upcoming days","examples":["What\'s the forecast for Tokyo this weekend?","Will it snow in Chicago next week?"]}]'
   ```
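`AGENT_SKILLS` must be a single line of valid JSON; a stray quote silently breaks the agent card the adapter serves. A quick stdlib sanity check (the value below is a trimmed version of the skills example above — nothing here is adapter-specific):

```python
import json
import os

# A trimmed version of the AGENT_SKILLS value from the config above.
os.environ["AGENT_SKILLS"] = (
    '[{"id":"weather_info","name":"Weather Information",'
    '"description":"Get current weather conditions for any location",'
    '"examples":["What\'s the weather like in New York?"]}]'
)

# json.loads raises ValueError on any malformed JSON, which is the point:
# fail loudly at startup instead of serving a broken agent card.
skills = json.loads(os.environ["AGENT_SKILLS"])
for skill in skills:
    # Every skill needs at least an id and a name to appear on the card.
    assert {"id", "name"} <= skill.keys(), f"incomplete skill: {skill}"

print([s["id"] for s in skills])  # ['weather_info']
```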
1. Start the LangGraph application:

   ```bash
   langgraph dev
   ```

2. In a separate terminal, start the A2A adapter:

   ```bash
   cd langgraph-a2a-adapter
   python main.py
   ```

3. Connect any A2A-compatible client to the adapter at `http://localhost:8000`. You can use the Google A2A Demo Web App for testing.
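Under the hood, A2A clients talk JSON-RPC 2.0 over HTTP. The sketch below builds a message payload in the shape used by recent A2A spec revisions (`message/send`); older revisions used `tasks/send`, so check which version the adapter implements before relying on these exact names:

```python
import uuid

# Hedged sketch: a JSON-RPC 2.0 request roughly matching recent A2A spec
# revisions. Method names and message shape vary by spec version — verify
# against the adapter's README before using this for real.
def build_send_request(text: str) -> dict:
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),          # request correlation id
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }

req = build_send_request("What's the weather like in New York?")
print(req["method"])  # message/send
# An HTTP client would POST this JSON body to http://localhost:8000.
```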
The A2A adapter provides:
- Agent discovery through standard A2A agent cards
- Message exchange with assistants
- Task management
- Streaming responses
- Push notifications for task updates
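Agent discovery works by fetching the agent card, conventionally served at `/.well-known/agent.json`. Built from the env vars above, the card would look roughly like this — field names follow the A2A spec but vary between revisions, so verify against what the adapter actually serves:

```json
{
  "name": "Weather Assistant",
  "description": "An AI assistant that provides weather information, forecasts, and related climate data.",
  "version": "1.0.0",
  "url": "http://localhost:8000",
  "capabilities": { "streaming": true, "pushNotifications": true },
  "skills": [
    { "id": "weather_info", "name": "Weather Information", "description": "Get current weather conditions for any location" },
    { "id": "weather_forecast", "name": "Weather Forecast", "description": "Get weather forecasts for upcoming days" }
  ]
}
```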
To deploy this agent to LangGraph Cloud, first fork this repo, then follow the LangGraph Cloud deployment instructions.
For the A2A Adapter, see the deployment instructions in its repository.
