
Commit 4860b3b (parent: 3153af7)

📝 docs(readme): update documentation for LangGraph v0.6.6 integration and model configuration

- Clarify seamless integration with LangGraph Studio and update README badges
- Add LangGraph v0.6.6 features highlighting context-driven configuration API changes
- Provide detailed model configuration methods: runtime context, env vars, Studio assistant
- Expand API key setup instructions for OpenAI, Anthropic, Qwen, and OpenAI-compatible providers
- Introduce sample LangSmith execution traces with DeepWiki and web search examples
- Update MCP server examples to include Context7 documentation tools with usage tips
- Improve the Chinese README with corresponding updates and new configuration guidance
- Refine project descriptions and badge links for both English and Chinese docs

File tree: 3 files changed, +229 −174 lines


CLAUDE.md

Lines changed: 1 addition, 1 deletion

@@ -82,7 +82,7 @@ make dev_ui # Start LangGraph development server with UI

 ## LangGraph Studio Integration

-This project is designed for LangGraph Studio. The `langgraph.json` config file defines:
+This project works seamlessly with LangGraph Studio. The `langgraph.json` config file defines:
 - Graph entry point: `./src/react_agent/graph.py:graph`
 - Environment file: `.env`
 - Dependencies: current directory (`.`)

README.md

Lines changed: 114 additions, 91 deletions

@@ -1,14 +1,14 @@
 # LangGraph ReAct Agent Template

 [![Version](https://img.shields.io/badge/version-v0.1.0-blue.svg)](https://github.com/webup/langgraph-up-react)
+[![LangGraph](https://img.shields.io/badge/LangGraph-v0.6.6-blue.svg)](https://github.com/langchain-ai/langgraph)
 [![Build](https://github.com/webup/langgraph-up-react/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/webup/langgraph-up-react/actions/workflows/unit-tests.yml)
 [![License](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
-[![README EN](https://img.shields.io/badge/README-English-blue.svg)](./README.md)
 [![README CN](https://img.shields.io/badge/README-中文-red.svg)](./README_CN.md)
 [![DeepWiki](https://img.shields.io/badge/Powered_by-DeepWiki-blue.svg)](https://deepwiki.com/webup/langgraph-up-react)
 [![Twitter](https://img.shields.io/twitter/follow/zhanghaili0610?style=social)](https://twitter.com/zhanghaili0610)

-This template showcases a [ReAct agent](https://arxiv.org/abs/2210.03629) implemented using [LangGraph](https://github.com/langchain-ai/langgraph), designed for [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio). ReAct agents are uncomplicated, prototypical agents that can be flexibly extended to many tools.
+This template showcases a [ReAct agent](https://arxiv.org/abs/2210.03629) implemented using [LangGraph](https://github.com/langchain-ai/langgraph), and works seamlessly with [LangGraph Studio](https://docs.langchain.com/langgraph-platform/quick-start-studio#use-langgraph-studio). ReAct agents are uncomplicated, prototypical agents that can be flexibly extended to many tools.

 ![Graph view in LangGraph studio UI](./static/studio_ui.png)

@@ -18,28 +18,30 @@ The core logic, defined in `src/react_agent/graph.py`, demonstrates a flexible R

 ## Features

-### Dynamic Tool Loading
-- **Web Search**: Built-in Tavily search integration
-- **Documentation Tools**: DeepWiki MCP integration for GitHub repository documentation
-- **Extensible**: Easy to add custom tools via `src/common/tools.py`
-
 ### Multi-Provider Model Support
+- **Qwen Models**: Complete Qwen series support via the `langchain-qwq` package, including Qwen-Plus, Qwen-Turbo, QwQ-32B, and QvQ-72B
 - **OpenAI**: GPT-4o, GPT-4o-mini, etc.
 - **OpenAI-Compatible**: Any provider supporting the OpenAI API format via a custom API key and base URL
 - **Anthropic**: Claude 4 Sonnet, Claude 3.5 Haiku, etc.
-- **Qwen**: Qwen-Plus, Qwen-Turbo, QwQ-32B, QvQ-72B with regional API support

-### MCP Integration
-- **Model Context Protocol**: Dynamic external tool loading at runtime
-- **DeepWiki MCP Server**: Repository documentation access and Q&A capabilities
-- **Caching**: Optimized performance with client and tools caching
-- **Configurable**: Enable via environment variables or context parameters
+### Agent Tool Integration Ecosystem
+- **Model Context Protocol (MCP)**: Dynamic external tool loading at runtime
+- **DeepWiki MCP Server**: Optional MCP tools for GitHub repository documentation access and Q&A
+- **Web Search**: Built-in traditional LangChain tools (Tavily) for internet information retrieval
+
+### LangGraph v0.6 Features
+
+> [!NOTE]
+> **New in LangGraph v0.6**: [LangGraph Context](https://docs.langchain.com/oss/python/context#context-overview) replaces the traditional `config['configurable']` pattern. Runtime context is now passed to the `context` argument of `invoke`/`stream`, providing a cleaner and more intuitive way to configure your agents.

-### Comprehensive Testing
-- **70+ Test Cases**: Unit, integration, and end-to-end testing
-- **MCP Integration Coverage**: Full testing of DeepWiki tool loading and execution
-- **ReAct Loop Validation**: Tests verify proper tool-model interactions
-- **Mock Support**: Reliable testing without external API calls
+- **Context-Driven Configuration**: Runtime context is passed via the `context` parameter instead of `config['configurable']`
+- **Simplified API**: Cleaner interface for passing runtime configuration to your agents
+- **Backward Compatibility**: Gradual migration path from the old configuration pattern
+
+### LangGraph Platform Development Support
+- **Local Development Server**: Complete LangGraph Platform development environment
+- **70+ Test Cases**: Unit, integration, and end-to-end coverage, including full DeepWiki tool loading and execution testing
+- **ReAct Loop Validation**: Ensures proper tool-model interactions
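The context-driven configuration highlighted above can be sketched with a plain dataclass. This is an illustrative sketch, not the template's actual code: the real `Context` lives in `src/common/context.py`, and the field names here (`model`, `enable_deepwiki`, the `MODEL` env fallback) are assumptions based on the documented defaults:

```python
import os
from dataclasses import dataclass, field


@dataclass
class Context:
    """Illustrative runtime context, mirroring the template's pattern.

    Field names are assumptions for this sketch; the real definition
    lives in src/common/context.py.
    """

    # Default model, overridable via the MODEL environment variable.
    model: str = field(
        default_factory=lambda: os.environ.get("MODEL", "qwen:qwen-turbo")
    )
    # Optional flag gating DeepWiki MCP tools.
    enable_deepwiki: bool = False


# v0.6-style code passes this object directly to the graph, e.g.
# graph.ainvoke(inputs, context=ctx), instead of config={"configurable": ...}.
ctx = Context(model="openai:gpt-4o-mini", enable_deepwiki=True)
print(ctx.model)  # openai:gpt-4o-mini
```

Passing `context=Context(...)` to `invoke`/`stream` replaces the old `config={'configurable': {...}}` plumbing described in the note.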

 ## What it Does

@@ -53,6 +55,13 @@ The ReAct agent:

 The agent comes with web search capabilities and optional DeepWiki MCP documentation tools, but can be easily extended with custom tools to suit various use cases.

+### Sample Execution Traces
+
+See these LangSmith traces to understand how the agent works in practice:
+
+- **[DeepWiki Documentation Query](https://smith.langchain.com/public/d0594549-7363-46a7-b1a2-d85b55aaa2bd/r)** - Shows the agent using DeepWiki MCP tools to query GitHub repository documentation
+- **[Web Search Query](https://smith.langchain.com/public/6ce92fd2-c0e4-409b-9ce2-02499ae16800/r)** - Demonstrates the Tavily web search integration and reasoning loop
+
 ## Getting Started

 ### Setup with uv (Recommended)
@@ -88,75 +97,61 @@ cp .env.example .env

 # Required: Web search functionality
 TAVILY_API_KEY=your-tavily-api-key

-# Model providers (choose at least one)
+# Required: If using Qwen models (default)
+DASHSCOPE_API_KEY=your-dashscope-api-key
+
+# Optional: OpenAI model service platform keys
 OPENAI_API_KEY=your-openai-api-key
+# Optional: If using OpenAI-compatible service platforms
+OPENAI_API_BASE=your-openai-base-url
+
+# Optional: If using Anthropic models
 ANTHROPIC_API_KEY=your-anthropic-api-key
-DASHSCOPE_API_KEY=your-dashscope-api-key # For Qwen models

-# Optional: Regional API support for Qwen models
-REGION=international # or 'prc' for China mainland
+# Optional: Regional API support for Qwen models
+REGION=international # or 'prc' for China mainland (default)

-# Optional: Enable documentation tools
+# Optional: Always enable DeepWiki documentation tools
 ENABLE_DEEPWIKI=true
 ```

 The primary [search tool](./src/common/tools.py) uses [Tavily](https://tavily.com/). Create an API key [here](https://app.tavily.com/sign-in).
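Before starting the dev server, a few lines of Python can sanity-check which of the keys above are actually set. This is a sketch using only the variable names from the `.env` example; the provider grouping is an assumption for illustration:

```python
import os

# Keys from the .env example above; TAVILY_API_KEY is required,
# the rest depend on which model provider you use.
REQUIRED = ["TAVILY_API_KEY"]
PROVIDERS = {
    "Qwen (default)": ["DASHSCOPE_API_KEY"],
    "OpenAI / OpenAI-compatible": ["OPENAI_API_KEY"],
    "Anthropic": ["ANTHROPIC_API_KEY"],
}


def check_env(env=os.environ):
    """Return (missing_required_keys, configured_provider_names)."""
    missing = [k for k in REQUIRED if not env.get(k)]
    configured = [
        name for name, keys in PROVIDERS.items() if all(env.get(k) for k in keys)
    ]
    return missing, configured


if __name__ == "__main__":
    missing, configured = check_env()
    if missing:
        print(f"Missing required keys: {missing}")
    print(f"Configured providers: {configured or 'none'}")
```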

-## Model Setup
-
-The default model configuration is:
+## Model Configuration

-```yaml
-model: qwen:qwen-turbo
-```
+The template uses `qwen:qwen-turbo` as the default model, defined in [`src/common/context.py`](./src/common/context.py). You can configure different models in three ways:

-### OpenAI
+1. **Runtime Context** (recommended for programmatic usage)
+2. **Environment Variables**
+3. **LangGraph Studio Assistant Configuration**

-To use OpenAI's chat models:
+### API Key Setup by Provider

-1. Sign up for an [OpenAI API key](https://platform.openai.com/signup)
-2. Add it to your `.env` file:
+#### OpenAI
 ```bash
-OPENAI_API_KEY=your-api-key
+OPENAI_API_KEY=your-openai-api-key
 ```
+Get your API key: [OpenAI Platform](https://platform.openai.com/api-keys)

-### Anthropic
-
-To use Anthropic's chat models:
-
-1. Sign up for an [Anthropic API key](https://console.anthropic.com/)
-2. Add it to your `.env` file:
+#### Anthropic
 ```bash
-ANTHROPIC_API_KEY=your-api-key
+ANTHROPIC_API_KEY=your-anthropic-api-key
 ```
-3. Update the model in LangGraph Studio to `anthropic:claude-3-5-sonnet-20240620`
-
-### Qwen Models (Default)
+Get your API key: [Anthropic Console](https://console.anthropic.com/)

-For Alibaba's Qwen models (Qwen3, QwQ-32B, etc.):
-
-1. Sign up for a [DashScope API key](https://dashscope.console.aliyun.com/)
-2. Add it to your `.env` file:
+#### Qwen Models (Default)
 ```bash
-DASHSCOPE_API_KEY=your-api-key
+DASHSCOPE_API_KEY=your-dashscope-api-key
 REGION=international # or 'prc' for China mainland
 ```
-3. Update the model in LangGraph Studio to `qwen:qwen3-32b` or `qwen:qwen-plus`
-
-### OpenAI-Compatible Providers
-
-For any OpenAI-compatible API (SiliconFlow, Together AI, Groq, etc.):
+Get your API key: [DashScope Console](https://dashscope.console.aliyun.com/)

-1. Get your API key from the provider
-2. Add to your `.env` file:
+#### OpenAI-Compatible Providers
 ```bash
-# Example for custom OpenAI-compatible provider
 OPENAI_API_KEY=your-provider-api-key
 OPENAI_API_BASE=https://your-provider-api-base-url/v1
 ```
-3. Update the model in LangGraph Studio to `openai:provider-model-name`
-
-This flexible architecture allows you to use any OpenAI-compatible API by simply providing the API key and base URL.
+Supports SiliconFlow, Together AI, Groq, and other OpenAI-compatible APIs.

 ## How to Customize
@@ -181,50 +176,78 @@ MCP_SERVERS = {
         "url": "https://mcp.deepwiki.com/mcp",
         "transport": "streamable_http",
     },
-    "your_mcp_server": {  # Add your MCP server
-        "url": "https://your-mcp-server.com/mcp",
-        "transport": "streamable_http",
-    }
+    # Example: Context7 for library documentation
+    "context7": {
+        "url": "https://mcp.context7.com/sse",
+        "transport": "sse",
+    },
 }
 ```

 2. **Add Server Function**:
 ```python
-async def get_your_mcp_tools() -> List[Callable[..., Any]]:
-    """Get tools from your MCP server."""
-    return await get_mcp_tools("your_mcp_server")
+async def get_context7_tools() -> List[Callable[..., Any]]:
+    """Get Context7 documentation tools."""
+    return await get_mcp_tools("context7")
 ```

-3. **Enable in Context** - Add context flag and load tools in `get_tools()` function.
+3. **Enable in Context** - Add a context flag and load the tools in the `get_tools()` function:
+```python
+# In src/common/tools.py
+if context.enable_context7:
+    tools.extend(await get_context7_tools())
+```

-### Configure Models
-Our key extended method `load_chat_model` in [`src/common/utils.py`](./src/common/utils.py) uses LangChain's [`init_chat_model`](https://python.langchain.com/api_reference/langchain/chat_models/langchain.chat_models.base.init_chat_model.html#init-chat-model) as the underlying utility.
+> [!TIP]
+> **Context7 Example**: The MCP configuration already includes a commented Context7 server setup. Context7 provides up-to-date library documentation and examples - simply uncomment the configuration and add the context flag to enable it.
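The three steps above follow one pattern: each MCP server gets a loader function, and `get_tools()` assembles the final tool list from context flags. A simplified synchronous sketch (the real implementation in `src/common/tools.py` is async and returns real tool callables; the flag and loader names here are illustrative):

```python
from typing import Any, Callable, Dict, List


# Stand-ins for the real loaders; each would call get_mcp_tools(<server name>).
def load_deepwiki_tools() -> List[Callable[..., Any]]:
    return [lambda q: f"deepwiki:{q}"]


def load_context7_tools() -> List[Callable[..., Any]]:
    return [lambda q: f"context7:{q}"]


def get_tools(context: Dict[str, bool]) -> List[Callable[..., Any]]:
    """Assemble the tool list based on context flags (sketch)."""
    # Built-in tools (e.g. Tavily web search) would be added here first.
    tools: List[Callable[..., Any]] = []
    if context.get("enable_deepwiki"):
        tools.extend(load_deepwiki_tools())
    if context.get("enable_context7"):
        tools.extend(load_context7_tools())
    return tools
```

Adding a new MCP server then only requires a new loader plus one more flag check in `get_tools()`.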
-**Model String Format**: `provider:model-name` (follows LangChain's naming convention)
+### Model Configuration Methods

-**Examples**:
-```python
-# OpenAI models
-model = "openai:gpt-4o-mini"
-model = "openai:gpt-4o"
-
-# Qwen models (with regional support)
-model = "qwen:qwen-turbo"  # Default model
-model = "qwen:qwen-plus"  # Balanced performance
-model = "qwen:qwq-32b-preview"  # Reasoning model
-model = "qwen:qvq-72b-preview"  # Multimodal reasoning
-
-# Anthropic models
-model = "anthropic:claude-4-sonnet"
-model = "anthropic:claude-3.5-haiku"
-```
+#### 1. Runtime Context (Recommended)
+
+Use the new LangGraph v0.6 context parameter to configure models at runtime:
+
+```python
+from common.context import Context
+from react_agent import graph
+
+# Configure the model via context
+result = await graph.ainvoke(
+    {"messages": [("user", "Your question here")]},
+    context=Context(model="openai:gpt-4o-mini")
+)
+```

-**Configuration**:
-```bash
-# Set via environment variable
-MODEL=qwen:qwen-turbo
-
-# Or in LangGraph Studio configurable settings
-```
+#### 2. Environment Variables
+
+Set the `MODEL` environment variable in your `.env` file:
+
+```bash
+MODEL=anthropic:claude-3.5-haiku
+```
+
+#### 3. LangGraph Studio Assistant Configuration
+
+In LangGraph Studio, configure models through [Assistant management](https://docs.langchain.com/langgraph-platform/configuration-cloud#manage-assistants). Create or update assistants with different model configurations for easy switching between setups.
+
+### Supported Model Formats
+
+**Model String Format**: `provider:model-name` (follows LangChain's [`init_chat_model`](https://python.langchain.com/api_reference/langchain/chat_models/langchain.chat_models.base.init_chat_model.html#init-chat-model) naming convention)
+
+```python
+# OpenAI models
+"openai:gpt-4o-mini"
+"openai:gpt-4o"
+
+# Qwen models (with regional support)
+"qwen:qwen-turbo"  # Default model
+"qwen:qwen-plus"  # Balanced performance
+"qwen:qwq-32b-preview"  # Reasoning model
+"qwen:qvq-72b-preview"  # Multimodal reasoning
+
+# Anthropic models
+"anthropic:claude-4-sonnet"
+"anthropic:claude-3.5-haiku"
+```
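These `provider:model-name` strings split on the first colon to select a provider. A minimal sketch of the convention (illustrative only; the template's `load_chat_model` delegates the real routing to LangChain's `init_chat_model`, and the fallback-to-default behavior here is an assumption):

```python
def parse_model_string(model: str, default_provider: str = "qwen") -> tuple:
    """Split 'provider:model-name' on the first colon (sketch).

    When no provider prefix is given, fall back to a default provider,
    which is an assumption for this sketch.
    """
    provider, sep, name = model.partition(":")
    if not sep:  # no colon: treat the whole string as the model name
        return (default_provider, model)
    return (provider, name)


print(parse_model_string("openai:gpt-4o-mini"))  # ('openai', 'gpt-4o-mini')
```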

 ### Customize Prompts
@@ -246,8 +269,8 @@ Runtime configuration is managed in [`src/common/context.py`](./src/common/conte

 ### Development Server
 ```bash
-make dev     # Start LangGraph development server
-make dev_ui  # Start with browser UI
+make dev     # Start LangGraph development server (uv run langgraph dev --no-browser)
+make dev_ui  # Start with the LangGraph Studio Web UI in the browser
 ```

 ### Testing
@@ -279,7 +302,7 @@ The template uses a modular architecture:
 - **`src/react_agent/`**: Core agent graph and state management
 - **`src/common/`**: Shared components (context, models, tools, prompts, MCP integration)
 - **`tests/`**: Comprehensive test suite with fixtures and MCP integration coverage
-- **`langgraph.json`**: LangGraph Studio configuration
+- **`langgraph.json`**: Basic LangGraph agent configuration settings

 Key components:
 - **`src/common/mcp.py`**: MCP client management for external documentation sources
