Commit 323b8db

📝 docs(readme): update documentation for LangGraph v0.6.6 integration and model configuration

- Clarify seamless integration with LangGraph Studio and update README badges
- Add LangGraph v0.6.6 features highlighting context-driven configuration API changes
- Provide detailed model configuration methods: runtime context, env vars, Studio assistant
- Expand API key setup instructions for OpenAI, Anthropic, Qwen, and OpenAI-compatible providers
- Introduce sample LangSmith execution traces with DeepWiki and web search examples
- Update MCP server examples to include Context7 documentation tools with usage tips
- Improve Chinese readme with corresponding updates and new configuration guidance
- Refine project descriptions and badge links for both English and Chinese docs

1 parent 3153af7 · commit 323b8db

File tree

3 files changed: +190 −136 lines


CLAUDE.md

Lines changed: 1 addition & 1 deletion

@@ -82,7 +82,7 @@ make dev_ui # Start LangGraph development server with UI
 
 ## LangGraph Studio Integration
 
-This project is designed for LangGraph Studio. The `langgraph.json` config file defines:
+This project works seamlessly with LangGraph Studio. The `langgraph.json` config file defines:
 - Graph entry point: `./src/react_agent/graph.py:graph`
 - Environment file: `.env`
 - Dependencies: current directory (`.`)
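The three `langgraph.json` items listed in this hunk could look roughly like the sketch below. This is a hedged illustration of the file's shape, not a copy from the repo; in particular the `agent` graph key name is an assumption.

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/react_agent/graph.py:graph"
  },
  "env": ".env"
}
```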

README.md

Lines changed: 90 additions & 66 deletions
@@ -1,14 +1,14 @@
 # LangGraph ReAct Agent Template
 
 [![Version](https://img.shields.io/badge/version-v0.1.0-blue.svg)](https://github.com/webup/langgraph-up-react)
+[![LangGraph](https://img.shields.io/badge/LangGraph-v0.6.6-blue.svg)](https://github.com/langchain-ai/langgraph)
 [![Build](https://github.com/webup/langgraph-up-react/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/webup/langgraph-up-react/actions/workflows/unit-tests.yml)
 [![License](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
-[![README EN](https://img.shields.io/badge/README-English-blue.svg)](./README.md)
 [![README CN](https://img.shields.io/badge/README-中文-red.svg)](./README_CN.md)
 [![DeepWiki](https://img.shields.io/badge/Powered_by-DeepWiki-blue.svg)](https://deepwiki.com/webup/langgraph-up-react)
 [![Twitter](https://img.shields.io/twitter/follow/zhanghaili0610?style=social)](https://twitter.com/zhanghaili0610)
 
-This template showcases a [ReAct agent](https://arxiv.org/abs/2210.03629) implemented using [LangGraph](https://github.com/langchain-ai/langgraph), designed for [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio). ReAct agents are uncomplicated, prototypical agents that can be flexibly extended to many tools.
+This template showcases a [ReAct agent](https://arxiv.org/abs/2210.03629) implemented using [LangGraph](https://github.com/langchain-ai/langgraph) that works seamlessly with [LangGraph Studio](https://docs.langchain.com/langgraph-platform/quick-start-studio#use-langgraph-studio). ReAct agents are uncomplicated, prototypical agents that can be flexibly extended to many tools.
 
 ![Graph view in LangGraph studio UI](./static/studio_ui.png)

@@ -35,6 +35,15 @@ The core logic, defined in `src/react_agent/graph.py`, demonstrates a flexible R
 - **Caching**: Optimized performance with client and tools caching
 - **Configurable**: Enable via environment variables or context parameters
 
+### LangGraph v0.6.6 Features
+
+> [!NOTE]
+> **New in LangGraph v0.6**: [LangGraph Context](https://docs.langchain.com/oss/python/context#context-overview) replaces the traditional `config['configurable']` pattern. Runtime context is now passed to the `context` argument of `invoke`/`stream`, providing a cleaner and more intuitive way to configure your agents.
+
+- **Context-Driven Configuration**: Runtime context passed via the `context` parameter instead of `config['configurable']`
+- **Simplified API**: Cleaner interface for passing runtime configuration to your agents
+- **Backward Compatibility**: Gradual migration path from the old configuration pattern
+
 ### Comprehensive Testing
 - **70+ Test Cases**: Unit, integration, and end-to-end testing
 - **MCP Integration Coverage**: Full testing of DeepWiki tool loading and execution
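The configuration shift this hunk describes can be sketched in plain Python. This is a hypothetical illustration of the two call shapes, assuming a minimal `Context` dataclass; the field names are illustrative and not the template's actual `Context` schema in `src/common/context.py`.

```python
from dataclasses import dataclass


# Illustrative stand-in for the template's runtime context dataclass.
@dataclass
class Context:
    model: str = "qwen:qwen-turbo"  # default model named in the README


# Old pattern (pre-v0.6): runtime settings nested inside config["configurable"]
old_style = {"configurable": {"model": "openai:gpt-4o-mini"}}

# New pattern (v0.6+): a typed context object passed to invoke/stream,
# e.g. graph.invoke(inputs, context=Context(model="openai:gpt-4o-mini"))
new_style = Context(model="openai:gpt-4o-mini")

# Both carry the same setting; the context object is flatter and typed.
assert new_style.model == old_style["configurable"]["model"]
```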
@@ -53,6 +62,13 @@ The ReAct agent:
 
 The agent comes with web search capabilities and optional DeepWiki MCP documentation tools, but can be easily extended with custom tools to suit various use cases.
 
+### Sample Execution Traces
+
+See these LangSmith traces to understand how the agent works in practice:
+
+- **[DeepWiki Documentation Query](https://smith.langchain.com/public/d0594549-7363-46a7-b1a2-d85b55aaa2bd/r)** - Shows the agent using DeepWiki MCP tools to query GitHub repository documentation
+- **[Web Search Query](https://smith.langchain.com/public/6ce92fd2-c0e4-409b-9ce2-02499ae16800/r)** - Demonstrates Tavily web search integration and the reasoning loop
+
 ## Getting Started
 
 ### Setup with uv (Recommended)
@@ -102,61 +118,41 @@ ENABLE_DEEPWIKI=true
 
 The primary [search tool](./src/common/tools.py) uses [Tavily](https://tavily.com/). Create an API key [here](https://app.tavily.com/sign-in).
 
-## Model Setup
-
-The default model configuration is:
+## Model Configuration
 
-```yaml
-model: qwen:qwen-turbo
-```
+The template uses `qwen:qwen-turbo` as the default model, defined in [`src/common/context.py`](./src/common/context.py). You can configure different models in three ways:
 
-### OpenAI
+1. **Runtime Context** (recommended for programmatic usage)
+2. **Environment Variables**
+3. **LangGraph Studio Assistant Configuration**
 
-To use OpenAI's chat models:
+### API Key Setup by Provider
 
-1. Sign up for an [OpenAI API key](https://platform.openai.com/signup)
-2. Add it to your `.env` file:
+#### OpenAI
 ```bash
-OPENAI_API_KEY=your-api-key
+OPENAI_API_KEY=your-openai-api-key
 ```
+Get your API key: [OpenAI Platform](https://platform.openai.com/signup)
 
-### Anthropic
-
-To use Anthropic's chat models:
-
-1. Sign up for an [Anthropic API key](https://console.anthropic.com/)
-2. Add it to your `.env` file:
+#### Anthropic
 ```bash
-ANTHROPIC_API_KEY=your-api-key
+ANTHROPIC_API_KEY=your-anthropic-api-key
 ```
-3. Update the model in LangGraph Studio to `anthropic:claude-3-5-sonnet-20240620`
-
-### Qwen Models (Default)
+Get your API key: [Anthropic Console](https://console.anthropic.com/)
 
-For Alibaba's Qwen models (Qwen3, QwQ-32B, etc.):
-
-1. Sign up for a [DashScope API key](https://dashscope.console.aliyun.com/)
-2. Add it to your `.env` file:
+#### Qwen Models (Default)
 ```bash
-DASHSCOPE_API_KEY=your-api-key
+DASHSCOPE_API_KEY=your-dashscope-api-key
 REGION=international # or 'prc' for China mainland
 ```
-3. Update the model in LangGraph Studio to `qwen:qwen3-32b` or `qwen:qwen-plus`
-
-### OpenAI-Compatible Providers
+Get your API key: [DashScope Console](https://dashscope.console.aliyun.com/)
 
-For any OpenAI-compatible API (SiliconFlow, Together AI, Groq, etc.):
-
-1. Get your API key from the provider
-2. Add to your `.env` file:
+#### OpenAI-Compatible Providers
 ```bash
-# Example for custom OpenAI-compatible provider
 OPENAI_API_KEY=your-provider-api-key
 OPENAI_API_BASE=https://your-provider-api-base-url/v1
 ```
-3. Update the model in LangGraph Studio to `openai:provider-model-name`
-
-This flexible architecture allows you to use any OpenAI-compatible API by simply providing the API key and base URL.
+Supports SiliconFlow, Together AI, Groq, and other OpenAI-compatible APIs.
 
 ## How to Customize
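The hunk above moves the default model into `src/common/context.py` and keeps `.env` as one override path. A hedged sketch of how such an environment-variable fallback could resolve; `resolve_model` is a hypothetical helper for illustration, not the template's actual code.

```python
import os

DEFAULT_MODEL = "qwen:qwen-turbo"  # default model stated in the README


def resolve_model() -> str:
    """Return the MODEL env var if set, otherwise the template default."""
    return os.environ.get("MODEL", DEFAULT_MODEL)


# With no MODEL set, the default wins.
os.environ.pop("MODEL", None)
assert resolve_model() == "qwen:qwen-turbo"

# Setting MODEL in the environment (as a .env loader would) overrides it.
os.environ["MODEL"] = "anthropic:claude-3.5-haiku"
assert resolve_model() == "anthropic:claude-3.5-haiku"
```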

@@ -181,50 +177,78 @@ MCP_SERVERS = {
         "url": "https://mcp.deepwiki.com/mcp",
         "transport": "streamable_http",
     },
-    "your_mcp_server": {  # Add your MCP server
-        "url": "https://your-mcp-server.com/mcp",
-        "transport": "streamable_http",
-    }
+    # Example: Context7 for library documentation
+    "context7": {
+        "url": "https://mcp.context7.com/sse",
+        "transport": "sse",
+    },
 }
 ```
 
 2. **Add Server Function**:
 ```python
-async def get_your_mcp_tools() -> List[Callable[..., Any]]:
-    """Get tools from your MCP server."""
-    return await get_mcp_tools("your_mcp_server")
+async def get_context7_tools() -> List[Callable[..., Any]]:
+    """Get Context7 documentation tools."""
+    return await get_mcp_tools("context7")
 ```
 
-3. **Enable in Context** - Add context flag and load tools in `get_tools()` function.
+3. **Enable in Context** - Add a context flag and load the tools in the `get_tools()` function:
+```python
+# In src/common/tools.py
+if context.enable_context7:
+    tools.extend(await get_context7_tools())
+```
+
+> [!TIP]
+> **Context7 Example**: The MCP configuration already includes a commented Context7 server setup. Context7 provides up-to-date library documentation and examples - simply uncomment the configuration and add the context flag to enable it.
+
+### Model Configuration Methods
+
+#### 1. Runtime Context (Recommended)
+
+Use the new LangGraph v0.6.6 context parameter to configure models at runtime:
 
-### Configure Models
-Our key extended method `load_chat_model` in [`src/common/utils.py`](./src/common/utils.py) uses LangChain's [`init_chat_model`](https://python.langchain.com/api_reference/langchain/chat_models/langchain.chat_models.base.init_chat_model.html#init-chat-model) as the underlying utility.
+```python
+from common.context import Context
+from react_agent import graph
+
+# Configure model via context
+result = await graph.ainvoke(
+    {"messages": [("user", "Your question here")]},
+    context=Context(model="openai:gpt-4o-mini")
+)
+```
+
+#### 2. Environment Variables
+
+Set the `MODEL` environment variable in your `.env` file:
+
+```bash
+MODEL=anthropic:claude-3.5-haiku
+```
+
+#### 3. LangGraph Studio Assistant Configuration
+
+In LangGraph Studio, configure models through [Assistant management](https://docs.langchain.com/langgraph-platform/configuration-cloud#manage-assistants). Create or update assistants with different model configurations for easy switching between setups.
+
+### Supported Model Formats
 
 **Model String Format**: `provider:model-name` (follows LangChain's naming convention)
 
-**Examples**:
 ```python
 # OpenAI models
-model = "openai:gpt-4o-mini"
-model = "openai:gpt-4o"
+"openai:gpt-4o-mini"
+"openai:gpt-4o"
 
 # Qwen models (with regional support)
-model = "qwen:qwen-turbo"  # Default model
-model = "qwen:qwen-plus"  # Balanced performance
-model = "qwen:qwq-32b-preview"  # Reasoning model
-model = "qwen:qvq-72b-preview"  # Multimodal reasoning
+"qwen:qwen-turbo"  # Default model
+"qwen:qwen-plus"  # Balanced performance
+"qwen:qwq-32b-preview"  # Reasoning model
+"qwen:qvq-72b-preview"  # Multimodal reasoning
 
 # Anthropic models
-model = "anthropic:claude-4-sonnet"
-model = "anthropic:claude-3.5-haiku"
-```
-
-**Configuration**:
-```bash
-# Set via environment variable
-MODEL=qwen:qwen-turbo
-
-# Or in LangGraph Studio configurable settings
+"anthropic:claude-4-sonnet"
+"anthropic:claude-3.5-haiku"
 ```
 
 ### Customize Prompts