Commit a670160

fix llamaindex readme
1 parent 40eaa26 commit a670160

1 file changed: +68 -86 lines changed

packages/toolbox-langchain/README.md

@@ -1,8 +1,8 @@
 ![MCP Toolbox Logo](https://raw.githubusercontent.com/googleapis/genai-toolbox/main/logo.png)
-# MCP Toolbox LangChain SDK
+# MCP Toolbox LlamaIndex SDK

 This SDK allows you to seamlessly integrate the functionalities of
-[Toolbox](https://github.com/googleapis/genai-toolbox) into your LangChain LLM
+[Toolbox](https://github.com/googleapis/genai-toolbox) into your LlamaIndex LLM
 applications, enabling advanced orchestration and interaction with GenAI models.

 <!-- TOC ignore:true -->
@@ -15,10 +15,7 @@ applications, enabling advanced orchestration and interaction with GenAI models.
 - [Loading Tools](#loading-tools)
   - [Load a toolset](#load-a-toolset)
   - [Load a single tool](#load-a-single-tool)
-- [Use with LangChain](#use-with-langchain)
-- [Use with LangGraph](#use-with-langgraph)
-  - [Represent Tools as Nodes](#represent-tools-as-nodes)
-  - [Connect Tools with LLM](#connect-tools-with-llm)
+- [Use with LlamaIndex](#use-with-llamaindex)
 - [Manual usage](#manual-usage)
 - [Authenticating Tools](#authenticating-tools)
   - [Supported Authentication Mechanisms](#supported-authentication-mechanisms)
@@ -38,41 +35,48 @@ applications, enabling advanced orchestration and interaction with GenAI models.
 ## Installation

 ```bash
-pip install toolbox-langchain
+pip install toolbox-llamaindex
 ```

 ## Quickstart

 Here's a minimal example to get you started using
-[LangGraph](https://langchain-ai.github.io/langgraph/reference/prebuilt/#langgraph.prebuilt.chat_agent_executor.create_react_agent):
+# TODO: add link
+[LlamaIndex]():

 ```py
-from toolbox_langchain import ToolboxClient
-from langchain_google_vertexai import ChatVertexAI
-from langgraph.prebuilt import create_react_agent
+import asyncio

-toolbox = ToolboxClient("http://127.0.0.1:5000")
-tools = toolbox.load_toolset()
+from llama_index.llms.google_genai import GoogleGenAI
+from llama_index.core.agent.workflow import AgentWorkflow
+
+from toolbox_llamaindex import ToolboxClient

-model = ChatVertexAI(model="gemini-1.5-pro-002")
-agent = create_react_agent(model, tools)
+async def run_agent():
+    toolbox = ToolboxClient("http://127.0.0.1:5000")
+    tools = toolbox.load_toolset()

-prompt = "How's the weather today?"
+    vertex_model = GoogleGenAI(
+        model="gemini-1.5-pro",
+        vertexai_config={"project": "project-id", "location": "us-central1"},
+    )
+    agent = AgentWorkflow.from_tools_or_functions(
+        tools,
+        llm=vertex_model,
+        system_prompt="You are a helpful assistant.",
+    )
+    response = await agent.run(user_msg="Get some response from the agent.")
+    print(response)

-for s in agent.stream({"messages": [("user", prompt)]}, stream_mode="values"):
-    message = s["messages"][-1]
-    if isinstance(message, tuple):
-        print(message)
-    else:
-        message.pretty_print()
+asyncio.run(run_agent())
 ```

 ## Usage

 Import and initialize the toolbox client.

 ```py
-from toolbox_langchain import ToolboxClient
+from toolbox_llamaindex import ToolboxClient

 # Replace with your Toolbox service's URL
 toolbox = ToolboxClient("http://127.0.0.1:5000")
@@ -102,85 +106,63 @@ tool = toolbox.load_tool("my-tool")
 Loading individual tools gives you finer-grained control over which tools are
 available to your LLM agent.

-## Use with LangChain
+## Use with LlamaIndex

 LangChain's agents can dynamically choose and execute tools based on the user
 input. Include tools loaded from the Toolbox SDK in the agent's toolkit:

 ```py
-from langchain_google_vertexai import ChatVertexAI
+from llama_index.llms.google_genai import GoogleGenAI
+from llama_index.core.agent.workflow import AgentWorkflow

-model = ChatVertexAI(model="gemini-1.5-pro-002")
+vertex_model = GoogleGenAI(
+    model="gemini-1.5-pro",
+    vertexai_config={"project": "project-id", "location": "us-central1"},
+)

 # Initialize agent with tools
-agent = model.bind_tools(tools)
-
-# Run the agent
-result = agent.invoke("Do something with the tools")
-```
-
-## Use with LangGraph
-
-Integrate the Toolbox SDK with LangGraph to use Toolbox service tools within a
-graph-based workflow. Follow the [official
-guide](https://langchain-ai.github.io/langgraph/) with minimal changes.
-
-### Represent Tools as Nodes
-
-Represent each tool as a LangGraph node, encapsulating the tool's execution within the node's functionality:
-
-```py
-from toolbox_langchain import ToolboxClient
-from langgraph.graph import StateGraph, MessagesState
-from langgraph.prebuilt import ToolNode
-
-# Define the function that calls the model
-def call_model(state: MessagesState):
-    messages = state['messages']
-    response = model.invoke(messages)
-    return {"messages": [response]} # Return a list to add to existing messages
-
-model = ChatVertexAI(model="gemini-1.5-pro-002")
-builder = StateGraph(MessagesState)
-tool_node = ToolNode(tools)
-
-builder.add_node("agent", call_model)
-builder.add_node("tools", tool_node)
+agent = AgentWorkflow.from_tools_or_functions(
+    tools,
+    llm=vertex_model,
+    system_prompt="You are a helpful assistant.",
+)
+
+# Query the agent
+response = await agent.run(user_msg="Get some response from the agent.")
+print(response)
 ```

-### Connect Tools with LLM
+### Maintain state

-Connect tool nodes with LLM nodes. The LLM decides which tool to use based on
-input or context. Tool output can be fed back into the LLM:
+To maintain state for the agent, add context as follows:

 ```py
-from typing import Literal
-from langgraph.graph import END, START
-from langchain_core.messages import HumanMessage
-
-# Define the function that determines whether to continue or not
-def should_continue(state: MessagesState) -> Literal["tools", END]:
-    messages = state['messages']
-    last_message = messages[-1]
-    if last_message.tool_calls:
-        return "tools" # Route to "tools" node if LLM makes a tool call
-    return END # Otherwise, stop
-
-builder.add_edge(START, "agent")
-builder.add_conditional_edges("agent", should_continue)
-builder.add_edge("tools", 'agent')
-
-graph = builder.compile()
-
-graph.invoke({"messages": [HumanMessage(content="Do something with the tools")]})
+from llama_index.core.agent.workflow import AgentWorkflow
+from llama_index.core.workflow import Context
+from llama_index.llms.google_genai import GoogleGenAI
+
+vertex_model = GoogleGenAI(
+    model="gemini-1.5-pro",
+    vertexai_config={"project": "twisha-dev", "location": "us-central1"},
+)
+agent = AgentWorkflow.from_tools_or_functions(
+    tools,
+    llm=vertex_model,
+    system_prompt="You are a helpful assistant",
+)
+
+# Save memory in agent context
+ctx = Context(agent)
+response = await agent.run(user_msg="Give me some response.", ctx=ctx)
+print(response)
 ```

 ## Manual usage

-Execute a tool manually using the `invoke` method:
+Execute a tool manually using the `call` method:

 ```py
-result = tools[0].invoke({"name": "Alice", "age": 30})
+result = tools[0].call({"name": "Alice", "age": 30})
 ```

 This is useful for testing tools or when you need precise control over tool
@@ -250,7 +232,7 @@ auth_tools = toolbox.load_toolset(auth_tokens={"my_auth": get_auth_token})

 ```py
 import asyncio
-from toolbox_langchain import ToolboxClient
+from toolbox_llamaindex import ToolboxClient

 async def get_auth_token():
     # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
@@ -261,7 +243,7 @@ toolbox = ToolboxClient("http://127.0.0.1:5000")
 tool = toolbox.load_tool("my-tool")

 auth_tool = tool.add_auth_token("my_auth", get_auth_token)
-result = auth_tool.invoke({"input": "some input"})
+result = auth_tool.call({"input": "some input"})
 print(result)
 ```

@@ -329,7 +311,7 @@ use the asynchronous interfaces of the `ToolboxClient`.

 ```py
 import asyncio
-from toolbox_langchain import ToolboxClient
+from toolbox_llamaindex import ToolboxClient

 async def main():
     toolbox = ToolboxClient("http://127.0.0.1:5000")