# GenAI Toolbox SDK

This SDK allows you to seamlessly integrate the functionalities of
[Toolbox](https://github.com/googleapis/genai-toolbox) into your LLM
applications, enabling advanced orchestration and interaction with GenAI models.

> [!NOTE]
> Currently the SDK only supports
> [LangChain](https://python.langchain.com/docs/introduction/).

<!-- TOC -->

- [GenAI Toolbox SDK](#genai-toolbox-sdk)
  - [Installation](#installation)
  - [Usage](#usage)
  - [Load a toolset](#load-a-toolset)
  - [Use with LangChain](#use-with-langchain)
  - [Use with LangGraph](#use-with-langgraph)
    - [Represent Tools as Nodes](#represent-tools-as-nodes)
    - [Connect Tools with LLM](#connect-tools-with-llm)
  - [Manual usage](#manual-usage)

<!-- /TOC -->

## Installation

You can install the Toolbox SDK for LangChain using `pip`:

```bash
pip install toolbox-langchain-sdk
```

> [!IMPORTANT]
> This SDK is not yet available on PyPI. For now, install it from
> source by following these [installation instructions](#development) (up to
> step 3).

## Usage

Import and initialize the Toolbox client:

```python
from toolbox_langchain_sdk import ToolboxClient

# Replace with your Toolbox service's URL
toolbox = ToolboxClient("http://localhost:5000")
```

## Load a toolset

You can load toolsets, which are collections of related tools:

```python
# Load all tools
tools = await toolbox.load_toolset()

# Load a specific toolset
tools = await toolbox.load_toolset("my-toolset")
```

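The loaded tools are ordinary LangChain tool objects with a `name` attribute, so picking out a single tool is plain Python. A minimal sketch with stand-in objects (the tool names below are hypothetical; real names come from your Toolbox configuration):

```python
from types import SimpleNamespace

# Stand-ins for loaded tools; real tools expose a .name attribute the same way
tools = [SimpleNamespace(name="search-hotels"), SimpleNamespace(name="book-hotel")]

# Index the tools by name to select a single one
tool_by_name = {t.name: t for t in tools}
booking_tool = tool_by_name["book-hotel"]
print(booking_tool.name)  # book-hotel
```
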
## Use with LangChain

LangChain agents can dynamically choose and execute tools based on user input.
You can include the tools loaded from the Toolbox SDK in the agent's toolkit:

```python
from langchain_google_vertexai import ChatVertexAI
from langchain.agents import initialize_agent

model = ChatVertexAI()

# Initialize the agent with the loaded tools
agent = initialize_agent(tools, model)

# Run the agent
agent.run("Do something with the tools")
```

## Use with LangGraph

The Toolbox SDK integrates seamlessly with LangGraph, enabling the use of
Toolbox service tools within a graph-based workflow. Using this SDK, you can
follow the [official guide](https://langchain-ai.github.io/langgraph/) with
minimal changes.

### Represent Tools as Nodes

Each tool generated by the SDK can be represented as a LangGraph node, whose
functionality encapsulates the execution of the corresponding tool.

```python
from langchain_google_vertexai import ChatVertexAI
from langgraph.graph import StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

# Define the function that calls the model
def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}

# Bind the loaded tools to the model so it can emit tool calls
model = ChatVertexAI().bind_tools(tools)
builder = StateGraph(MessagesState)
tool_node = ToolNode(tools)

builder.add_node("agent", call_model)
builder.add_node("tools", tool_node)
```

### Connect Tools with LLM

Now you can connect the tool nodes with the LLM node. The LLM decides which
tool to use based on the user input or the context of the conversation. The
output of a tool can then be fed back into the LLM for further processing or
decision-making.

```python
from typing import Literal

from langgraph.graph import END, START

# Define the function that determines whether to continue or not
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state["messages"]
    last_message = messages[-1]
    # If the LLM makes a tool call, route to the "tools" node
    if last_message.tool_calls:
        return "tools"
    # Otherwise, stop (reply to the user)
    return END

builder.add_edge(START, "agent")
builder.add_conditional_edges(
    "agent",
    should_continue,
)
builder.add_edge("tools", "agent")

graph = builder.compile()
```

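The routing above depends only on whether the last message carries tool calls. A minimal sketch with stand-in message objects (not the real LangChain classes, and `END`'s string value is assumed here purely for illustration) shows the two branches:

```python
END = "__end__"  # assumed sentinel value, for illustration only

class FakeMessage:
    """Stand-in for a chat message; only the tool_calls attribute matters here."""
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls or []

def should_continue(state) -> str:
    last_message = state["messages"][-1]
    # A pending tool call routes to the tool node
    if last_message.tool_calls:
        return "tools"
    # A plain answer ends the graph
    return END

print(should_continue({"messages": [FakeMessage(tool_calls=[{"name": "my-tool"}])]}))  # tools
print(should_continue({"messages": [FakeMessage()]}))  # __end__
```
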
## Manual usage

You can also execute a tool manually using the `arun` method:

```python
result = await tools[0].arun({"name": "Alice", "age": 30})
```
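
Since `arun` is a coroutine, it must be awaited inside an event loop; from synchronous code you can drive it with `asyncio.run`. A sketch with a stand-in tool (the real objects come from `load_toolset`, and the return value here is invented for illustration):

```python
import asyncio

class FakeTool:
    """Stand-in for a loaded Toolbox tool exposing the same async arun interface."""
    async def arun(self, arguments: dict) -> str:
        # A real tool would call the Toolbox service here
        return f"Hello {arguments['name']}, age {arguments['age']}"

async def main():
    result = await FakeTool().arun({"name": "Alice", "age": 30})
    print(result)  # Hello Alice, age 30

asyncio.run(main())
```
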