# Tools

Rigging supports the concept of tools through two implementations:

- **API tools**: API-level tool definitions that require support from the model provider.
- **Native tools**: Tools that are defined, parsed, and handled internally by Rigging (the original implementation).

In most cases you should opt for API tools, which offer better provider integration and performance.

Regardless of the tool type, the [`ChatPipeline.using()`][rigging.chat.ChatPipeline.using] method should be
used to register tools for use during generation.

=== "API Tools"

    ```py
    from typing import Annotated
    import requests
    import rigging as rg

    def get_weather(city: Annotated[str, "The city name to get weather for"]) -> str:
        "A tool to get the weather for a location"
        try:
            city = city.replace(" ", "+")
            return requests.get(f"http://wttr.in/{city}?format=2").text
        except Exception:
            return "Failed to call the API"

    chat = (
        await
        rg.get_generator("gpt-4o-mini")
        .chat("How is the weather in london?")
        .using(get_weather)
        .run()
    )

    # [user]: How is the weather in london?

    # [assistant]:
    # |- get_weather({"city":"London"})

    # [tool]: 🌦 🌡️+6°C 🌬️↘35km/h

    # [assistant]: The weather in London is currently overcast with light rain ...
    ```

=== "Native Tools"

    ```py
    from typing import Annotated
    import requests
    import rigging as rg

    class WeatherTool(rg.Tool):
        name = "weather"
        description = "A tool to get the weather for a location"

        def get_for_city(self, city: Annotated[str, "The city name to get weather for"]) -> str:
            try:
                city = city.replace(" ", "+")
                return requests.get(f"http://wttr.in/{city}?format=2").text
            except Exception:
                return "Failed to call the API"

    chat = (
        await
        rg.get_generator("gpt-4o-mini")
        .chat("How is the weather in london?")
        .using(WeatherTool())
        .run()
    )
    ```

## API Tools

API tools are defined as standard callables (async supported) and are wrapped in the
[`rg.ApiTool`][rigging.tool.ApiTool] class before being used during generation.

We use Pydantic to introspect the callable and extract schema information from its signature, which brings some great benefits:

1. API-compatible schema information from any function
2. Robust argument validation for incoming inference data
3. Flexible type handling for BaseModels, Fields, TypedDicts, and Dataclasses

Once the tool is converted, its function schema is added to
[`GenerateParams.tools`][rigging.generator.GenerateParams] inside the `ChatPipeline`.

```py
from typing_extensions import TypedDict
from typing import Annotated
from pydantic import Field
import rigging as rg

class Filters(TypedDict):
    city: Annotated[str | None, Field(description="The city to filter by")]
    age: int | None

def lookup_person(name: Annotated[str, "Full name"], filters: Filters) -> str:
    "Search the database for a person"
    ...


tool = rg.ApiTool(lookup_person)

print(tool.name)
# lookup_person

print(tool.description)
# Search the database for a person

print(tool.schema)
# {'$defs': {'Filters': {'properties': ...}
```
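
Because API tools are plain callables, async functions can be registered the same way. The sketch below is only an
illustration: the `search_news` coroutine and its behavior are invented here, while `.using()` and `.run()` come
from the examples above.

```py
import asyncio
from typing import Annotated

import rigging as rg

async def search_news(query: Annotated[str, "Search query"]) -> str:
    "Search recent news articles"
    await asyncio.sleep(0.1)  # stand-in for a real async client call
    return f"No results found for {query!r}"

chat = (
    await
    rg.get_generator("gpt-4o-mini")
    .chat("Any recent news about the Rust language?")
    .using(search_news)
    .run()
)
```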

Internally, we leverage [`ChatPipeline.then()`][rigging.chat.ChatPipeline.then] to handle responses from the model and
attempt to resolve tool calls before starting another generation loop. This means that the point at which you pass the
tool function into your chat pipeline defines its order amongst other callbacks like [`.then()`][rigging.chat.ChatPipeline.then]
and [`.map()`][rigging.chat.ChatPipeline.map].
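
As a rough sketch of that ordering, the example below registers a tool before a callback, so the callback only fires
after any pending tool calls have been resolved. The `log_messages` function, its exact signature, and the `chat.all`
property it prints are assumptions for illustration, and `get_weather` is reused from the tab example above.

```py
import rigging as rg

async def log_messages(chat: rg.Chat) -> None:
    # By the time this runs, the pipeline has already resolved any
    # get_weather calls registered before it.
    print(f"Messages so far: {len(chat.all)}")

chat = (
    await
    rg.get_generator("gpt-4o-mini")
    .chat("How is the weather in london?")
    .using(get_weather)    # registered first, so tool calls resolve first
    .then(log_messages)    # runs after the tool-handling callback
    .run()
)
```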

## Native Tools

Much like models, native tools inherit from a base [`rg.Tool`][rigging.tool.Tool] class. These subclasses are required
to provide at least 1 function along with a name and description property to present to the LLM during generation.

Every function you define and the parameters within are required to carry both type hints and annotations that
describe their purpose to the model.

```py
from typing import Annotated
import requests
import rigging as rg

class WeatherTool(rg.Tool):
    name = "weather"
    description = "A tool to get the weather for a location"

    def get_for_city(self, city: Annotated[str, "The city name to get weather for"]) -> str:
        try:
            city = city.replace(" ", "+")
            return requests.get(f"http://wttr.in/{city}?format=2").text
        except Exception:
            return "Failed to call the API"
```

Integrating native tools into the generation process is as easy as passing an instantiation
of your tool class to the [`ChatPipeline.using()`][rigging.chat.ChatPipeline.using] method.

```py
chat = (
    await
    rg.get_generator("gpt-4o-mini")
    .chat("How is the weather in london?")
    .using(WeatherTool())
    .run()
)
```

Note that the pipeline keeps the tool instance you pass for the life of the generation process, so any state you
store on the object will persist as the conversation progresses. Whether this is a good software design decision
is up to you.

### Under the Hood

If you are curious what is occurring "under the hood" (as you should), you can print the entire conversation text
and see the injected system prompt of tool descriptions and calling instructions, along with the raw tool calls
and their results.

Every tool assigned to the `ChatPipeline` will be processed by calling [`.get_description()`][rigging.tool.Tool.get_description]
and a minimal tool-use prompt will be injected as, or appended to, the system message.
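
If you want to see exactly what gets injected, you can render a tool description yourself and walk the messages of a
finished chat. This is a minimal sketch: it reuses the `WeatherTool` and `chat` objects from the examples above, and
the `chat.all` message list it iterates over is an assumption for illustration.

```py
# Render the description that is folded into the system prompt for this tool
print(WeatherTool().get_description())

# Walk the final conversation, including the injected system message
# and any tool call results
for message in chat.all:
    print(f"[{message.role}]: {message.content}")
```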