
Commit fef6eb8

v0.46.0 - mv tools to branch todo-tool-usage

1 parent c6556b8

File tree: 6 files changed, +35 −793 lines changed

README.md

Lines changed: 7 additions & 137 deletions
@@ -2,7 +2,7 @@
 
 A Bash Library to interact with [Ollama](https://github.com/ollama/ollama)
 
-Run LLM prompts straight from your shell. Command line access to the ghost in the machine.
+Run LLM prompts straight from your shell, and more
 
 [▶️ Get Started in 30 seconds](#quickstart)[💬 Join Discord][discord-invite]
 
@@ -23,7 +23,6 @@ Run LLM prompts straight from your shell. Command line access to the ghost in th
 [Howto](#howto) :
 [Get Tech Support](#howto-get-technical-support) -
 [Chat](#howto-chat) -
-[Use Tools](#howto-use-tools) -
 [Use Turbo Mode](#howto-use-ollama-turbo-mode) -
 [Debug](#howto-debug)
 
@@ -32,7 +31,6 @@ Run LLM prompts straight from your shell. Command line access to the ghost in th
 [Helper](#helper-functions) -
 [Generate](#generate-functions) -
 [Chat](#chat-functions) -
-[Tools](#tool-functions) -
 [Model](#model-functions) -
 [Ollama](#ollama-functions) -
 [Lib](#lib-functions) -
@@ -60,12 +58,12 @@ ollama_<TAB>
 # ollama_app_turbo ollama_lib_version ollama_show
 # ollama_app_vars ollama_list ollama_show_json
 # ollama_app_version ollama_list_array ollama_thinking
-# ollama_app_version_cli ollama_list_json ollama_tools
-# ollama_app_version_json ollama_messages ollama_tools_add
-# ollama_chat ollama_messages_add ollama_tools_clear
-# ollama_chat_json ollama_messages_clear ollama_tools_count
-# ollama_chat_stream ollama_messages_count ollama_tools_is_call
-# ollama_chat_stream_json ollama_messages_last ollama_tools_run
+# ollama_app_version_cli ollama_list_json
+# ollama_app_version_json ollama_messages
+# ollama_chat ollama_messages_add
+# ollama_chat_json ollama_messages_clear
+# ollama_chat_stream ollama_messages_count
+# ollama_chat_stream_json ollama_messages_last
 # ollama_generate ollama_messages_last_json
 ```
 
@@ -172,124 +170,6 @@ Examples:
 OLLAMA_LIB_DEBUG=1 ollama_generate gpt-oss:20b "Three words about debugging"
 ```
 
-### Howto Use Tools
-
-The Tool System allows you to define custom tools that a model can use to perform actions.
-
-**Step 1: Add a Tool**
-
-First, you need to define a tool. A tool consists of a name, a command to execute, and a JSON definition that describes the tool to the model.
-
-For example, let's create a tool that gets the current weather:
-
-```bash
-# The command for our tool will be a function that takes a JSON object as input
-weather_tool() {
-    local location
-    location="$(printf '%s' "$1" | jq -r '.location')"
-    # In a real scenario, you would call a weather API here
-    printf '{"temperature": "72F", "conditions": "Sunny"}'
-}
-
-# The JSON definition for the model
-weather_definition='{
-    "name": "get_weather",
-    "description": "Get the current weather in a given location",
-    "parameters": {
-        "type": "object",
-        "properties": {
-            "location": {
-                "type": "string",
-                "description": "The city and state, e.g. San Francisco, CA"
-            }
-        },
-        "required": ["location"]
-    }
-}'
-
-# Add the tool
-ollama_tools_add "get_weather" "weather_tool" "$weather_definition"
-```
-
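The `ollama_tools_add` call above registers a name-to-command mapping. As a rough sketch of that pattern (an illustration in plain POSIX sh, not the library's actual implementation), such a registry can be as simple as:

```shell
# Hypothetical sketch of a name -> command registry; NOT the library's code.
TOOLS=""

tools_add() {                    # tools_add <name> <command>
    TOOLS="$TOOLS $1=$2"
}

tools_run() {                    # tools_run <name> <arguments_json>
    for pair in $TOOLS; do
        name="${pair%%=*}"
        cmd="${pair#*=}"
        if [ "$name" = "$1" ]; then
            "$cmd" "$2"          # hand the JSON arguments to the handler
            return $?
        fi
    done
    echo "unknown tool: $1" >&2
    return 1
}

# A stub handler standing in for weather_tool (no jq, no API call)
weather_stub() {
    printf '{"temperature": "72F", "conditions": "Sunny"}'
}

tools_add "get_weather" "weather_stub"
tools_run "get_weather" '{"location": "San Francisco, CA"}'
```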
-**Step 2: View Tools**
-
-You can see all registered tools with `ollama_tools`:
-
-```bash
-ollama_tools
-# get_weather weather_tool
-```
-
-**Step 3: Send a Tool Request**
-
-You can send a request to a model that supports tool calling. You can use either `ollama_generate` or `ollama_chat`.
-
-Using `ollama_chat`:
-```bash
-ollama_messages_add "user" "What is the weather in San Francisco?"
-response="$(ollama_chat "gpt-4-turbo")"
-```
-
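For context, a tool-enabled chat request like the one removed above corresponds to an Ollama `/api/chat` payload shaped roughly like this (a sketch based on the public Ollama API, not this library's exact serialization):

```json
{
  "model": "gpt-4-turbo",
  "messages": [
    { "role": "user", "content": "What is the weather in San Francisco?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state, e.g. San Francisco, CA"
            }
          },
          "required": ["location"]
        }
      }
    }
  ]
}
```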
-**Step 4: Consume the Tool Response**
-
-If the model decides to use a tool, the response will contain a `tool_calls` section. You can check for this with `ollama_tools_is_call`.
-
-```bash
-if ollama_tools_is_call "$response"; then
-    echo "Tool call detected!"
-fi
-```
-
-The response will look something like this:
-```json
-{
-  "model": "gpt-4-turbo",
-  "created_at": "...",
-  "message": {
-    "role": "assistant",
-    "content": "",
-    "tool_calls": [
-      {
-        "id": "call_1234",
-        "type": "function",
-        "function": {
-          "name": "get_weather",
-          "arguments": "{\n \"location\": \"San Francisco, CA\"\n}"
-        }
-      }
-    ]
-  }
-}
-```
-
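The `ollama_tools_is_call` check above can be approximated for illustration (an assumption about its observable behavior, not its real implementation) by testing the response JSON for a `tool_calls` key:

```shell
# Hypothetical stand-in for ollama_tools_is_call: exit 0 when the
# response JSON contains a "tool_calls" key, non-zero otherwise.
has_tool_call() {
    printf '%s' "$1" | grep -q '"tool_calls"'
}

response='{"message": {"role": "assistant", "content": "", "tool_calls": [{"function": {"name": "get_weather"}}]}}'

if has_tool_call "$response"; then
    echo "Tool call detected!"
fi
```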
-**Step 5: Run the Tool**
-
-Now you need to extract the tool call information and run the tool.
-
-```bash
-tool_name="$(printf '%s' "$response" | jq -r '.message.tool_calls[0].function.name')"
-tool_args="$(printf '%s' "$response" | jq -r '.message.tool_calls[0].function.arguments')"
-tool_call_id="$(printf '%s' "$response" | jq -r '.message.tool_calls[0].id')"
-
-tool_result="$(ollama_tools_run "$tool_name" "$tool_args")"
-```
-
-**Step 6: Add Tool Response to Message List (for chat)**
-
-Finally, if you are in a chat session, you need to add the tool's output back into the message list so the model can use it to generate a user-facing response.
-
-```bash
-# Create a JSON object with the tool_call_id and the result
-tool_response_json="$(jq -c -n --arg tool_call_id "$tool_call_id" --arg result "$tool_result" '{tool_call_id: $tool_call_id, result: $result}')"
-
-# Add the tool response to the messages
-ollama_messages_add "tool" "$tool_response_json"
-
-# Now, call the chat again to get the final response
-final_response="$(ollama_chat "gpt-4-turbo")"
-echo "$final_response"
-```
-
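Taken together, the removed Steps 3 through 6 form the usual tool-calling loop: ask, detect the call, run the tool, feed the result back, ask again. The control flow can be sketched with mocks in place of the real calls (`mock_chat` and `mock_tool_run` are hypothetical stand-ins for `ollama_chat` and `ollama_tools_run`, so no Ollama server is needed):

```shell
# Mocked tool-calling loop; the mocks below are hypothetical stand-ins,
# not part of the library.

mock_chat() {    # $1 = turn: 1 returns a tool call, 2 the final answer
    if [ "$1" -eq 1 ]; then
        printf '%s' '{"message": {"tool_calls": [{"function": {"name": "get_weather", "arguments": "{\"location\": \"San Francisco, CA\"}"}}]}}'
    else
        printf '%s' '{"message": {"role": "assistant", "content": "It is 72F and sunny in San Francisco."}}'
    fi
}

mock_tool_run() {    # mock_tool_run <tool_name> <arguments_json>
    printf '{"temperature": "72F", "conditions": "Sunny"}'
}

response="$(mock_chat 1)"
case "$response" in
    *'"tool_calls"'*)
        # Step 5: run the tool the model asked for
        tool_result="$(mock_tool_run "get_weather" '{"location": "San Francisco, CA"}')"
        # Step 6: the result would be appended to the message list here,
        # then the model is asked again for a user-facing answer
        final="$(mock_chat 2)"
        ;;
    *)
        final="$response"
        ;;
esac
echo "$final"
```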
 ## Demos
 
 See the **[demos](demos)** directory for all demo scripts
@@ -356,16 +236,6 @@ To run all demos and save output to Markdown files: [demos/run.demos.sh](demos/r
 | `ollama_messages_count`<br />`omco` | Count of messages in chat context | `ollama_messages_count` | number of messages to `stdout` | `0`/`1` |
 | `ollama_messages_clear`<br />`omc` | Clear all messages in chat context | `ollama_messages_clear` | none | `0`/`1` |
 
-### Tool Functions
-
-| Function<br />Alias | About | Usage | Output | Return |
-|---|---|---|---|---|
-| `ollama_tools_add`<br />`ota` | Add a tool | `ollama_tools_add "name" "command" "json_definition"` | none | `0`/`1` |
-| `ollama_tools`<br />`oto` | View all tools | `ollama_tools` | list of tools to `stdout` | `0`/`1` |
-| `ollama_tools_count`<br />`otco` | Get count of tools | `ollama_tools_count` | number of tools to `stdout` | `0`/`1` |
-| `ollama_tools_clear`<br />`otc` | Remove all tools | `ollama_tools_clear` | none | `0`/`1` |
-| `ollama_tools_is_call`<br />`otic` | Does the response have a tool call? | `ollama_tools_is_call "json_response"` | none | `0` if it has a tool call, `1` otherwise |
-| `ollama_tools_run`<br />`otr` | Run a tool | `ollama_tools_run "tool_name" "arguments_json"` | result of tool to `stdout` | `0`/`1` |
 
 ### Model Functions
371241

demos/ollama_tools_simple.md

Lines changed: 0 additions & 26 deletions
This file was deleted.

demos/ollama_tools_simple.sh

Lines changed: 0 additions & 96 deletions
This file was deleted.

demos/ollama_tools_with_args.md

Lines changed: 0 additions & 26 deletions
This file was deleted.

0 commit comments