Commit 8523717

Merge pull request #103 from attogram/docs-and-format-fix
Fix: Standardize usage variables and update README
2 parents: b536714 + cf6c39b

File tree: 2 files changed (+156, -15 lines)


README.md

Lines changed: 133 additions & 2 deletions
@@ -29,8 +29,9 @@ Run LLM prompts straight from your shell. Command line access to the ghost in th

(The hunk adds a [Tools](#tools-functions) entry to the function index:)

[Functions](#functions) :
[Api](#api-functions) -
[Helper](#helper-functions) -
[Generate](#generate-functions) -
[Chat](#chat-functions) -
[Tools](#tools-functions) -
[Model](#model-functions) -
[Ollama](#ollama-functions) -
[Lib](#lib-functions) -
@@ -173,6 +174,125 @@ Examples:

(The hunk adds a new "How to Use Tools" section after the debug example; all lines below are additions:)

### How to Use Tools

The Tool System allows you to define custom tools that a model can use to perform actions.

**Step 1: Add a Tool**

First, define a tool. A tool consists of a name, a command to execute, and a JSON definition that describes the tool to the model.

For example, let's create a tool that gets the current weather:

```bash
# The command for our tool will be a function that takes a JSON object as input
weather_tool() {
  local location
  location="$(printf '%s' "$1" | jq -r '.location')"
  # In a real scenario, you would call a weather API here
  printf '{"temperature": "72F", "conditions": "Sunny"}'
}

# The JSON definition for the model
weather_definition='{
  "name": "get_weather",
  "description": "Get the current weather in a given location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "The city and state, e.g. San Francisco, CA"
      }
    },
    "required": ["location"]
  }
}'

# Add the tool
ollama_tools_add "get_weather" "weather_tool" "$weather_definition"
```

**Step 2: View Tools**

You can list all registered tools with `ollama_tools`:

```bash
ollama_tools
# get_weather weather_tool
```

**Step 3: Send a Tool Request**

You can send a request to any model that supports tool calling, using either `ollama_generate` or `ollama_chat`.

Using `ollama_chat`:

```bash
ollama_messages_add "user" "What is the weather in San Francisco?"
response="$(ollama_chat "gpt-4-turbo")"
```
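The `ollama_generate` path mentioned above would be a one-shot call with no message context. A minimal sketch, reusing the model name from the chat example and assuming the response carries the same `tool_calls` structure:

```bash
# One-shot tool request via ollama_generate (model, then prompt, as in the debug example)
response="$(ollama_generate "gpt-4-turbo" "What is the weather in San Francisco?")"
```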
**Step 4: Consume the Tool Response**

If the model decides to use a tool, the response will contain a `tool_calls` section. You can check for this with `ollama_tools_is_call`:

```bash
if ollama_tools_is_call "$response"; then
  echo "Tool call detected!"
fi
```

The response will look something like this:

```json
{
  "model": "gpt-4-turbo",
  "created_at": "...",
  "message": {
    "role": "assistant",
    "content": "",
    "tool_calls": [
      {
        "id": "call_1234",
        "type": "function",
        "function": {
          "name": "get_weather",
          "arguments": "{\n \"location\": \"San Francisco, CA\"\n}"
        }
      }
    ]
  },
  ...
}
```

**Step 5: Run the Tool**

Now extract the tool call information and run the tool:

```bash
tool_name="$(printf '%s' "$response" | jq -r '.message.tool_calls[0].function.name')"
tool_args="$(printf '%s' "$response" | jq -r '.message.tool_calls[0].function.arguments')"
tool_call_id="$(printf '%s' "$response" | jq -r '.message.tool_calls[0].id')"

tool_result="$(ollama_tools_run "$tool_name" "$tool_args")"
```
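Step 5 handles only the first entry in `tool_calls`, but a model may return several calls in one response. A minimal sketch that loops over all of them, assuming the same response shape and `jq` paths as above:

```bash
# Iterate over every tool call in the response, not just the first
count="$(printf '%s' "$response" | jq '.message.tool_calls | length')"
for ((i = 0; i < count; i++)); do
  tool_name="$(printf '%s' "$response" | jq -r ".message.tool_calls[$i].function.name")"
  tool_args="$(printf '%s' "$response" | jq -r ".message.tool_calls[$i].function.arguments")"
  tool_result="$(ollama_tools_run "$tool_name" "$tool_args")"
  printf '%s\n' "$tool_result"
done
```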
**Step 6: Add Tool Response to Message List (for chat)**

Finally, if you are in a chat session, add the tool's output back into the message list so the model can use it to generate a user-facing response:

```bash
# Create a JSON object with the tool_call_id and the result
tool_response_json="$(jq -c -n --arg tool_call_id "$tool_call_id" --arg result "$tool_result" '{tool_call_id: $tool_call_id, result: $result}')"

# Add the tool response to the messages
ollama_messages_add "tool" "$tool_response_json"

# Now, call the chat again to get the final response
final_response="$(ollama_chat "gpt-4-turbo")"
echo "$final_response"
```

## Demos

See the **[demos](demos)** directory for all demo scripts
@@ -245,6 +365,17 @@ To run all demos and save output to Markdown files: [demos/run.demos.sh](demos/r

(The hunk adds a Tools Functions table between the Messages Functions and Model Functions tables:)

### Tools Functions

| Function<br />Alias | About | Usage | Output | Return |
|---|---|---|---|---|
| `ollama_tools_add`<br />`ota` | Add a tool | `ollama_tools_add "name" "command" "json_definition"` | none | `0`/`1` |
| `ollama_tools`<br />`oto` | View all tools | `ollama_tools` | list of tools to `stdout` | `0`/`1` |
| `ollama_tools_count`<br />`otco` | Get count of tools | `ollama_tools_count` | number of tools to `stdout` | `0`/`1` |
| `ollama_tools_clear`<br />`otc` | Remove all tools | `ollama_tools_clear` | none | `0`/`1` |
| `ollama_tools_is_call`<br />`otic` | Does the response have a tool call? | `ollama_tools_is_call "json_response"` | none | `0` if it has a tool call, `1` otherwise |
| `ollama_tools_run`<br />`otr` | Run a tool | `ollama_tools_run "tool_name" "arguments_json"` | result of tool to `stdout` | `0`/`1` |
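The registry-housekeeping functions from this table are easy to exercise; a small sketch, assuming one tool is registered per the how-to above:

```bash
ollama_tools_count   # prints the number of registered tools, e.g. 1
ollama_tools_clear   # remove all registered tools
ollama_tools_count   # now prints 0
```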

ollama_bash_lib.sh

Lines changed: 23 additions & 13 deletions
```diff
@@ -1746,14 +1746,16 @@ Usage: ollama_app_version_json"
 # Requires: ollama
 # Returns: 0 on success, 1 on error
 ollama_app_version_cli() {
-  local usage="Usage: ollama_app_version_cli"
+  local usage
+  usage="ollama_app_version_cli - Ollama App version, CLI version
+Usage: ollama_app_version_cli"
   if [[ $# -gt 0 ]]; then
     if [[ $# -eq 1 && ("$1" == "-h" || "$1" == "--help") ]]; then
-      echo "$usage"
+      printf "%s\n" "$usage"
       return 0
     else
       _error "ollama_app_version_cli: Unknown argument(s): $*"
-      _error "$usage"
+      printf "%s\n" "$usage" >&2
       return 1
     fi
   fi
```
```diff
@@ -1776,10 +1778,12 @@ ollama_app_version_cli() {
 # Requires: none
 # Returns: 0 on success, 1 on error
 ollama_thinking() {
-  local usage="Usage: ollama_thinking [on|off|hide]"
+  local usage
+  usage="ollama_thinking - Ollama Thinking Mode
+Usage: ollama_thinking [on|off|hide]"
   for arg in "$@"; do
     if [[ "$arg" == "-h" || "$arg" == "--help" ]]; then
-      echo "$usage"
+      printf "%s\n" "$usage"
       return 0
     fi
   done
```
```diff
@@ -1813,14 +1817,16 @@ ollama_thinking() {
 # Requires: compgen (for function list)
 # Returns: 0 on success, 1 on missing compgen or column
 ollama_lib_about() {
-  local usage="Usage: ollama_lib_about"
+  local usage
+  usage="ollama_lib_about - About Ollama Bash Lib
+Usage: ollama_lib_about"
   if [[ $# -gt 0 ]]; then
     if [[ $# -eq 1 && ("$1" == "-h" || "$1" == "--help") ]]; then
-      echo "$usage"
+      printf "%s\n" "$usage"
       return 0
     else
       _error "ollama_lib_about: Unknown argument(s): $*"
-      _error "$usage"
+      printf "%s\n" "$usage" >&2
       return 1
     fi
   fi
```
```diff
@@ -1866,14 +1872,16 @@ ollama_lib_about() {
 # Requires: none
 # Returns: 0
 ollama_lib_version() {
-  local usage="Usage: ollama_lib_version"
+  local usage
+  usage="ollama_lib_version - Ollama Bash Lib version
+Usage: ollama_lib_version"
   if [[ $# -gt 0 ]]; then
     if [[ $# -eq 1 && ("$1" == "-h" || "$1" == "--help") ]]; then
-      echo "$usage"
+      printf "%s\n" "$usage"
       return 0
     else
       _error "ollama_lib_version: Unknown argument(s): $*"
-      _error "$usage"
+      printf "%s\n" "$usage" >&2
       return 1
     fi
   fi
```
```diff
@@ -1892,10 +1900,12 @@ ollama_lib_version() {
 # Requires: none
 # Returns: 0 on success, 1 or higher on error
 ollama_eval() {
-  local usage="Usage: ollama_eval \"task\" \"[model]\""
+  local usage
+  usage="ollama_eval - Command Line Eval
+Usage: ollama_eval \"task\" \"[model]\""
   for arg in "$@"; do
     if [[ "$arg" == "-h" || "$arg" == "--help" ]]; then
-      echo "$usage"
+      printf "%s\n" "$usage"
       return 0
     fi
   done
```
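Every hunk in this file applies the same standardization: the usage text moves into a separate assignment that leads with a one-line description, help output goes through `printf` instead of `echo`, and usage shown after an error goes to `stderr`. Distilled as a template for the no-argument functions (the function name and description are placeholders; `_error` is the library's internal error helper seen above):

```bash
example_function() {
  local usage
  usage="example_function - One-line description
Usage: example_function"
  if [[ $# -gt 0 ]]; then
    if [[ $# -eq 1 && ("$1" == "-h" || "$1" == "--help") ]]; then
      printf "%s\n" "$usage"          # -h/--help: usage to stdout, success
      return 0
    else
      _error "example_function: Unknown argument(s): $*"
      printf "%s\n" "$usage" >&2      # bad arguments: usage to stderr, failure
      return 1
    fi
  fi
  # ... function body ...
}
```

Functions that do take arguments (`ollama_thinking`, `ollama_eval`) use the `for arg in "$@"` variant of the same pattern, scanning for `-h`/`--help` before processing.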
