Enhanced Ollama integration with native tool calling support
- Implemented OllamaWrapper class for ChatOpenAI interface compatibility
- Added native tool calling support using Ollama's built-in capabilities
- Maintained backward compatibility with existing agent code
- Updated pyproject.toml to include ollama dependency
- Enhanced documentation with Ollama setup and usage instructions
- Updated CLI help text to include Ollama model option
Addresses GitHub PR Integuru-AI#12 comment about leveraging Ollama's built-in tool calling
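
The diff below touches only README.md; the OllamaWrapper class itself is not shown in this changeset. As a rough illustration of the approach described above (a thin adapter that exposes a ChatOpenAI-style `bind_tools()`/`invoke()` surface while passing tool definitions straight to Ollama's native `tools` parameter), a minimal sketch might look like this. The class layout, method names, and default model are assumptions for illustration, not the PR's actual code:

```python
# Hypothetical sketch only; the real OllamaWrapper in this PR may differ.
# Assumes the official `ollama` Python package (tool calling added in 0.3+,
# typed responses with `.message` in 0.4+).
import ollama


class OllamaWrapper:
    """Thin adapter exposing a ChatOpenAI-like surface over Ollama's native API."""

    def __init__(self, model: str = "llama3.1"):
        self.model = model
        self._tools: list[dict] = []

    def bind_tools(self, tools: list[dict]) -> "OllamaWrapper":
        # Mirror ChatOpenAI.bind_tools(): remember OpenAI-style JSON-schema tool
        # definitions and hand them to Ollama, which handles tool calling natively.
        bound = OllamaWrapper(self.model)
        bound._tools = list(tools)
        return bound

    def invoke(self, messages: list[dict]):
        # Delegate to Ollama's built-in tool calling instead of prompt-level workarounds.
        response = ollama.chat(
            model=self.model,
            messages=messages,
            tools=self._tools or None,
        )
        # The returned message carries .content and, when the model calls a tool,
        # .tool_calls entries with the function name and arguments.
        return response.message
```

Usage would then stay close to the existing agent code, e.g. `OllamaWrapper("llama3.1").bind_tools(tools).invoke(messages)` after the model has been pulled with `ollama pull llama3.1`.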
README.md (9 additions, 5 deletions)
@@ -40,7 +40,9 @@ Let's assume we want to download utility bills:
 
 ## Setup
 
-1. Set up your OpenAI [API Keys](https://platform.openai.com/account/api-keys) and add the `OPENAI_API_KEY` environment variable. (We recommend using an account with access to models that are at least as capable as OpenAI o1-mini. Models on par with OpenAI o1-preview are ideal.)
+1. **For OpenAI models**: Set up your OpenAI [API Keys](https://platform.openai.com/account/api-keys) and add the `OPENAI_API_KEY` environment variable. (We recommend using an account with access to models that are at least as capable as OpenAI o1-mini. Models on par with OpenAI o1-preview are ideal.)
+
+   **For Ollama models**: Install and run [Ollama](https://ollama.com/download), then pull a compatible model (e.g., `ollama pull llama3.1`).
 2. Install Python requirements via poetry:
 ```
 poetry install
@@ -60,11 +62,13 @@ Let's assume we want to download utility bills:
 Log into your platform and perform the desired action (such as downloading a utility bill).
 6. Run Integuru:
 ```
-poetry run integuru --prompt "download utility bills" --model <gpt-4o|o3-mini|o1|o1-mini>
+poetry run integuru --prompt "download utility bills" --model <gpt-4o|o3-mini|o1|o1-mini|ollama>
 ```
 You can also run it via Jupyter Notebook `main.ipynb`
 
-**Recommended to use gpt-4o as the model for graph generation as it supports function calling. Integuru will automatically switch to o1-preview for code generation if available in the user's OpenAI account.**
+**Recommended to use gpt-4o as the model for graph generation as it supports function calling. Integuru will automatically switch to o1-preview for code generation if available in the user's OpenAI account.** ⚠️ **Note: o1-preview does not support function calls.**
+
+**Ollama support is now available! You can use the Ollama model by specifying `--model ollama` in the command.**
 
 ## Usage
 
@@ -75,7 +79,7 @@ poetry run integuru --help
 Usage: integuru [OPTIONS]
 
 Options:
-  --model TEXT     The LLM model to use (default is gpt-4o)
+  --model TEXT     The LLM model to use (default is gpt-4o, supports ollama)
   --prompt TEXT    The prompt for the model  [required]
   --har-path TEXT  The HAR file path (default is
                    ./network_requests.har)
@@ -132,7 +136,7 @@ We open-source unofficial APIs that we've built already. You can find them [here
 Collected data is stored locally in the `network_requests.har` and `cookies.json` files.
 
 ### LLM Usage
-The tool uses a cloud-based LLM (OpenAI's GPT-4o and o1-preview models).
+The tool uses either cloud-based LLMs (OpenAI's GPT-4o and o1-preview models) or local LLMs (via Ollama).
 
 ### LLM Training
 The LLM is not trained or improved by the usage of this tool.