
Commit 65aa8d6

Enhanced Ollama integration with native tool calling support

- Implemented OllamaWrapper class for ChatOpenAI interface compatibility
- Added native tool calling support using Ollama's built-in capabilities
- Maintained backward compatibility with existing agent code
- Updated pyproject.toml to include ollama dependency
- Enhanced documentation with Ollama setup and usage instructions
- Updated CLI help text to include Ollama model option

Addresses GitHub PR Integuru-AI#12 comment about leveraging Ollama's built-in tool calling.

1 parent 09bfc39 · commit 65aa8d6

3 files changed: +93, −11 lines

README.md

Lines changed: 9 additions & 5 deletions
````diff
@@ -40,7 +40,9 @@ Let's assume we want to download utility bills:
 
 ## Setup
 
-1. Set up your OpenAI [API Keys](https://platform.openai.com/account/api-keys) and add the `OPENAI_API_KEY` environment variable. (We recommend using an account with access to models that are at least as capable as OpenAI o1-mini. Models on par with OpenAI o1-preview are ideal.)
+1. **For OpenAI models**: Set up your OpenAI [API Keys](https://platform.openai.com/account/api-keys) and add the `OPENAI_API_KEY` environment variable. (We recommend using an account with access to models that are at least as capable as OpenAI o1-mini. Models on par with OpenAI o1-preview are ideal.)
+
+   **For Ollama models**: Install and run [Ollama](https://ollama.com/download), then pull a compatible model (e.g., `ollama pull llama3.1`).
 2. Install Python requirements via poetry:
 ```
 poetry install
@@ -60,11 +62,13 @@ Let's assume we want to download utility bills:
 Log into your platform and perform the desired action (such as downloading a utility bill).
 6. Run Integuru:
 ```
-poetry run integuru --prompt "download utility bills" --model <gpt-4o|o3-mini|o1|o1-mini>
+poetry run integuru --prompt "download utility bills" --model <gpt-4o|o3-mini|o1|o1-mini|ollama>
 ```
 You can also run it via Jupyter Notebook `main.ipynb`
 
-**Recommended to use gpt-4o as the model for graph generation as it supports function calling. Integuru will automatically switch to o1-preview for code generation if available in the user's OpenAI account.**
+**Recommended to use gpt-4o as the model for graph generation as it supports function calling. Integuru will automatically switch to o1-preview for code generation if available in the user's OpenAI account.** ⚠️ **Note: o1-preview does not support function calls.**
+
+**Ollama support is now available! You can use the Ollama model by specifying `--model ollama` in the command.**
 
 ## Usage
 
@@ -75,7 +79,7 @@ poetry run integuru --help
 Usage: integuru [OPTIONS]
 
 Options:
-  --model TEXT     The LLM model to use (default is gpt-4o)
+  --model TEXT     The LLM model to use (default is gpt-4o, supports ollama)
   --prompt TEXT    The prompt for the model  [required]
   --har-path TEXT  The HAR file path (default is
                    ./network_requests.har)
@@ -132,7 +136,7 @@ We open-source unofficial APIs that we've built already. You can find them [here
 Collected data is stored locally in the `network_requests.har` and `cookies.json` files.
 
 ### LLM Usage
-The tool uses a cloud-based LLM (OpenAI's GPT-4o and o1-preview models).
+The tool uses either cloud-based LLMs (OpenAI's GPT-4o and o1-preview models) or local LLMs (via Ollama).
 
 ### LLM Training
 The LLM is not trained or improved by the usage of this tool.
````
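
Before running Integuru with `--model ollama`, a quick local check can catch setup problems early. Below is a minimal sketch (not part of the commit) using the same `ollama` Python package the commit pins in pyproject.toml; it assumes the Ollama server is running and `ollama pull llama3.1` has completed, and the prompt text is illustrative only.

```python
# Minimal sanity check before `poetry run integuru --model ollama`.
# Assumes a running Ollama server and a pulled llama3.1 model.
from ollama import chat

response = chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "Reply with one word: ready"}],
)

# The 0.3.x client returns a dict-like response.
print(response["message"]["content"])
```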

integuru/util/LLM.py

Lines changed: 83 additions & 6 deletions
```diff
@@ -1,17 +1,83 @@
 from langchain_openai import ChatOpenAI
+from ollama import chat
+import json
+from typing import Dict, List, Any, Optional
+
+class OllamaWrapper:
+    """Wrapper class to make Ollama compatible with ChatOpenAI interface"""
+
+    def __init__(self, model: str = "llama3.1", temperature: float = 1.0):
+        self.model = model
+        self.temperature = temperature
+
+    def invoke(self, prompt: str, functions: Optional[List[Dict]] = None, function_call: Optional[Dict] = None, **kwargs):
+        """
+        Invoke Ollama with function calling support, maintaining ChatOpenAI interface compatibility
+        """
+        messages = [{'role': 'user', 'content': prompt}]
+
+        # Convert functions to Ollama tools format if provided
+        tools = []
+        if functions:
+            for func in functions:
+                # Convert ChatOpenAI function format to Ollama tool format
+                tool = {
+                    'type': 'function',
+                    'function': {
+                        'name': func['name'],
+                        'description': func['description'],
+                        'parameters': func['parameters']
+                    }
+                }
+                tools.append(tool)
+
+        # Make the Ollama chat call
+        if tools:
+            response = chat(
+                model=self.model,
+                messages=messages,
+                tools=tools
+            )
+        else:
+            response = chat(
+                model=self.model,
+                messages=messages
+            )
+
+        # Create a response object that mimics ChatOpenAI's response format
+        class OllamaResponse:
+            def __init__(self, ollama_response):
+                self.content = ollama_response.message.content or ""
+                self.additional_kwargs = {}
+
+                # Convert Ollama tool calls to ChatOpenAI format
+                if hasattr(ollama_response.message, 'tool_calls') and ollama_response.message.tool_calls:
+                    # Take the first tool call (matching current usage pattern)
+                    tool_call = ollama_response.message.tool_calls[0]
+                    self.additional_kwargs['function_call'] = {
+                        'name': tool_call.function.name,
+                        'arguments': json.dumps(tool_call.function.arguments)
+                    }
+
+        return OllamaResponse(response)
 
 class LLMSingleton:
     _instance = None
-    _default_model = "gpt-4o"
+    _default_model = "gpt-4o"
     _alternate_model = "o1-preview"
+    _ollama_model = "ollama"
 
     @classmethod
     def get_instance(cls, model: str = None):
         if model is None:
             model = cls._default_model
-
-        if cls._instance is None:
-            cls._instance = ChatOpenAI(model=model, temperature=1)
+
+        if cls._instance is None or (hasattr(cls._instance, 'model') and cls._instance.model != model):
+            if model == cls._ollama_model:
+                cls._instance = OllamaWrapper(model="llama3.1", temperature=1)
+                cls._instance.model = model  # Add model attribute for consistency
+            else:
+                cls._instance = ChatOpenAI(model=model, temperature=1)
         return cls._instance
 
 
@@ -28,11 +94,22 @@ def revert_to_default_model(cls):
 
     @classmethod
     def switch_to_alternate_model(cls):
-        """Returns a ChatOpenAI instance configured for o1-miniss"""
+        """Returns a ChatOpenAI instance configured for o1-preview"""
         # Create a new instance only if we don't have one yet
-        cls._instance = ChatOpenAI(model=cls._alternate_model, temperature=1)
+        if cls._alternate_model == cls._ollama_model:
+            cls._instance = OllamaWrapper(model="llama3.1", temperature=1)
+            cls._instance.model = cls._alternate_model  # Add model attribute for consistency
+        else:
+            cls._instance = ChatOpenAI(model=cls._alternate_model, temperature=1)
 
         return cls._instance
 
+    @classmethod
+    def get_ollama_instance(cls):
+        """Returns an Ollama instance"""
+        cls._instance = OllamaWrapper(model="llama3.1", temperature=1)
+        cls._instance.model = cls._ollama_model  # Add model attribute for consistency
+        return cls._instance
+
 llm = LLMSingleton()
 
```
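
To make the new surface concrete, here is a short usage sketch of `OllamaWrapper` as the diff defines it. The `identify_request` schema is a hypothetical example for illustration, not part of the commit, and the sketch assumes a local Ollama server with `llama3.1` pulled.

```python
# Sketch: exercising OllamaWrapper's ChatOpenAI-style interface.
# The "identify_request" schema below is hypothetical, for illustration only.
from integuru.util.LLM import OllamaWrapper

llm = OllamaWrapper(model="llama3.1", temperature=1)

response = llm.invoke(
    "Which request downloads the utility bill?",
    functions=[{
        "name": "identify_request",
        "description": "Pick the HTTP request that performs the action",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    }],
)

print(response.content)  # plain-text reply ("" if the model only called the tool)
# A tool call, if any, is surfaced in ChatOpenAI's function_call shape:
# {'name': 'identify_request', 'arguments': '<JSON-encoded string>'}
print(response.additional_kwargs.get("function_call"))
```

Because `invoke` converts each `functions` entry to Ollama's `{'type': 'function', 'function': {...}}` tool format and maps the first returned tool call back into `additional_kwargs['function_call']`, agent code written against ChatOpenAI responses keeps working unchanged.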

pyproject.toml

Lines changed: 1 addition & 0 deletions
```diff
@@ -17,6 +17,7 @@ playwright = "^1.47.0"
 networkx = "^3.3"
 matplotlib = "^3.9.2"
 ipykernel = "^6.29.5"
+ollama = "^0.3.3"
 
 [tool.poetry.scripts]
 integuru = "integuru.__main__:cli"
```
