This repository was archived by the owner on Nov 10, 2025. It is now read-only.

Allow setting custom LLM for the vision tool #294

Merged
danielfsbarreto merged 4 commits into main from
vision-tool-custom-llm-support
Apr 28, 2025

Conversation

@danielfsbarreto
Collaborator

Defaults to gpt-4o-mini otherwise
Comment on lines 40 to 53

    def __init__(self, llm: LLM | None = None, **kwargs):
        super().__init__(**kwargs)

        self._llm = llm

    @property
    def client(self) -> OpenAI:
        """Cached OpenAI client instance."""
        if self._client is None:
            self._client = OpenAI()
        return self._client

    @property
    def llm(self) -> LLM:
        """Default LLM instance."""
        if self._llm is None:
            self._llm = LLM(
                model="gpt-4o-mini",
            )
        return self._llm
Contributor

What if the user wants to use a specific model?

Let's add a model_name arg and then just use that model.

e.g. it could be a Sonnet model with vision

Collaborator Author

@lorenzejay, they would be able to pass down a whole new LLM... including Sonnet

Contributor
@lucasgomide Apr 28, 2025

@danielfsbarreto did you try initializing a Tool this way:

VisionTool(_client=YOUR_MODEL)

I think it should work, because _client is not a private attribute

        return self._llm

-    def _run(self, **kwargs) -> str:
+    def _run(self, **kwargs):
Contributor

Tools must return a string whenever possible


- Added support for setting a custom model identifier with a default of "gpt-4o-mini".
- Introduced properties for model management, allowing dynamic updates and resetting of the LLM instance.
- Updated the initialization method to accept an optional LLM and model parameter.
- Refactored the image processing logic for clarity and efficiency.
@lorenzejay
Contributor

@lucasgomide can you check the latest 2 commits 🙏🏼


-        return response.choices[0].message.content
+        return response
Contributor

I guess response is not a string instance, right?

Suggested change:

-        return response
+        return response.choices[0].message.content
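The shape of the fix can be illustrated with a faked response object following the OpenAI chat-completions layout (choices[0].message.content); the helper name here is only for illustration:

```python
from types import SimpleNamespace

def run_and_unwrap(response) -> str:
    # Return the text content, not the raw response object,
    # so the tool's output is always a string.
    return response.choices[0].message.content

# Fake response with the OpenAI chat-completions shape.
fake = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="a red bicycle"))]
)
print(run_and_unwrap(fake))  # a red bicycle
print(isinstance(run_and_unwrap(fake), str))  # True
```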

Contributor

good catch

danielfsbarreto merged commit 83892cd into main Apr 28, 2025
1 check passed
danielfsbarreto deleted the vision-tool-custom-llm-support branch April 28, 2025 22:53
mplachta pushed a commit to mplachta/crewAI-tools that referenced this pull request Aug 27, 2025
* Allow setting custom LLM for the vision tool

Defaults to gpt-4o-mini otherwise

* Enhance VisionTool with model management and improved initialization

- Added support for setting a custom model identifier with a default of "gpt-4o-mini".
- Introduced properties for model management, allowing dynamic updates and resetting of the LLM instance.
- Updated the initialization method to accept an optional LLM and model parameter.
- Refactored the image processing logic for clarity and efficiency.

* docstrings

* Add stop config

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
