Allow setting custom LLM for the vision tool #294
Conversation
Defaults to gpt-4o-mini otherwise
```python
def __init__(self, llm: LLM | None = None, **kwargs):
    super().__init__(**kwargs)
    self._llm = llm

@property
def client(self) -> OpenAI:
    """Cached OpenAI client instance."""
    if self._client is None:
        self._client = OpenAI()
    return self._client

@property
def llm(self) -> LLM:
    """Default LLM instance."""
    if self._llm is None:
        self._llm = LLM(
            model="gpt-4o-mini",
        )
    return self._llm
```
What if the user wants a specific model? Let's add a `model_name` arg and just use that model.
E.g. it could be a Sonnet model with vision.
@lorenzejay, they would be able to pass down a whole new LLM... including Sonnet
@danielfsbarreto did you try initializing a Tool this way?
`VisionTool(_client=YOUR_MODEL)` — I think it should work, because `_client` is not a private attribute.
```python
def _run(self, **kwargs):
```

Suggested change:

```python
def _run(self, **kwargs) -> str:
```
Tools must return a string whenever possible
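The reviewer's string contract can be enforced defensively. As a hedged sketch (the helper name `ensure_text` is illustrative, not from the PR), a tool can coerce whatever the underlying call returns into text:

```python
def ensure_text(result) -> str:
    # Agents consume tool output as plain text, so coerce anything
    # that is not already a str before returning it.
    return result if isinstance(result, str) else str(result)
```

Keeping the `-> str` annotation on `_run` and routing the return value through a coercion like this guarantees downstream agents never receive a raw response object.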
- Added support for setting a custom model identifier with a default of "gpt-4o-mini".
- Introduced properties for model management, allowing dynamic updates and resetting of the LLM instance.
- Updated the initialization method to accept an optional LLM and model parameter.
- Refactored the image processing logic for clarity and efficiency.
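The model-management idea in the commit notes above can be sketched with a property setter that invalidates the cached LLM. All names here (`ModelManagedTool`, the dict placeholder for the LLM) are hypothetical stand-ins, not the real implementation:

```python
class ModelManagedTool:
    """Sketch of the commit's model-management idea (hypothetical names)."""

    def __init__(self, model: str = "gpt-4o-mini"):
        self._model = model
        self._llm = None

    @property
    def model(self) -> str:
        return self._model

    @model.setter
    def model(self, value: str) -> None:
        # Switching models drops the cached LLM so it is rebuilt lazily
        # with the new identifier on next access.
        self._model = value
        self._llm = None

    @property
    def llm(self) -> dict:
        if self._llm is None:
            # Placeholder dict standing in for a real LLM object.
            self._llm = {"model": self._model}
        return self._llm
```

Resetting the cache in the setter is what makes "dynamic updates" safe: a stale LLM built for the old model name can never be reused.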
@lucasgomide can you check the latest 2 commits 🙏🏼
```python
return response
```

I guess `response` is not a string instance, right?

Suggested change:

```python
return response.choices[0].message.content
```
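The suggested change above can be exercised without calling the API. This sketch builds a fake OpenAI-style response with `SimpleNamespace` (the fake object and `extract_text` helper are illustrative assumptions, not SDK code):

```python
from types import SimpleNamespace


def extract_text(response) -> str:
    # Pull the plain-text answer out of an OpenAI-style chat completion
    # object instead of returning the object itself.
    return response.choices[0].message.content


# Hypothetical stand-in for the real SDK response type.
fake = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="a red apple on a table"))]
)
```

Returning `response` directly would hand the agent a structured object; drilling into `choices[0].message.content` yields the string the tool contract expects.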
* Allow setting custom LLM for the vision tool

  Defaults to gpt-4o-mini otherwise

* Enhance VisionTool with model management and improved initialization

  - Added support for setting a custom model identifier with a default of "gpt-4o-mini".
  - Introduced properties for model management, allowing dynamic updates and resetting of the LLM instance.
  - Updated the initialization method to accept an optional LLM and model parameter.
  - Refactored the image processing logic for clarity and efficiency.

* docstrings

* Add stop config

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>