feat: enable VLMs #126
Conversation
- new `ImageCBlock`
- `images` argument for `m.instruct` and `Instruction`
- `get_images_from_component(c)` method to extract images from components
- formatter handles images now
- new formatting from Mellea `Message` to OpenAI message
- [patch] tool calling patch to work with multiple OpenAI-compatible inference engines
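The "Mellea `Message` to OpenAI message" bullet above can be sketched as follows. This is a minimal, hypothetical helper, not the PR's actual code: it assumes images arrive as base64 PNG strings and maps them onto the OpenAI chat-completions multi-part `content` shape (`text` plus `image_url` parts with data URLs).

```python
import base64

def to_openai_message(role, text, images=None):
    # Hypothetical helper; names and structure are illustrative,
    # not the PR's actual implementation.
    if not images:
        # Text-only messages keep the simple string content form.
        return {"role": role, "content": text}
    # Multimodal messages use a list of typed content parts.
    parts = [{"type": "text", "text": text}]
    for img_b64 in images:
        parts.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{img_b64}"},
        })
    return {"role": role, "content": parts}

# Example: a stub base64 payload (the PNG magic bytes).
png_stub = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode()
msg = to_openai_message("user", "What is in this image?", [png_stub])
```

Keeping the plain-string `content` for text-only messages matters for the "[patch]" bullet: some OpenAI-compatible inference engines reject the list form when no image is present.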
- valid png base64 testing
- better `get_images_from_component`
- adding images to `TemplateRepr`
- using images from TR for `Message` construction
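The "valid png base64 testing" bullet can be illustrated with a small check like the one below. This is a hypothetical sketch of the test's intent, not the PR's code: it verifies that a string decodes as strict base64 and that the decoded bytes begin with the PNG magic number.

```python
import base64

# The eight magic bytes that open every PNG file.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def is_valid_png_base64(s):
    # Hypothetical validator mirroring the test commit's intent.
    try:
        # validate=True rejects non-alphabet characters instead of
        # silently discarding them.
        raw = base64.b64decode(s, validate=True)
    except ValueError:
        # binascii.Error (raised on bad padding/characters) is a
        # subclass of ValueError.
        return False
    return raw.startswith(PNG_MAGIC)
```

A check like this catches both corrupt base64 strings and well-formed base64 that does not actually encode a PNG.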
jakelorocco left a comment:
Looks like a test is failing due to the new message field as well.
fix test failure
Needs tests, but we will merge now. Opened #128 to track.
Enable image handling for VLMs on the Ollama, OpenAI, and LiteLLM backends.
HF and Wx are currently supported only through LiteLLM.
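The `get_images_from_component(c)` behavior described above can be sketched as a recursive walk over a component tree. Both the `Component` class layout and the traversal are assumptions for illustration; the PR's actual data model may differ.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Component:
    # Hypothetical component node: may carry one base64 image string
    # and may nest child parts. The real Mellea classes likely differ.
    image: Optional[str] = None
    parts: List["Component"] = field(default_factory=list)

def get_images_from_component(c):
    # Depth-first collection of every image in the component tree,
    # in document order.
    images = []
    if c.image is not None:
        images.append(c.image)
    for part in c.parts:
        images.extend(get_images_from_component(part))
    return images

# Example tree: two images at different nesting depths.
tree = Component(parts=[
    Component(image="aGk="),
    Component(parts=[Component(image="eW8=")]),
])
```

A depth-first walk like this keeps images in the order they appear in the prompt, which is what a formatter needs when it interleaves text and image parts into a backend message.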