* Started VLM support for the Ollama backend:
- new `ImageCBlock`
- `images` argument for `m.instruct` and `Instruction`
- `get_images_from_component(c)` helper to extract images from components
- the formatter now handles images
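For context, Ollama's chat API carries images as base64 strings in an `images` field on each message. A minimal sketch of what the formatter might emit (the helper name is hypothetical, not the library's actual function):

```python
def to_ollama_message(text: str, images_b64: list[str]) -> dict:
    """Build an Ollama-style chat message; base64-encoded images go
    in an "images" list alongside the text content."""
    msg = {"role": "user", "content": text}
    if images_b64:  # omit the key entirely for text-only messages
        msg["images"] = list(images_b64)
    return msg
```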
* VLM support for the OpenAI backend:
- new formatting from a Mellea `Message` to an OpenAI message
- [patch] tool-calling patch to work with multiple OpenAI-compatible inference engines
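OpenAI-compatible chat APIs, by contrast, expect images as `image_url` content parts holding base64 data URLs. A sketch of the `Message`-to-OpenAI conversion (the function name is an assumption for illustration):

```python
def to_openai_message(text: str, images_b64: list[str]) -> dict:
    """Convert text plus base64-encoded PNG images into an OpenAI
    chat message whose content is a list of typed parts."""
    content: list[dict] = [{"type": "text", "text": text}]
    for img in images_b64:
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{img}"},
        })
    return {"role": "user", "content": content}
```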
* `ImageCBlock` --> `ImageBlock`:
- validation that image payloads are valid base64-encoded PNGs
- better `get_images_from_component`
- added images to `TemplateRepr`
- `Message` construction now uses images from the `TemplateRepr`
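The PNG validation mentioned above can be as simple as decoding the base64 payload and checking the fixed 8-byte PNG file signature. A sketch, not the library's actual implementation:

```python
import base64

# Every PNG file begins with these eight bytes.
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def is_valid_png_base64(data: str) -> bool:
    """True iff `data` is valid base64 whose decoded bytes start
    with the PNG signature."""
    try:
        raw = base64.b64decode(data, validate=True)
    except ValueError:  # binascii.Error is a ValueError subclass
        return False
    return raw.startswith(PNG_SIGNATURE)
```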
* `m.instruct` now also accepts a list of PIL images.
* `m.chat` now takes images.
* Fixed OpenAI tool args.
* LiteLLM uses OpenAI formatting for VLMs.
* Better pretty-printing for `Message` images.
* Examples for using vision models with different backends.
* Changed formatter cases; fixed a test failure.
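Putting these pieces together, a sketch of what `get_images_from_component` might look like; the `images` attribute name and the stub `ImageBlock` class are assumptions for illustration, not the library's actual definitions:

```python
class ImageBlock:
    """Stub standing in for mellea's ImageBlock."""
    def __init__(self, b64: str):
        self.b64 = b64

def get_images_from_component(c):
    """Return the component's list of ImageBlocks, or None if it has
    no images; raise if the list holds anything else."""
    imgs = getattr(c, "images", None)  # attribute name is an assumption
    if imgs is None:
        return None
    if not all(isinstance(i, ImageBlock) for i in imgs):
        raise ValueError("all elements of images list must be ImageBlocks.")
    return imgs if len(imgs) > 0 else None
```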
Diff excerpt:

```python
                "all elements of images list must be ImageBlocks."
            )
        if len(imgs) == 0:
            return None
        else:
            return imgs
    else:
        return None
else:
    return None


class ModelOutputThunk(CBlock):
    """A `ModelOutputThunk` is a special type of `CBlock` that we know came from a model's output. It is possible to instantiate one without the output being computed yet."""
```