fix: tests for VLM calls #131
Conversation
Merge Protections: Your pull request matches the following merge protections and will not be merged until they are valid.
🟢 Enforce conventional commit: this rule succeeded. Make sure that we follow https://www.conventionalcommits.org/en/v1.0.0/
jakelorocco left a comment:
looks like some of your tests are failing:
test/stdlib_basics/test_vision.py::test_image_block_in_instruction FAILED [ 81%]
test/stdlib_basics/test_vision.py::test_image_block_in_chat FAILED [ 82%]
Because your backend providers are different, each expects the images to be in a different place in the chat payload. It might be worth writing tests explicitly for those scenarios.
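For context, a minimal sketch of where each style puts the image, following the public Ollama and OpenAI chat message formats (the message text and base64 payload are placeholders, not taken from this PR):

```python
import base64

# Placeholder image bytes; a real test would load an actual image file.
image_b64 = base64.b64encode(b"fake image bytes").decode()

# Ollama-style message: images travel as a top-level "images" list of
# base64 strings on the message itself.
ollama_msg = {
    "role": "user",
    "content": "Describe this picture.",
    "images": [image_b64],
}

# OpenAI-style message: the image is a structured content part with an
# "image_url" entry (a data URL or a remote URL).
openai_msg = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this picture."},
        {
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{image_b64}"},
        },
    ],
}
```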
@jakelorocco I separated the tests for Ollama-style vs. OpenAI-style image prompts; they should cover all styles of models for now.
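A rough sketch of what the per-style assertions could look like; `extract_images` and the message literals below are illustrative stand-ins, not the actual code in test_vision.py:

```python
def extract_images(message: dict) -> list[str]:
    """Collect image payloads from a chat message regardless of provider style."""
    # Ollama-style: base64 strings in a top-level "images" list.
    images = list(message.get("images", []))
    # OpenAI-style: "image_url" content parts inside a structured content list.
    content = message.get("content")
    if isinstance(content, list):
        images += [
            part["image_url"]["url"]
            for part in content
            if part.get("type") == "image_url"
        ]
    return images


def test_ollama_style_message_carries_image():
    msg = {"role": "user", "content": "Describe this.", "images": ["aGVsbG8="]}
    assert extract_images(msg) == ["aGVsbG8="]


def test_openai_style_message_carries_image():
    msg = {
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this."},
            {
                "type": "image_url",
                "image_url": {"url": "data:image/png;base64,aGVsbG8="},
            },
        ],
    }
    assert extract_images(msg) == ["data:image/png;base64,aGVsbG8="]
```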
jakelorocco left a comment:
LGTM; hopefully we can switch the GitHub Actions model to a VLM as well and avoid the GitHub skip.
fix for #128