Commit b89cc94
Fix broken VLM snippets on the hub. (#1462)
TL;DR:
this is broken:
```diff
# Use a pipeline as a high-level helper
from transformers import pipeline
import torch
pipe = pipeline("image-text-to-text", model="google/gemma-3-4b-it", torch_dtype=torch.bfloat16)
messages = [
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
{"type": "text", "text": "What animal is on the candy?"}
]
},
]
-pipe(messages)
+pipe(text=messages)
```
there's some code duplication because `text-generation` models require `pipe(messages)`, which maps to `text_inputs`.
@gante is looking for a fix on transformers `main`, but it might be a while until we actually get this in a version that works with Colab.
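For context, a minimal sketch of how the two call conventions differ, assuming the same model IDs and arguments as the snippet above; the `run_chat` wrapper is hypothetical and not part of this change:

```python
# Hedged sketch: "image-text-to-text" pipelines take the chat under the
# `text` keyword (the positional slot is for images), while plain
# "text-generation" pipelines accept the messages positionally as
# `text_inputs`. The wrapper below is illustrative only.
from transformers import pipeline
import torch

def run_chat(task, model_id, messages):
    pipe = pipeline(task, model=model_id, torch_dtype=torch.bfloat16)
    if task == "image-text-to-text":
        return pipe(text=messages)   # keyword call, as in the fix above
    return pipe(messages)            # positional call for text-generation
```

This difference is what currently forces duplicated snippets for the two tasks.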
cc: @merveenoyan @sergiopaniego for vis too.
1 file changed: +2 −1 lines (additions at new lines 1167 and 1170; one deletion at original line 1170).

0 commit comments