2 changes: 2 additions & 0 deletions docs/api-inference/_toctree.yml
@@ -30,6 +30,8 @@
title: Image Segmentation
- local: tasks/image-to-image
title: Image to Image
- local: tasks/image-text-to-text
title: Image-Text to Text
- local: tasks/object-detection
title: Object Detection
- local: tasks/question-answering
1 change: 0 additions & 1 deletion docs/api-inference/tasks/audio-classification.md
@@ -43,7 +43,6 @@ curl https://api-inference.huggingface.co/models/<REPO_ID> \
-X POST \
--data-binary '@sample1.flac' \
-H "Authorization: Bearer hf_***"

```
</curl>

4 changes: 2 additions & 2 deletions docs/api-inference/tasks/automatic-speech-recognition.md
@@ -30,6 +30,7 @@ For more details about the `automatic-speech-recognition` task, check out its [d
### Recommended models

- [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3): A powerful ASR model by OpenAI.
- [facebook/seamless-m4t-v2-large](https://huggingface.co/facebook/seamless-m4t-v2-large): An end-to-end model by Meta AI that performs ASR and speech translation.
- [pyannote/speaker-diarization-3.1](https://huggingface.co/pyannote/speaker-diarization-3.1): Powerful speaker diarization model.

This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=automatic-speech-recognition&sort=trending).
@@ -45,7 +46,6 @@ curl https://api-inference.huggingface.co/models/openai/whisper-large-v3 \
-X POST \
--data-binary '@sample1.flac' \
-H "Authorization: Bearer hf_***"

```
</curl>

@@ -108,7 +108,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
| **inputs*** | _string_ | The input audio data as a base64-encoded string. If no `parameters` are provided, you can also provide the audio data as a raw bytes payload. |
| **parameters** | _object_ | Additional inference parameters for Automatic Speech Recognition |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return_timestamps** | _boolean_ | Whether to output corresponding timestamps with the generated text |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;generate** | _object_ | Ad-hoc parametrization of the text generation process |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;generation_parameters** | _object_ | Ad-hoc parametrization of the text generation process |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;temperature** | _number_ | The value used to modulate the next token probabilities. |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_k** | _integer_ | The number of highest probability vocabulary tokens to keep for top-k-filtering. |
| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_p** | _number_ | If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. |
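To make the parameter table above concrete, here is a minimal sketch (an editorial illustration, not part of this diff) of a raw request that sets `return_timestamps` and the renamed `generation_parameters` object; the audio file, model, and parameter values are placeholder assumptions.

```py
# Minimal sketch, assuming the payload shape in the table above:
# base64-encoded audio under "inputs", generation options nested under
# "parameters.generation_parameters". Values are illustrative only.
import base64
import requests

API_URL = "https://api-inference.huggingface.co/models/openai/whisper-large-v3"
headers = {"Authorization": "Bearer hf_***"}

with open("sample1.flac", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": audio_b64,
    "parameters": {
        "return_timestamps": True,
        "generation_parameters": {"temperature": 0.2, "top_k": 50, "top_p": 0.95},
    },
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```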
8 changes: 2 additions & 6 deletions docs/api-inference/tasks/chat-completion.md
@@ -59,18 +59,15 @@ curl 'https://api-inference.huggingface.co/models/google/gemma-2-2b-it/v1/chat/c
```py
from huggingface_hub import InferenceClient

client = InferenceClient(
"google/gemma-2-2b-it",
token="hf_***",
)
client = InferenceClient(api_key="hf_***")

for message in client.chat_completion(
model="google/gemma-2-2b-it",
messages=[{"role": "user", "content": "What is the capital of France?"}],
max_tokens=500,
stream=True,
):
print(message.choices[0].delta.content, end="")

```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion).
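
A side note on the new client style shown above: because the model is now passed per call rather than to the constructor, one client can serve several models. A minimal non-streaming variant (an illustration, not part of this diff) looks like this:

```py
# Sketch of the non-streaming form: stream=False returns a single
# completion object instead of an iterator of deltas.
from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

output = client.chat_completion(
    model="google/gemma-2-2b-it",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=500,
    stream=False,
)
print(output.choices[0].message.content)
```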
@@ -89,7 +86,6 @@ for await (const chunk of inference.chatCompletionStream({
})) {
process.stdout.write(chunk.choices[0]?.delta?.content || "");
}

```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#chatcompletion).
1 change: 0 additions & 1 deletion docs/api-inference/tasks/feature-extraction.md
@@ -45,7 +45,6 @@ curl https://api-inference.huggingface.co/models/thenlper/gte-large \
-d '{"inputs": "Today is a sunny day and I will get some ice cream."}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"

```
</curl>

1 change: 0 additions & 1 deletion docs/api-inference/tasks/fill-mask.md
@@ -41,7 +41,6 @@ curl https://api-inference.huggingface.co/models/google-bert/bert-base-uncased \
-d '{"inputs": "The answer to the universe is [MASK]."}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"

```
</curl>

1 change: 0 additions & 1 deletion docs/api-inference/tasks/image-classification.md
@@ -39,7 +39,6 @@ curl https://api-inference.huggingface.co/models/google/vit-base-patch16-224 \
-X POST \
--data-binary '@cats.jpg' \
-H "Authorization: Bearer hf_***"

```
</curl>

1 change: 0 additions & 1 deletion docs/api-inference/tasks/image-segmentation.md
@@ -39,7 +39,6 @@ curl https://api-inference.huggingface.co/models/nvidia/segformer-b0-finetuned-a
-X POST \
--data-binary '@cats.jpg' \
-H "Authorization: Bearer hf_***"

```
</curl>

129 changes: 129 additions & 0 deletions docs/api-inference/tasks/image-text-to-text.md
@@ -0,0 +1,129 @@
<!---
This markdown file has been generated from a script. Please do not edit it directly.
For more details, check out:
- the `generate.ts` script: https://github.com/huggingface/hub-docs/blob/main/scripts/api-inference/scripts/generate.ts
- the task template defining the sections in the page: https://github.com/huggingface/hub-docs/tree/main/scripts/api-inference/templates/task/image-text-to-text.handlebars
- the input jsonschema specifications used to generate the input markdown table: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/tasks/image-text-to-text/spec/input.json
- the output jsonschema specifications used to generate the output markdown table: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/tasks/image-text-to-text/spec/output.json
- the snippets used to generate the example:
- curl: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/snippets/curl.ts
- python: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/snippets/python.ts
- javascript: https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/snippets/js.ts
- the "tasks" content for recommended models: https://huggingface.co/api/tasks
--->

## Image-Text to Text

Image-text-to-text models take in an image and a text prompt and output text. These models are also called vision-language models (VLMs). Unlike image-to-text models, they accept an additional text input, so they are not restricted to specific use cases such as image captioning, and they may also be trained to accept a conversation as input.

<Tip>

For more details about the `image-text-to-text` task, check out its [dedicated page](https://huggingface.co/tasks/image-text-to-text)! You will find examples and related materials.

</Tip>

### Recommended models

- [HuggingFaceM4/idefics2-8b-chatty](https://huggingface.co/HuggingFaceM4/idefics2-8b-chatty): Cutting-edge conversational vision language model that can take multiple image inputs.
- [microsoft/Phi-3.5-vision-instruct](https://huggingface.co/microsoft/Phi-3.5-vision-instruct): Strong image-text-to-text model.

This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=image-text-to-text&sort=trending).

### Using the API


<inferencesnippet>

<curl>
```bash
curl https://api-inference.huggingface.co/models/HuggingFaceM4/idefics2-8b-chatty \
-X POST \
-d '{"inputs": No input example has been defined for this model task.}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"
```
</curl>

<python>
```py
import requests

API_URL = "https://api-inference.huggingface.co/models/HuggingFaceM4/idefics2-8b-chatty"
headers = {"Authorization": "Bearer hf_***"}

from huggingface_hub import InferenceClient
```

**Review comment (Contributor):**

This is not expected (i.e. having `import requests` ... before `from huggingface_hub import InferenceClient`). I realized that https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct?inference_api=true has a problem. The model doesn't have a chat template and is therefore not tagged as "conversational", which creates this weird side effect.

So I see 3 independent things to correct here:

  1. it would be nice to recommend meta-llama/Llama-3.2-11B-Vision-Instruct first on the image-text-to-text task page (to update here)
  2. we should fix the "conversational" detection in moon-landing. At the moment, it's based only on the presence of a chat template. However, for idefics chatty 8b it seems it's using `"use_default_system_prompt": true` instead. @Rocketknight1 is it safe to assume that a model with no chat template but this parameter set to true is in fact a conversational model? And if not, which parameter could we check?
  3. For non-conversational image-text-to-text models (do those even exist?), we should fix the snippet generator so that only the requests-based snippet is displayed instead of this weird combination.

cc @osanseviero as well for viz'

**Reply (@hanouticelina, Contributor Author, Oct 4, 2024):**

Just to add: HuggingFaceM4/idefics2-8b-chatty has the `chat_template` defined in `processor_config.json`, whereas the `tokenizer.chat_template` attribute is supposed to be saved in the `tokenizer_config.json` file. I guess the template was set using `transformers.ProcessorMixin` instead.

**Reply (@hanouticelina, Contributor Author, Oct 4, 2024):**

For the 3rd point, pinging @mishig25 since it's related to huggingface.js/pull/938. Do you think it's okay to map image-text-to-text to `snippetBasic` instead and define the task input here?

```py

client = InferenceClient(api_key="hf_***")

image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"

for message in client.chat_completion(
model="HuggingFaceM4/idefics2-8b-chatty",
messages=[
{
"role": "user",
"content": [
{"type": "image_url", "image_url": {"url": image_url}},
{"type": "text", "text": "Describe this image in one sentence."},
],
}
],
max_tokens=500,
stream=True,
):
print(message.choices[0].delta.content, end="")
```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.image_text_to_text).
</python>

<js>
```js
async function query(data) {
const response = await fetch(
"https://api-inference.huggingface.co/models/HuggingFaceM4/idefics2-8b-chatty",
{
headers: {
			Authorization: "Bearer hf_***",
"Content-Type": "application/json",
},
method: "POST",
body: JSON.stringify(data),
}
);
const result = await response.json();
return result;
}

query({"inputs": No input example has been defined for this model task.}).then((response) => {
console.log(JSON.stringify(response));
});
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#imagetext-to-text).
</js>

</inferencesnippet>
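
As a hedged illustration of the conversational-detection issue raised in the review thread above (one possible check, not a confirmed Hub mechanism): a model's chat template may live in either `tokenizer_config.json` or, as with idefics2-8b-chatty, `processor_config.json`, so a detector could look in both files.

```py
# Illustrative sketch only: look for a chat_template in both config
# files, since the review thread notes idefics2-8b-chatty stores it in
# processor_config.json rather than tokenizer_config.json.
import json
from huggingface_hub import hf_hub_download

def has_chat_template(repo_id: str) -> bool:
    for filename in ("tokenizer_config.json", "processor_config.json"):
        try:
            path = hf_hub_download(repo_id, filename)
        except Exception:
            continue  # config file absent in this repo; try the next one
        with open(path) as f:
            if json.load(f).get("chat_template"):
                return True
    return False

print(has_chat_template("HuggingFaceM4/idefics2-8b-chatty"))  # expected: True
```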



### API specification

#### Request



Some options can be configured by passing headers to the Inference API. Here are the available headers:

| Headers | | |
| :--- | :--- | :--- |
| **authorization** | _string_ | Authentication header in the form `'Bearer hf_****'`, where `hf_****` is a personal user access token with Inference API permission. You can generate one from [your settings page](https://huggingface.co/settings/tokens). |
| **x-use-cache** | _boolean, default to `true`_ | There is a cache layer on the Inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching [here](../parameters#caching). |
| **x-wait-for-model** | _boolean, default to `false`_ | If the model is not ready, wait for it instead of receiving a 503 error. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability [here](../overview#eligibility). |

For more information about Inference API headers, check out the parameters [guide](../parameters).
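
As a quick illustration of the table above (a sketch under the assumption that a plain `requests` call is acceptable; the payload is a placeholder), both options are ordinary HTTP headers:

```py
# Sketch: passing the optional Inference API headers from the table
# above on a raw HTTP request. The payload here is a placeholder.
import requests

API_URL = "https://api-inference.huggingface.co/models/HuggingFaceM4/idefics2-8b-chatty"
headers = {
    "Authorization": "Bearer hf_***",
    "x-use-cache": "false",      # skip the cache for nondeterministic outputs
    "x-wait-for-model": "true",  # wait for a cold model instead of getting a 503
}

response = requests.post(API_URL, headers=headers, json={"inputs": "..."})
print(response.status_code)
```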

#### Response



1 change: 0 additions & 1 deletion docs/api-inference/tasks/object-detection.md
@@ -40,7 +40,6 @@ curl https://api-inference.huggingface.co/models/facebook/detr-resnet-50 \
-X POST \
--data-binary '@cats.jpg' \
-H "Authorization: Bearer hf_***"

```
</curl>

2 changes: 1 addition & 1 deletion docs/api-inference/tasks/question-answering.md
@@ -26,6 +26,7 @@ For more details about the `question-answering` task, check out its [dedicated p

- [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2): A robust baseline model for most question answering domains.
- [distilbert/distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert/distilbert-base-cased-distilled-squad): Small yet robust model that can answer questions.
- [google/tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq): A special model that can answer questions from tables.

This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=question-answering&sort=trending).

@@ -41,7 +42,6 @@ curl https://api-inference.huggingface.co/models/deepset/roberta-base-squad2 \
-d '{"inputs": { "question": "What is my name?", "context": "My name is Clara and I live in Berkeley." }}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"

```
</curl>

1 change: 0 additions & 1 deletion docs/api-inference/tasks/summarization.md
@@ -40,7 +40,6 @@ curl https://api-inference.huggingface.co/models/facebook/bart-large-cnn \
-d '{"inputs": "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"

```
</curl>

8 changes: 4 additions & 4 deletions docs/api-inference/tasks/table-question-answering.md
@@ -24,6 +24,7 @@ For more details about the `table-question-answering` task, check out its [dedic

### Recommended models

- [google/tapas-base-finetuned-wtq](https://huggingface.co/google/tapas-base-finetuned-wtq): A robust table question answering model.

This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=table-question-answering&sort=trending).

@@ -34,20 +35,19 @@ This is only a subset of the supported models. Find the model that suits you bes

<curl>
```bash
curl https://api-inference.huggingface.co/models/<REPO_ID> \
curl https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq \
-X POST \
-d '{"inputs": { "query": "How many stars does the transformers repository have?", "table": { "Repository": ["Transformers", "Datasets", "Tokenizers"], "Stars": ["36542", "4512", "3934"], "Contributors": ["651", "77", "34"], "Programming language": [ "Python", "Python", "Rust, Python and NodeJS" ] } }}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"

```
</curl>

<python>
```py
import requests

API_URL = "https://api-inference.huggingface.co/models/<REPO_ID>"
API_URL = "https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq"
headers = {"Authorization": "Bearer hf_***"}

def query(payload):
@@ -78,7 +78,7 @@ To use the Python client, see `huggingface_hub`'s [package reference](https://hu
```js
async function query(data) {
const response = await fetch(
"https://api-inference.huggingface.co/models/<REPO_ID>",
"https://api-inference.huggingface.co/models/google/tapas-base-finetuned-wtq",
{
headers: {
Authorization: "Bearer hf_***"
1 change: 0 additions & 1 deletion docs/api-inference/tasks/text-classification.md
@@ -43,7 +43,6 @@ curl https://api-inference.huggingface.co/models/distilbert/distilbert-base-unca
-d '{"inputs": "I like you. I love you"}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"

```
</curl>

65 changes: 30 additions & 35 deletions docs/api-inference/tasks/text-generation.md
@@ -42,55 +42,50 @@ This is only a subset of the supported models. Find the model that suits you bes

<curl>
```bash
curl https://api-inference.huggingface.co/models/google/gemma-2-2b-it \
-X POST \
-d '{"inputs": "Can you please let us know more details about your "}' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer hf_***"
curl 'https://api-inference.huggingface.co/models/google/gemma-2-2b-it/v1/chat/completions' \
-H "Authorization: Bearer hf_***" \
-H 'Content-Type: application/json' \
-d '{
"model": "google/gemma-2-2b-it",
"messages": [{"role": "user", "content": "What is the capital of France?"}],
"max_tokens": 500,
"stream": false
}'

```
</curl>

<python>
```py
import requests

API_URL = "https://api-inference.huggingface.co/models/google/gemma-2-2b-it"
headers = {"Authorization": "Bearer hf_***"}

def query(payload):
response = requests.post(API_URL, headers=headers, json=payload)
return response.json()

output = query({
"inputs": "Can you please let us know more details about your ",
})
from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

for message in client.chat_completion(
model="google/gemma-2-2b-it",
messages=[{"role": "user", "content": "What is the capital of France?"}],
max_tokens=500,
stream=True,
):
print(message.choices[0].delta.content, end="")
```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.text_generation).
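
For completeness (an editorial sketch, not part of this diff): the lower-level `text_generation` method linked above still accepts a raw prompt when chat formatting is not wanted; the parameter values shown are assumptions.

```py
# Sketch of the raw (non-chat) text_generation call on the same client.
from huggingface_hub import InferenceClient

client = InferenceClient(api_key="hf_***")

output = client.text_generation(
    "Can you please let us know more details about your ",
    model="google/gemma-2-2b-it",
    max_new_tokens=100,
)
print(output)
```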
</python>

<js>
```js
async function query(data) {
const response = await fetch(
"https://api-inference.huggingface.co/models/google/gemma-2-2b-it",
{
headers: {
Authorization: "Bearer hf_***"
"Content-Type": "application/json",
},
method: "POST",
body: JSON.stringify(data),
}
);
const result = await response.json();
return result;
}
import { HfInference } from "@huggingface/inference";

query({"inputs": "Can you please let us know more details about your "}).then((response) => {
console.log(JSON.stringify(response));
});
const inference = new HfInference("hf_***");

for await (const chunk of inference.chatCompletionStream({
model: "google/gemma-2-2b-it",
messages: [{ role: "user", content: "What is the capital of France?" }],
max_tokens: 500,
})) {
process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#textgeneration).