docs/api-inference/tasks/audio-classification.md (5 additions, 5 deletions)
@@ -29,8 +29,9 @@ For more details about the `audio-classification` task, check out its [dedicated
 
 ### Recommended models
 
+- [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition): An emotion recognition model.
 
-This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=audio-classification&sort=trending).
+Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=audio-classification&sort=trending).
 
 
 ### Using the API
@@ -39,19 +40,18 @@ This is only a subset of the supported models. Find the model that suits you bes
-This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=automatic-speech-recognition&sort=trending).
+Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=automatic-speech-recognition&sort=trending).
@@ -108,7 +107,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 |**inputs***|_string_| The input audio data as a base64-encoded string. If no `parameters` are provided, you can also provide the audio data as a raw bytes payload. |
 |**parameters**|_object_| Additional inference parameters for Automatic Speech Recognition |
 |** return_timestamps**|_boolean_| Whether to output corresponding timestamps with the generated text |
-|** generate**|_object_| Ad-hoc parametrization of the text generation process |
+|** generation_parameters**|_object_| Ad-hoc parametrization of the text generation process |
 |** temperature**|_number_| The value used to modulate the next token probabilities. |
 |** top_k**|_integer_| The number of highest probability vocabulary tokens to keep for top-k-filtering. |
 |** top_p**|_number_| If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. |
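
For reference, a minimal sketch of a request that exercises these parameters could look like the snippet below. The endpoint URL, model ID, and the exact nesting of `generation_parameters` inside `parameters` are assumptions made for illustration; only the field names come from the table above.

```python
import base64
import requests

# Assumed serverless endpoint and model ID, for illustration only.
API_URL = "https://api-inference.huggingface.co/models/openai/whisper-large-v3"
HEADERS = {"Authorization": "Bearer hf_xxx"}  # replace with a real token

# Encode the audio file as a base64 string, as the `inputs` field expects.
with open("sample.flac", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "inputs": audio_b64,
    "parameters": {
        "return_timestamps": True,
        # Assumed nesting: generation settings grouped under `generation_parameters`.
        "generation_parameters": {
            "temperature": 0.7,
            "top_k": 50,
            "top_p": 0.95,
        },
    },
}

response = requests.post(API_URL, headers=HEADERS, json=payload)
print(response.json())
```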
docs/api-inference/tasks/chat-completion.md (97 additions, 9 deletions)
@@ -14,20 +14,23 @@ For more details, check out:
 
 ## Chat Completion
 
-Generate a response given a list of messages.
-This is a subtask of [`text-generation`](./text_generation) designed to generate responses in a conversational context.
-
-
+Generate a response given a list of messages in a conversational context, supporting both conversational Language Models (LLMs) and conversational Vision-Language Models (VLMs).
+This is a subtask of [`text-generation`](https://huggingface.co/docs/api-inference/tasks/text-generation) and [`image-text-to-text`](https://huggingface.co/docs/api-inference/tasks/image-text-to-text).
 
 ### Recommended models
 
+#### Conversational Large Language Models (LLMs)
+
 - [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it): A text-generation model trained to follow instructions.
 - [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct): Very powerful text generation model trained to follow instructions.
 - [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct): Small yet powerful text generation model.
 - [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407): Very strong open-source large language model.
 
+#### Conversational Vision-Language Models (VLMs)
+
+- [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct): Powerful vision language model with great visual understanding and reasoning capabilities.
 messages=[{"role": "user", "content": "What is the capital of France?"}],
 max_tokens=500,
 stream=True,
 ):
 print(message.choices[0].delta.content, end="")
-
 ```
 
 To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion).
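
Assembled from the fragments above, a complete streaming call with the Python client might look like the following sketch; the model ID and token are placeholders, and `chat_completion` is the `huggingface_hub` method referenced in the link above.

```python
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_xxx")  # placeholder token

# Stream a chat completion and print tokens as they arrive.
for message in client.chat_completion(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # any warm chat model should work
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=500,
    stream=True,
):
    print(message.choices[0].delta.content, end="")
```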
@@ -89,7 +91,93 @@ for await (const chunk of inference.chatCompletionStream({
 To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#chatcompletion).
+{"type":"text", "text":"Describe this image in one sentence."},
+],
+}
+],
+max_tokens=500,
+stream=True,
+):
+print(message.choices[0].delta.content, end="")
+```
+
+To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion).
 To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#chatcompletion).
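
By analogy with the previous sketch, a conversational VLM call would pass the image as part of the message content. The `image_url` content structure shown here follows the OpenAI-compatible format and, like the model ID, token, and image URL, is an assumption for illustration.

```python
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_xxx")  # placeholder token

# Stream a one-sentence description of an image from a vision-language model.
for message in client.chat_completion(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",  # placeholder VLM
    messages=[
        {
            "role": "user",
            "content": [
                # Assumed OpenAI-style content parts: one image, one text instruction.
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                {"type": "text", "text": "Describe this image in one sentence."},
            ],
        }
    ],
    max_tokens=500,
    stream=True,
):
    print(message.choices[0].delta.content, end="")
```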
docs/api-inference/tasks/feature-extraction.md (1 addition, 2 deletions)
@@ -31,7 +31,7 @@ For more details about the `feature-extraction` task, check out its [dedicated p
 - [thenlper/gte-large](https://huggingface.co/thenlper/gte-large): A powerful feature extraction model for natural language processing tasks.
 
-This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=feature-extraction&sort=trending).
+Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=feature-extraction&sort=trending).
docs/api-inference/tasks/fill-mask.md (1 addition, 2 deletions)
@@ -27,7 +27,7 @@ For more details about the `fill-mask` task, check out its [dedicated page](http
 - [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased): The famous BERT model.
 - [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base): A multilingual model trained on 100 languages.
 
-This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=fill-mask&sort=trending).
+Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=fill-mask&sort=trending).
docs/api-inference/tasks/image-classification.md (2 additions, 2 deletions)
@@ -25,8 +25,9 @@ For more details about the `image-classification` task, check out its [dedicated
 ### Recommended models
 
 - [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224): A strong image classification model.
+- [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224): A robust image classification model.
 
-This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=image-classification&sort=trending).
+Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=image-classification&sort=trending).
docs/api-inference/tasks/image-segmentation.md (1 addition, 2 deletions)
@@ -26,7 +26,7 @@ For more details about the `image-segmentation` task, check out its [dedicated p
 - [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512): Semantic segmentation model trained on ADE20k benchmark dataset with 512x512 resolution.
 
-This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=image-segmentation&sort=trending).
+Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=image-segmentation&sort=trending).