`docs/hub/models-widgets.md`
## What's a widget?
Many model repos have a widget that allows anyone to run inferences directly in the browser. These widgets are powered by [Inference Providers](https://huggingface.co/docs/inference-providers), which provide developers with streamlined, unified access to hundreds of machine learning models, backed by our serverless inference partners.
Here are some examples of current popular models:
You can explore more models and their widgets on the [models page](https://huggingface.co/models).
## Enabling a widget
Widgets are displayed when the model is hosted by at least one Inference Provider, ensuring optimal performance and reliability for the model's inference. Providers autonomously choose and control which models they deploy.
The type of widget displayed (text-generation, text-to-image, etc.) is inferred from the model's `pipeline_tag`, a special tag that the Hub tries to compute automatically for all models. The only exception is the `conversational` widget, which is shown on models with a `pipeline_tag` of either `text-generation` or `image-text-to-text`, as long as they're also tagged as `conversational`. We choose to expose **only one** widget per model for simplicity.
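You can read the task the Hub computed for any model programmatically. A minimal sketch using `huggingface_hub` (the model ID here is just an example):

```python
from huggingface_hub import model_info

# Fetch the Hub's metadata for a model and read its computed pipeline_tag.
info = model_info("google-bert/bert-base-uncased")
print(info.pipeline_tag)  # e.g. "fill-mask"
```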
For some libraries, such as `transformers`, the model type can be inferred automatically from configuration files (`config.json`). The architecture can determine the type: for example, `AutoModelForTokenClassification` corresponds to `token-classification`. If you're interested in this, you can see pseudo-code in [this gist](https://gist.github.com/julien-c/857ba86a6c6a895ecd90e7f7cab48046).
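The gist above has the authoritative pseudo-code; as a rough illustration of the idea (a simplified sketch, not the Hub's actual implementation), the mapping from architecture to task might look like this:

```python
import json

# Illustrative subset of the architecture-to-task mapping;
# the Hub's real logic covers many more auto-classes.
ARCHITECTURE_SUFFIX_TO_TAG = {
    "ForTokenClassification": "token-classification",
    "ForSequenceClassification": "text-classification",
    "ForQuestionAnswering": "question-answering",
    "ForCausalLM": "text-generation",
}

def infer_pipeline_tag(config_path: str) -> str | None:
    """Guess a pipeline_tag from a transformers-style config.json."""
    with open(config_path) as f:
        config = json.load(f)
    for architecture in config.get("architectures", []):
        for suffix, tag in ARCHITECTURE_SUFFIX_TO_TAG.items():
            if architecture.endswith(suffix):
                return tag
    return None  # no match; fall back to model card tags
```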
For most other use cases, we use the model tags to determine the model task type. For example, if there is `tag: text-classification` in the [model card metadata](./model-cards), the inferred `pipeline_tag` will be `text-classification`.
**You can always manually override your pipeline type with `pipeline_tag: xxx` in your [model card metadata](./model-cards#model-card-metadata).** (You can also use the metadata GUI editor to do this).
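For example, a model card could open with metadata along these lines (hypothetical values) to pin the model to the conversational widget described above:

```yaml
---
pipeline_tag: text-generation
tags:
  - conversational
---
```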
## Learn More About Inference Providers
Widgets are powered by [Inference Providers](https://huggingface.co/docs/inference-providers), which provides unified access to hundreds of machine learning models backed by our serverless inference partners.
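For instance, the `huggingface_hub` Python client can call the same models that back the widgets. A minimal sketch (the model ID is just an example, and a Hugging Face token with inference permissions is assumed to be available, e.g. via the `HF_TOKEN` environment variable):

```python
from huggingface_hub import InferenceClient

# Picks up your Hugging Face token from the environment by default.
client = InferenceClient()

# Run a chat completion against a conversational model served by a provider.
response = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "What is a widget on the Hugging Face Hub?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```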
Key benefits of Inference Providers:
Not all models have widgets available. Widget availability depends on:
1. **Task Support**: The model's task must be supported by at least one provider in the Inference Providers network
2. **Provider Availability**: At least one provider must be serving the specific model (a quick way to check this is sketched after the list)
3. **Model Configuration**: The model must have proper metadata and configuration files
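One quick way to check the second point is to ask the Hub API for a model's provider mapping. This is a sketch that assumes the API's `expand[]=inferenceProviderMapping` query parameter; the model ID is just an example:

```python
import requests

# Ask the Hub which providers currently serve this model
# (assumes the `expand[]=inferenceProviderMapping` query parameter).
resp = requests.get(
    "https://huggingface.co/api/models/deepseek-ai/DeepSeek-R1",
    params={"expand[]": "inferenceProviderMapping"},
)
resp.raise_for_status()
providers = resp.json().get("inferenceProviderMapping", {})
print(list(providers) or "no providers serving this model")
```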
Current provider support includes:
For models without provider support, you can still showcase functionality using [example outputs](#example-outputs) in your model card.
You can also click _Ask for provider support_ directly on the model page to encourage providers to serve the model, provided there is enough community interest.
## Exploring Models with the Inference Playground
Before integrating models into your applications, you can test them interactively with the [Inference Playground](https://huggingface.co/playground). The playground allows you to: