Commit ed84888

SBrandeis and Wauplin authored
Apply suggestions from code review
Co-authored-by: Lucain <[email protected]>
1 parent e316089 commit ed84888

File tree

1 file changed (+8 −7 lines changed)


docs/hub/models-widgets.md

Lines changed: 8 additions & 7 deletions
@@ -2,7 +2,7 @@
 
 ## What's a widget?
 
-Many model repos have a widget that allows anyone to run inferences directly in the browser. These widgets are powered by [Inference Providers](https://huggingface.co/docs/inference-providers), which provide access to hundreds of machine learning models through multiple AI infrastructure providers.
+Many model repos have a widget that allows anyone to run inferences directly in the browser. These widgets are powered by [Inference Providers](https://huggingface.co/docs/inference-providers), which provide developers streamlined, unified access to hundreds of machine learning models, backed by our serverless inference partners.
 
 Here are some examples of current popular models:
 
@@ -15,13 +15,13 @@ You can explore more models and their widgets on the [models page](https://huggi
 
 ## Enabling a widget
 
-Widgets are displayed when the model is hosted by one of our partner [Inference Providers](https://huggingface.co/docs/inference-providers), ensuring optimal performance and reliability for the model's inference. Providers autonomously choose and control which models they deploy.
+Widgets are displayed when the model is hosted by at least one Inference Provider, ensuring optimal performance and reliability for the model's inference. Providers autonomously choose and control which models they deploy.
 
-The type of widget displayed (conversational, text-to-image, etc.) is inferred from the model's `pipeline_tag`, a special tag that the Hub tries to compute automatically for all models. We choose to expose **only one** widget per model for simplicity.
+The type of widget displayed (text-generation, text-to-image, etc.) is inferred from the model's `pipeline_tag`, a special tag that the Hub tries to compute automatically for all models. The only exception is the `conversational` widget, which is shown on models with a `pipeline_tag` of either `text-generation` or `image-text-to-text`, as long as they're also tagged as `conversational`. We choose to expose **only one** widget per model for simplicity.
 
-For some libraries, such as 🤗 `Transformers`, the model type can be inferred automatically from configuration files (`config.json`). The architecture can determine the type: for example, `AutoModelForTokenClassification` corresponds to `token-classification`. If you're interested in this, you can see pseudo-code in [this gist](https://gist.github.com/julien-c/857ba86a6c6a895ecd90e7f7cab48046).
+For some libraries, such as `transformers`, the model type can be inferred automatically from configuration files (`config.json`). The architecture can determine the type: for example, `AutoModelForTokenClassification` corresponds to `token-classification`. If you're interested in this, you can see pseudo-code in [this gist](https://gist.github.com/julien-c/857ba86a6c6a895ecd90e7f7cab48046).
 
-For most other use cases, we use the model tag to determine the model task type. For example, if there is `tag: text-classification` in the [model card metadata](./model-cards), the inferred `pipeline_tag` will be `text-classification`.
+For most other use cases, we use the model tags to determine the model task type. For example, if there is `tag: text-classification` in the [model card metadata](./model-cards), the inferred `pipeline_tag` will be `text-classification`.
 
 **You can always manually override your pipeline type with `pipeline_tag: xxx` in your [model card metadata](./model-cards#model-card-metadata).** (You can also use the metadata GUI editor to do this.)
 
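The config-based inference described above can be sketched roughly as follows. This is a hypothetical illustration, not the Hub's actual implementation (see the linked gist for that); the suffix-to-task table is an assumption based on common `transformers` architecture naming conventions.

```python
# Rough sketch of inferring a pipeline_tag from a transformers config.json.
# NOT the Hub's actual implementation; the suffix table below is an
# assumption based on common transformers architecture names.
ARCHITECTURE_SUFFIX_TO_TASK = {
    "ForTokenClassification": "token-classification",
    "ForSequenceClassification": "text-classification",
    "ForQuestionAnswering": "question-answering",
    "ForCausalLM": "text-generation",
}

def infer_pipeline_tag(config):
    """Return a task type for the first recognized architecture, else None."""
    for arch in config.get("architectures", []):
        for suffix, task in ARCHITECTURE_SUFFIX_TO_TASK.items():
            if arch.endswith(suffix):
                return task
    return None
```

For instance, a `config.json` containing `"architectures": ["BertForTokenClassification"]` would map to `token-classification`, matching the example given in the text.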
@@ -159,7 +159,7 @@ We can also surface the example outputs in the Hugging Face UI, for instance, fo
 
 ## Learn More About Inference Providers
 
-Widgets are powered by [Inference Providers](https://huggingface.co/docs/inference-providers), which allows you to send HTTP requests to models in the Hugging Face Hub programmatically. It provides unified access to hundreds of machine learning models powered by our serverless inference partners.
+Widgets are powered by [Inference Providers](https://huggingface.co/docs/inference-providers), which provides unified access to hundreds of machine learning models backed by our serverless inference partners.
 
 Key benefits of Inference Providers:
 
@@ -176,7 +176,7 @@ Key benefits of Inference Providers:
 Not all models have widgets available. Widget availability depends on:
 
 1. **Task Support**: The model's task must be supported by at least one provider in the Inference Providers network
-2. **Provider Availability**: At least one provider must support the specific model
+2. **Provider Availability**: At least one provider must be serving the specific model
 3. **Model Configuration**: The model must have proper metadata and configuration files
 
 Current provider support includes:
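Taken together, the three availability conditions above amount to a simple conjunction. A hypothetical sketch, where the field names (`pipeline_tag`, `providers`, `has_config`) are illustrative assumptions and not the Hub's actual data model:

```python
# Hypothetical sketch of the three widget-availability rules listed above;
# field names are illustrative assumptions, not the Hub's actual data model.
def widget_available(model_info):
    task_supported = model_info.get("pipeline_tag") is not None  # 1. task support
    provider_serving = bool(model_info.get("providers"))         # 2. at least one provider serving it
    properly_configured = model_info.get("has_config", False)    # 3. metadata/config files present
    return task_supported and provider_serving and properly_configured
```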
@@ -190,6 +190,7 @@ Current provider support includes:
 
 For models without provider support, you can still showcase functionality using [example outputs](#example-outputs) in your model card.
 
+You can also click _Ask for provider support_ directly on the model page to encourage providers to serve the model, provided there is enough community interest.
 ## Exploring Models with the Inference Playground
 
 Before integrating models into your applications, you can test them interactively with the [Inference Playground](https://huggingface.co/playground). The playground allows you to:
