
Commit 379eaf6

SBrandeis and Wauplin authored

[Hub] Update the widgets docs (#1810)

* refresh widgets docs
* Apply suggestions from code review
* more links to inference-providers documentation

Co-authored-by: Lucain <[email protected]>
1 parent c0e055b commit 379eaf6

File tree

1 file changed (+57, -59 lines)


docs/hub/models-widgets.md

Lines changed: 57 additions & 59 deletions
@@ -2,23 +2,26 @@

 ## What's a widget?

-Many model repos have a widget that allows anyone to run inferences directly in the browser!
+Many model repos have a widget that allows anyone to run inference directly in the browser. These widgets are powered by [Inference Providers](https://huggingface.co/docs/inference-providers), which give developers streamlined, unified access to hundreds of machine learning models, backed by our serverless inference partners.

-Here are some examples:
-* [Named Entity Recognition](https://huggingface.co/spacy/en_core_web_sm?text=My+name+is+Sarah+and+I+live+in+London) using [spaCy](https://spacy.io/).
-* [Image Classification](https://huggingface.co/google/vit-base-patch16-224) using [🤗 Transformers](https://github.com/huggingface/transformers)
-* [Text to Speech](https://huggingface.co/julien-c/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train) using [ESPnet](https://github.com/espnet/espnet).
-* [Sentence Similarity](https://huggingface.co/osanseviero/full-sentence-distillroberta3) using [Sentence Transformers](https://github.com/UKPLab/sentence-transformers).
+Here are some examples of current popular models:

-You can try out all the widgets [here](https://huggingface-widgets.netlify.app/).
+- [DeepSeek V3](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324) - State-of-the-art open-weights conversational model
+- [Flux Kontext](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev) - Open-weights transformer model for image editing
+- [Falconsai's NSFW Detection](https://huggingface.co/Falconsai/nsfw_image_detection) - Image content moderation
+- [ResembleAI's Chatterbox](https://huggingface.co/ResembleAI/chatterbox) - Production-grade open-source text-to-speech model
+
+You can explore more models and their widgets on the [models page](https://huggingface.co/models?inference_provider=all&sort=trending) or try them interactively in the [Inference Playground](https://huggingface.co/playground).

 ## Enabling a widget

-A widget is automatically created for your model when you upload it to the Hub. To determine which pipeline and widget to display (`text-classification`, `token-classification`, `translation`, etc.), we analyze information in the repo, such as the metadata provided in the model card and configuration files. This information is mapped to a single `pipeline_tag`. We choose to expose **only one** widget per model for simplicity.
+Widgets are displayed when the model is hosted by at least one Inference Provider, ensuring optimal performance and reliability for the model's inference. Providers autonomously choose and control which models they deploy.
+
+The type of widget displayed (text-generation, text-to-image, etc.) is inferred from the model's `pipeline_tag`, a special tag that the Hub tries to compute automatically for all models. The only exception is the `conversational` widget, which is shown on models with a `pipeline_tag` of either `text-generation` or `image-text-to-text`, as long as they're also tagged as `conversational`. We choose to expose **only one** widget per model for simplicity.

-For most use cases, we determine the model type from the tags. For example, if there is `tag: text-classification` in the [model card metadata](./model-cards), the inferred `pipeline_tag` will be `text-classification`.
+For some libraries, such as `transformers`, the model type can be inferred automatically from configuration files (`config.json`). The architecture can determine the type: for example, `AutoModelForTokenClassification` corresponds to `token-classification`. If you're interested in this, you can see pseudo-code in [this gist](https://gist.github.com/julien-c/857ba86a6c6a895ecd90e7f7cab48046).

-For some libraries, such as 🤗 `Transformers`, the model type should be inferred automatically based from configuration files (`config.json`). The architecture can determine the type: for example, `AutoModelForTokenClassification` corresponds to `token-classification`. If you're interested in this, you can see pseudo-code in [this gist](https://gist.github.com/julien-c/857ba86a6c6a895ecd90e7f7cab48046).
+For most other use cases, we use the model tags to determine the model task type. For example, if there is `tag: text-classification` in the [model card metadata](./model-cards), the inferred `pipeline_tag` will be `text-classification`.

 **You can always manually override your pipeline type with `pipeline_tag: xxx` in your [model card metadata](./model-cards#model-card-metadata).** (You can also use the metadata GUI editor to do this).
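As an aside, the resolution order described in the updated section (explicit `pipeline_tag` override first, then library configuration files, then tags) can be sketched in Python. This is illustrative only; `infer_pipeline_tag` and both mappings are hypothetical, not the Hub's actual code (see the linked gist for the real pseudo-code):

```python
# Illustrative sketch of pipeline_tag resolution, following the rules
# described in the docs. All names and mappings here are hypothetical.

# Architecture -> task, for libraries whose config.json can be inspected.
ARCHITECTURE_TO_TASK = {
    "AutoModelForTokenClassification": "token-classification",
    "AutoModelForSequenceClassification": "text-classification",
}

# Tags recognized as task tags (tiny subset, for illustration only).
KNOWN_TASKS = {"text-classification", "token-classification", "translation"}


def infer_pipeline_tag(metadata, config=None):
    # 1. An explicit pipeline_tag in the model card metadata always wins.
    if "pipeline_tag" in metadata:
        return metadata["pipeline_tag"]
    # 2. For supported libraries, derive the task from the architecture.
    for arch in (config or {}).get("architectures", []):
        if arch in ARCHITECTURE_TO_TASK:
            return ARCHITECTURE_TO_TASK[arch]
    # 3. Otherwise, fall back to a recognized task tag.
    for tag in metadata.get("tags", []):
        if tag in KNOWN_TASKS:
            return tag
    return None


print(infer_pipeline_tag({"tags": ["text-classification"]}))
print(infer_pipeline_tag({"pipeline_tag": "translation", "tags": ["text-classification"]}))
```

Exactly one tag wins, which mirrors the one-widget-per-model rule stated above.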

@@ -28,7 +31,12 @@ You can specify the widget input in the model card metadata section:

 ```yaml
 widget:
-- text: "Jens Peter Hansen kommer fra Danmark"
+  - text: "This new restaurant has amazing food and great service!"
+    example_title: "Positive Review"
+  - text: "I'm really disappointed with this product. Poor quality and overpriced."
+    example_title: "Negative Review"
+  - text: "The weather is nice today."
+    example_title: "Neutral Statement"
 ```

 You can provide more than one example input. In the examples dropdown menu of the widget, they will appear as `Example 1`, `Example 2`, etc. Optionally, you can supply `example_title` as well.
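The dropdown behavior just described can be sketched with a small helper; `widget_example_titles` is a hypothetical function written for illustration, not a Hub API:

```python
# Sketch of how widget example titles could be derived, mirroring the
# fallback to "Example 1", "Example 2", ... described in the docs.
# `widget_example_titles` is a hypothetical helper, not a Hub API.

def widget_example_titles(widget):
    titles = []
    for i, example in enumerate(widget, start=1):
        # Text widgets use `text`; vision/audio widgets use `src` instead.
        if "text" not in example and "src" not in example:
            raise ValueError(f"example {i} needs a `text` or `src` field")
        # Fall back to "Example N" when no example_title is given.
        titles.append(example.get("example_title", f"Example {i}"))
    return titles


examples = [
    {"text": "This new restaurant has amazing food and great service!",
     "example_title": "Positive Review"},
    {"text": "The weather is nice today."},
]
print(widget_example_titles(examples))  # → ['Positive Review', 'Example 2']
```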
@@ -40,26 +48,26 @@ You can provide more than one example input. In the examples dropdown menu of th

 ```yaml
 widget:
-- text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
-  example_title: "Sentiment analysis"
-- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had ..."
-  example_title: "Coreference resolution"
-- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book ..."
-  example_title: "Logic puzzles"
-- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night ..."
-  example_title: "Reading comprehension"
+  - text: "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
+    example_title: "Sentiment analysis"
+  - text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had ..."
+    example_title: "Coreference resolution"
+  - text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book ..."
+    example_title: "Logic puzzles"
+  - text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night ..."
+    example_title: "Reading comprehension"
 ```

-Moreover, you can specify non-text example inputs in the model card metadata. Refer [here](./models-widgets-examples) for a complete list of sample input formats for all widget types. For vision & audio widget types, provide example inputs with `src` rather than `text`.
+Moreover, you can specify non-text example inputs in the model card metadata. Refer [here](./models-widgets-examples) for a complete list of sample input formats for all widget types. For vision & audio widget types, provide example inputs with `src` rather than `text`.

 For example, allow users to choose from two sample audio files for automatic speech recognition tasks by:

 ```yaml
 widget:
-- src: https://example.org/somewhere/speech_samples/sample1.flac
-  example_title: Speech sample 1
-- src: https://example.org/somewhere/speech_samples/sample2.flac
-  example_title: Speech sample 2
+  - src: https://example.org/somewhere/speech_samples/sample1.flac
+    example_title: Speech sample 1
+  - src: https://example.org/somewhere/speech_samples/sample2.flac
+    example_title: Speech sample 2
 ```

 Note that you can also include example files in your model repository and use
@@ -92,8 +100,7 @@ We provide example inputs for some languages and most widget types in [default-w

 As an extension to example inputs, for each widget example, you can also optionally describe the corresponding model output, directly in the `output` property.

-This is useful when the model is not yet supported by either the Inference API (for instance, the model library is not yet supported) or any other Inference Provider, so that the model page can still showcase how the model works and what results it gives.
-
+This is useful when the model is not yet supported by Inference Providers, so that the model page can still showcase how the model works and what results it gives.

 For instance, for an [automatic-speech-recognition](./models-widgets-examples#automatic-speech-recognition) model:

@@ -109,7 +116,7 @@ widget:

 <img class="hidden dark:block" width="450" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/infrence-examples-asr-dark.png"/>
 </div>

-The `output` property should be a YAML dictionary that represents the Inference API output.
+The `output` property should be a YAML dictionary that represents the output format from Inference Providers.

 For a model that outputs text, see the example above.
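To make the `output` property concrete, a hypothetical speech-recognition model card could pair an example input with its expected output like this (the URL and transcript below are made up for illustration):

```yaml
widget:
  - src: https://example.org/speech_samples/sample1.flac
    example_title: Speech sample 1
    # `output` mirrors the output format from Inference Providers;
    # for speech recognition, that is a dictionary with a `text` field.
    output:
      text: "Hello, and welcome to the show."
```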

@@ -150,44 +157,35 @@ We can also surface the example outputs in the Hugging Face UI, for instance, fo

 <img width="650" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/models-gallery.png"/>
 </div>

-## What are all the possible task/widget types?
+## Widget Availability and Provider Support

-You can find all the supported tasks in [pipelines.ts file](https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/pipelines.ts).
+Not all models have widgets available. Widget availability depends on:

-Here are some links to examples:
+1. **Task Support**: The model's task must be supported by at least one provider in the Inference Providers network
+2. **Provider Availability**: At least one provider must be serving the specific model
+3. **Model Configuration**: The model must have proper metadata and configuration files

-- `text-classification`, for instance [`FacebookAI/roberta-large-mnli`](https://huggingface.co/FacebookAI/roberta-large-mnli)
-- `token-classification`, for instance [`dbmdz/bert-large-cased-finetuned-conll03-english`](https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english)
-- `question-answering`, for instance [`distilbert/distilbert-base-uncased-distilled-squad`](https://huggingface.co/distilbert/distilbert-base-uncased-distilled-squad)
-- `translation`, for instance [`google-t5/t5-base`](https://huggingface.co/google-t5/t5-base)
-- `summarization`, for instance [`facebook/bart-large-cnn`](https://huggingface.co/facebook/bart-large-cnn)
-- `conversational`, for instance [`facebook/blenderbot-400M-distill`](https://huggingface.co/facebook/blenderbot-400M-distill)
-- `text-generation`, for instance [`openai-community/gpt2`](https://huggingface.co/openai-community/gpt2)
-- `fill-mask`, for instance [`distilbert/distilroberta-base`](https://huggingface.co/distilbert/distilroberta-base)
-- `zero-shot-classification` (implemented on top of a nli `text-classification` model), for instance [`facebook/bart-large-mnli`](https://huggingface.co/facebook/bart-large-mnli)
-- `table-question-answering`, for instance [`google/tapas-base-finetuned-wtq`](https://huggingface.co/google/tapas-base-finetuned-wtq)
-- `sentence-similarity`, for instance [`osanseviero/full-sentence-distillroberta2`](/osanseviero/full-sentence-distillroberta2)
+To view the full list of supported tasks, check out [our dedicated documentation page](https://huggingface.co/docs/inference-providers/tasks/index).

-## How can I control my model's widget HF-Inference API parameters?
+The list of all providers and the tasks they support is available in [this documentation page](https://huggingface.co/docs/inference-providers/index#partners).

-Generally, the HF-Inference API for a model uses the default pipeline settings associated with each task. But if you'd like to change the pipeline's default settings and specify additional inference parameters, you can configure the parameters directly through the model card metadata. Refer [here](https://huggingface.co/docs/inference-providers/detailed_parameters) for some of the most commonly used parameters associated with each task.
+For models without provider support, you can still showcase functionality using [example outputs](#example-outputs) in your model card.

-For example, if you want to specify an aggregation strategy for a NER task in the widget:
+You can also click _Ask for provider support_ directly on the model page to encourage providers to serve the model, provided there is enough community interest.

-```yaml
-inference:
-  parameters:
-    aggregation_strategy: "none"
-```
+## Exploring Models with the Inference Playground

-Or if you'd like to change the temperature for a summarization task in the widget:
+Before integrating models into your applications, you can test them interactively with the [Inference Playground](https://huggingface.co/playground). The playground allows you to:

-```yaml
-inference:
-  parameters:
-    temperature: 0.7
-```
-
-Inference Providers allows you to send HTTP requests to models in the Hugging Face Hub programmatically. It is an abstraction layer on top of External providers. ⚡⚡ Learn more about it by reading the [Inference Providers documentation](/docs/inference-providers).
-Finally, you can also deploy all those models to dedicated [Inference Endpoints](https://huggingface.co/docs/inference-endpoints).
+- Test different [chat completion models](https://huggingface.co/models?inference_provider=all&sort=trending&other=conversational) with custom prompts
+- Compare responses across different models
+- Experiment with inference parameters like temperature, max tokens, and more
+- Find the perfect model for your specific use case
+
+The playground uses the same Inference Providers infrastructure that powers the widgets, so you can expect similar performance and capabilities when you integrate the models into your own applications.
+
+<div class="flex justify-center">
+  <a href="https://huggingface.co/playground" target="_blank">
+    <img src="https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/9_Tgf0Tv65srhBirZQMTp.png" alt="Inference Playground" style="max-width: 550px; width: 100%; border-radius: 8px;"/>
+  </a>
+</div>
