Widgets are powered by [Inference Providers](https://huggingface.co/docs/inference-providers), which provides unified access to hundreds of machine learning models backed by our serverless inference partners.
Key benefits of Inference Providers:
- **Unified API**: Access models from multiple providers (Cerebras, Cohere, Fireworks, Together AI, Replicate, and more) through a single interface
- **Automatic Provider Selection**: Intelligent routing to the best available provider for your model
- **Production-Ready**: Built for enterprise workloads with automatic failover and high availability
- **Cost-Effective**: No extra markup on provider rates
- **OpenAI-Compatible**: Drop-in replacement for OpenAI chat completions API
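Because the API is OpenAI-compatible, a request body follows the standard chat completions schema. The sketch below builds such a payload with the standard library only; the `router.huggingface.co/v1` base URL and the model name are illustrative assumptions, and a real call needs a valid Hugging Face token:

```python
import json
import urllib.request

# Assumed OpenAI-compatible endpoint exposed by Inference Providers
# (illustrative; check the Inference Providers docs for the exact URL).
BASE_URL = "https://router.huggingface.co/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def send(payload: dict, token: str) -> dict:
    """POST the payload to the router (requires network access and a token)."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Hypothetical model id, used here only to show the payload shape.
payload = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello!")
```

Since the schema matches OpenAI's, existing OpenAI client code can usually be pointed at the router by changing only the base URL and the API key.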
⚡ Learn more by reading the [Inference Providers documentation](https://huggingface.co/docs/inference-providers). You can also deploy models to dedicated [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) for more control and customization.
## Widget Availability and Provider Support
Not all models have widgets available. Widget availability depends on:
2. **Provider Availability**: At least one provider must be serving the specific model
3. **Model Configuration**: The model must have proper metadata and configuration files
To view the full list of supported tasks, check out [our dedicated documentation page](https://huggingface.co/docs/inference-providers/tasks/index). The full list of providers and the tasks each one supports is available on [this documentation page](https://huggingface.co/docs/inference-providers/index#partners).
For models without provider support, you can still showcase functionality using [example outputs](#example-outputs) in your model card.
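Example outputs are declared in the model card's YAML metadata. The sketch below follows the widget-examples convention of pairing an input with a hard-coded `output`; the input text, title, and label are illustrative assumptions:

```yaml
# Illustrative widget example with a pre-computed output
# (input and output values are made up for this sketch).
widget:
  - text: "Is this restaurant awesome or what?"
    example_title: "Sentiment example"
    output:
      label: POSITIVE
```

This lets visitors see representative behavior on the model page even when no provider is serving the model.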
You can also click _Ask for provider support_ directly on the model page to encourage providers to serve the model, provided there is enough community interest.
## Exploring Models with the Inference Playground
Before integrating models into your applications, you can test them interactively with the [Inference Playground](https://huggingface.co/playground). The playground allows you to: