
Commit bc0dca6

Merge remote-tracking branch 'upstream/main'

2 parents 9b2d7a6 + bf3dfa4

23 files changed: +219 additions, -23 deletions

docs/inference-providers/providers/cohere.md

Lines changed: 1 addition & 1 deletion
@@ -56,6 +56,6 @@ Find out more about Chat Completion (VLM) [here](../tasks/chat-completion).

 <InferenceSnippet
 pipeline=image-text-to-text
-providersMapping={ {"cohere":{"modelId":"CohereLabs/aya-vision-8b","providerModelId":"c4ai-aya-vision-8b"} } }
+providersMapping={ {"cohere":{"modelId":"CohereLabs/aya-vision-32b","providerModelId":"c4ai-aya-vision-32b"} } }
 conversational />

docs/inference-providers/providers/fal-ai.md

Lines changed: 1 addition & 1 deletion
@@ -64,6 +64,6 @@ Find out more about Text To Video [here](../tasks/text_to_video).

 <InferenceSnippet
 pipeline=text-to-video
-providersMapping={ {"fal-ai":{"modelId":"Wan-AI/Wan2.1-T2V-14B","providerModelId":"fal-ai/wan-t2v"} } }
+providersMapping={ {"fal-ai":{"modelId":"Lightricks/LTX-Video","providerModelId":"fal-ai/ltx-video"} } }
 />

docs/inference-providers/providers/hf-inference.md

Lines changed: 113 additions & 3 deletions
@@ -39,13 +39,23 @@ If you are interested in deploying models to a dedicated and autoscaling infrast
 ## Supported tasks


+### Audio Classification
+
+Find out more about Audio Classification [here](../tasks/audio_classification).
+
+<InferenceSnippet
+pipeline=audio-classification
+providersMapping={ {"hf-inference":{"modelId":"firdhokk/speech-emotion-recognition-with-openai-whisper-large-v3","providerModelId":"firdhokk/speech-emotion-recognition-with-openai-whisper-large-v3"} } }
+/>
+
+
 ### Automatic Speech Recognition

 Find out more about Automatic Speech Recognition [here](../tasks/automatic_speech_recognition).

 <InferenceSnippet
 pipeline=automatic-speech-recognition
-providersMapping={ {"hf-inference":{"modelId":"openai/whisper-large-v3-turbo","providerModelId":"openai/whisper-large-v3-turbo"} } }
+providersMapping={ {"hf-inference":{"modelId":"openai/whisper-large-v3","providerModelId":"openai/whisper-large-v3"} } }
 />


@@ -65,7 +75,7 @@ Find out more about Chat Completion (VLM) [here](../tasks/chat-completion).

 <InferenceSnippet
 pipeline=image-text-to-text
-providersMapping={ {"hf-inference":{"modelId":"meta-llama/Llama-3.2-11B-Vision-Instruct","providerModelId":"meta-llama/Llama-3.2-11B-Vision-Instruct"} } }
+providersMapping={ {"hf-inference":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it"} } }
 conversational />


@@ -75,7 +85,77 @@ Find out more about Feature Extraction [here](../tasks/feature_extraction).

 <InferenceSnippet
 pipeline=feature-extraction
-providersMapping={ {"hf-inference":{"modelId":"kyutai/mimi","providerModelId":"kyutai/mimi"} } }
+providersMapping={ {"hf-inference":{"modelId":"intfloat/multilingual-e5-large-instruct","providerModelId":"intfloat/multilingual-e5-large-instruct"} } }
+/>
+
+
+### Fill Mask
+
+Find out more about Fill Mask [here](../tasks/fill_mask).
+
+<InferenceSnippet
+pipeline=fill-mask
+providersMapping={ {"hf-inference":{"modelId":"google-bert/bert-base-uncased","providerModelId":"google-bert/bert-base-uncased"} } }
+/>
+
+
+### Image Classification
+
+Find out more about Image Classification [here](../tasks/image_classification).
+
+<InferenceSnippet
+pipeline=image-classification
+providersMapping={ {"hf-inference":{"modelId":"Falconsai/nsfw_image_detection","providerModelId":"Falconsai/nsfw_image_detection"} } }
+/>
+
+
+### Image Segmentation
+
+Find out more about Image Segmentation [here](../tasks/image_segmentation).
+
+<InferenceSnippet
+pipeline=image-segmentation
+providersMapping={ {"hf-inference":{"modelId":"mattmdjaga/segformer_b2_clothes","providerModelId":"mattmdjaga/segformer_b2_clothes"} } }
+/>
+
+
+### Object Detection
+
+Find out more about Object Detection [here](../tasks/object_detection).
+
+<InferenceSnippet
+pipeline=object-detection
+providersMapping={ {"hf-inference":{"modelId":"facebook/detr-resnet-50","providerModelId":"facebook/detr-resnet-50"} } }
+/>
+
+
+### Question Answering
+
+Find out more about Question Answering [here](../tasks/question_answering).
+
+<InferenceSnippet
+pipeline=question-answering
+providersMapping={ {"hf-inference":{"modelId":"deepset/roberta-base-squad2","providerModelId":"deepset/roberta-base-squad2"} } }
+/>
+
+
+### Summarization
+
+Find out more about Summarization [here](../tasks/summarization).
+
+<InferenceSnippet
+pipeline=summarization
+providersMapping={ {"hf-inference":{"modelId":"facebook/bart-large-cnn","providerModelId":"facebook/bart-large-cnn"} } }
+/>
+
+
+### Table Question Answering
+
+Find out more about Table Question Answering [here](../tasks/table_question_answering).
+
+<InferenceSnippet
+pipeline=table-question-answering
+providersMapping={ {"hf-inference":{"modelId":"google/tapas-base-finetuned-wtq","providerModelId":"google/tapas-base-finetuned-wtq"} } }
 />


@@ -108,3 +188,33 @@ Find out more about Text To Image [here](../tasks/text_to_image).
 providersMapping={ {"hf-inference":{"modelId":"black-forest-labs/FLUX.1-dev","providerModelId":"black-forest-labs/FLUX.1-dev"} } }
 />

+
+### Token Classification
+
+Find out more about Token Classification [here](../tasks/token_classification).
+
+<InferenceSnippet
+pipeline=token-classification
+providersMapping={ {"hf-inference":{"modelId":"dslim/bert-base-NER","providerModelId":"dslim/bert-base-NER"} } }
+/>
+
+
+### Translation
+
+Find out more about Translation [here](../tasks/translation).
+
+<InferenceSnippet
+pipeline=translation
+providersMapping={ {"hf-inference":{"modelId":"google-t5/t5-base","providerModelId":"google-t5/t5-base"} } }
+/>
+
+
+### Zero Shot Classification
+
+Find out more about Zero Shot Classification [here](../tasks/zero_shot_classification).
+
+<InferenceSnippet
+pipeline=zero-shot-classification
+providersMapping={ {"hf-inference":{"modelId":"facebook/bart-large-mnli","providerModelId":"facebook/bart-large-mnli"} } }
+/>
+
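
For reference (not part of the diff above): a minimal sketch of how a user could call a few of the newly documented hf-inference tasks from Python, assuming a recent `huggingface_hub` release with `provider=` support. The image file paths are placeholders; the model IDs are the ones referenced in the mappings above.

```python
# Sketch only: calling a few of the hf-inference tasks documented above.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(provider="hf-inference", api_key=os.environ["HF_TOKEN"])

# Image classification (model from the mapping above); "cat.jpg" is a placeholder path.
for pred in client.image_classification("cat.jpg", model="Falconsai/nsfw_image_detection"):
    print(pred.label, pred.score)

# Object detection (model from the mapping above); "street.jpg" is a placeholder path.
for obj in client.object_detection("street.jpg", model="facebook/detr-resnet-50"):
    print(obj.label, obj.box)

# Summarization with the model from the mapping above.
summary = client.summarization("A long article to condense ...", model="facebook/bart-large-cnn")
print(summary.summary_text)
```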

docs/inference-providers/providers/nebius.md

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ Find out more about Chat Completion (LLM) [here](../tasks/chat-completion).

 <InferenceSnippet
 pipeline=text-generation
-providersMapping={ {"nebius":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"deepseek-ai/DeepSeek-V3-0324-fast"} } }
+providersMapping={ {"nebius":{"modelId":"Qwen/Qwen3-235B-A22B","providerModelId":"Qwen/Qwen3-235B-A22B"} } }
 conversational />


docs/inference-providers/register-as-a-provider.md

Lines changed: 22 additions & 0 deletions
@@ -457,6 +457,28 @@ class MyNewProviderTaskProviderHelper(TaskProviderHelper):
 - Go to [tests/test_inference_providers.py](https://github.com/huggingface/huggingface_hub/blob/main/tests/test_inference_providers.py) and add static tests for overridden methods.


+## 6. Add provider documentation
+
+Create a dedicated documentation page for your provider within the Hugging Face documentation. This page should contain a concise description of your provider's services, highlight the benefits for users, set expectations regarding performance or features, and include any relevant details such as pricing models or data retention policies. Essentially, provide any information that would be valuable to end users.
+
+Here's how to add your documentation page:
+
+- Provide Your Logo: You can send your logo files (separate light and dark mode versions) directly to us. This is often the simplest way. Alternatively, if you prefer, you can open a PR in the [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images/tree/main/inference-providers/logos) repository. If you choose to open a PR:
+    * Logos must be in `.png` format.
+    * Name them `{provider-name}-light.png` and `{provider-name}-dark.png`.
+    * Please ping `@Wauplin` and `@celinah` on the PR.
+- Create the Documentation File:
+    * Use an existing provider page as a template. For example, check out the template for [Fal AI](https://github.com/huggingface/hub-docs/blob/main/scripts/inference-providers/templates/providers/fal-ai.handlebars).
+    * The file should be located under `scripts/inference-providers/templates/providers/{your-provider-name}.handlebars`.
+- Submit the Documentation PR:
+    * Add your new `{provider-name}.handlebars` file.
+    * Update the [partners table](./index#partners) to include your company or product.
+    * Update the `_toctree.yml` file in the `docs/inference-providers/` directory to include your new documentation page in the "Providers" section, maintaining alphabetical order.
+    * Update the `scripts/inference-providers/scripts/generate.ts` file to include your provider in the `PROVIDERS_HUB_ORGS` and `PROVIDERS_URLS` constants, maintaining alphabetical order.
+    * Run `pnpm install` (if you haven't already) and then `pnpm run generate` at the root of the `scripts/inference-providers` repository to generate the documentation.
+    * Commit all your changes, including the manually edited files (provider page, `_toctree.yml`, partners table) and the files generated by the script.
+    * When you open the PR, please ping @Wauplin, @SBrandeis, @julien-c, and @hanouticelina for a review. If you need any assistance with these steps, please reach out – we're here to help you!
+
 ## FAQ

 **Question:** By default, in which order do we list providers in the settings page?

docs/inference-providers/tasks/audio-classification.md

Lines changed: 5 additions & 1 deletion
@@ -29,13 +29,17 @@ For more details about the `audio-classification` task, check out its [dedicated

 ### Recommended models

+- [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition): An emotion recognition model.

 Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=audio-classification&sort=trending).

 ### Using the API


-There are currently no snippet examples for the **audio-classification** task, as no providers support it yet.
+<InferenceSnippet
+pipeline=audio-classification
+providersMapping={ {"hf-inference":{"modelId":"firdhokk/speech-emotion-recognition-with-openai-whisper-large-v3","providerModelId":"firdhokk/speech-emotion-recognition-with-openai-whisper-large-v3"}} }
+/>


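
For reference (not part of the diff above): a minimal sketch of the audio-classification call that the new snippet documents, via the `huggingface_hub` Python client. The audio file name is a placeholder.

```python
# Sketch only: audio classification through the hf-inference provider,
# using the model referenced in the providersMapping above.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(provider="hf-inference", api_key=os.environ["HF_TOKEN"])

results = client.audio_classification(
    "speech_sample.wav",  # placeholder path to a local audio file
    model="firdhokk/speech-emotion-recognition-with-openai-whisper-large-v3",
)
for item in results:
    print(item.label, item.score)
```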

docs/inference-providers/tasks/automatic-speech-recognition.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ Explore all available models and find the one that suits you best [here](https:/

 <InferenceSnippet
 pipeline=automatic-speech-recognition
-providersMapping={ {"fal-ai":{"modelId":"openai/whisper-large-v3","providerModelId":"fal-ai/whisper"},"hf-inference":{"modelId":"openai/whisper-large-v3-turbo","providerModelId":"openai/whisper-large-v3-turbo"}} }
+providersMapping={ {"fal-ai":{"modelId":"openai/whisper-large-v3","providerModelId":"fal-ai/whisper"},"hf-inference":{"modelId":"openai/whisper-large-v3","providerModelId":"openai/whisper-large-v3"}} }
 />


docs/inference-providers/tasks/chat-completion.md

Lines changed: 8 additions & 2 deletions
@@ -24,6 +24,7 @@ This is a subtask of [`text-generation`](https://huggingface.co/docs/inference-p
 - [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it): A text-generation model trained to follow instructions.
 - [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct): Very powerful text generation model trained to follow instructions.
 - [microsoft/phi-4](https://huggingface.co/microsoft/phi-4): Powerful text generation model by Microsoft.
+- [Qwen/Qwen2.5-7B-Instruct-1M](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct-1M): Strong conversational model that supports very long instructions.
 - [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct): Text generation model used to write code.
 - [deepseek-ai/DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1): Powerful reasoning based open large language model.

@@ -59,7 +60,7 @@ The API supports:

 <InferenceSnippet
 pipeline=text-generation
-providersMapping={ {"cerebras":{"modelId":"meta-llama/Llama-3.3-70B-Instruct","providerModelId":"llama-3.3-70b"},"cohere":{"modelId":"CohereLabs/c4ai-command-a-03-2025","providerModelId":"command-a-03-2025"},"fireworks-ai":{"modelId":"Qwen/Qwen3-235B-A22B","providerModelId":"accounts/fireworks/models/qwen3-235b-a22b"},"hf-inference":{"modelId":"Qwen/Qwen3-235B-A22B","providerModelId":"Qwen/Qwen3-235B-A22B"},"hyperbolic":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"deepseek-ai/DeepSeek-V3-0324"},"nebius":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"deepseek-ai/DeepSeek-V3-0324-fast"},"novita":{"modelId":"Qwen/Qwen3-235B-A22B","providerModelId":"qwen/qwen3-235b-a22b-fp8"},"sambanova":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"DeepSeek-V3-0324"},"together":{"modelId":"deepseek-ai/DeepSeek-R1","providerModelId":"deepseek-ai/DeepSeek-R1"}} }
+providersMapping={ {"cerebras":{"modelId":"meta-llama/Llama-3.3-70B-Instruct","providerModelId":"llama-3.3-70b"},"cohere":{"modelId":"CohereLabs/c4ai-command-a-03-2025","providerModelId":"command-a-03-2025"},"fireworks-ai":{"modelId":"Qwen/Qwen3-235B-A22B","providerModelId":"accounts/fireworks/models/qwen3-235b-a22b"},"hf-inference":{"modelId":"Qwen/Qwen3-235B-A22B","providerModelId":"Qwen/Qwen3-235B-A22B"},"hyperbolic":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"deepseek-ai/DeepSeek-V3-0324"},"nebius":{"modelId":"Qwen/Qwen3-235B-A22B","providerModelId":"Qwen/Qwen3-235B-A22B"},"novita":{"modelId":"Qwen/Qwen3-235B-A22B","providerModelId":"qwen/qwen3-235b-a22b-fp8"},"sambanova":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"DeepSeek-V3-0324"},"together":{"modelId":"deepseek-ai/DeepSeek-R1","providerModelId":"deepseek-ai/DeepSeek-R1"}} }
 conversational />


@@ -69,7 +70,7 @@ conversational />

 <InferenceSnippet
 pipeline=image-text-to-text
-providersMapping={ {"cerebras":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"llama-4-scout-17b-16e-instruct"},"cohere":{"modelId":"CohereLabs/aya-vision-8b","providerModelId":"c4ai-aya-vision-8b"},"fireworks-ai":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"accounts/fireworks/models/llama4-scout-instruct-basic"},"hf-inference":{"modelId":"meta-llama/Llama-3.2-11B-Vision-Instruct","providerModelId":"meta-llama/Llama-3.2-11B-Vision-Instruct"},"hyperbolic":{"modelId":"Qwen/Qwen2.5-VL-7B-Instruct","providerModelId":"Qwen/Qwen2.5-VL-7B-Instruct"},"nebius":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it-fast"},"novita":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/llama-4-scout-17b-16e-instruct"},"sambanova":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"Llama-4-Scout-17B-16E-Instruct"},"together":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct"}} }
+providersMapping={ {"cerebras":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"llama-4-scout-17b-16e-instruct"},"cohere":{"modelId":"CohereLabs/aya-vision-32b","providerModelId":"c4ai-aya-vision-32b"},"fireworks-ai":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"accounts/fireworks/models/llama4-scout-instruct-basic"},"hf-inference":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it"},"hyperbolic":{"modelId":"Qwen/Qwen2.5-VL-7B-Instruct","providerModelId":"Qwen/Qwen2.5-VL-7B-Instruct"},"nebius":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it-fast"},"novita":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/llama-4-scout-17b-16e-instruct"},"sambanova":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"Llama-4-Scout-17B-16E-Instruct"},"together":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct"}} }
 conversational />


@@ -120,6 +121,11 @@ conversational />
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(#2)** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;type*** | _enum_ | Possible values: regex. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;value*** | _string_ | |
+| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(#3)** | _object_ | |
+| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;type*** | _enum_ | Possible values: json_schema. |
+| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;value*** | _object_ | |
+| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;name** | _string_ | Optional name identifier for the schema |
+| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;schema*** | _unknown_ | The actual JSON schema definition |
 | **seed** | _integer_ | |
 | **stop** | _string[]_ | Up to 4 sequences where the API will stop generating further tokens. |
 | **stream** | _boolean_ | |
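
For reference (not part of the diff above): a sketch of how the `json_schema` response_format variant documented in the new table rows could be passed through the `huggingface_hub` Python client. The model choice and schema are illustrative, and structured-output support varies by provider.

```python
# Sketch only: requesting structured output with the json_schema
# response_format variant added to the parameter table above.
# Whether a given provider/model honors it is provider-dependent.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(provider="hf-inference", api_key=os.environ["HF_TOKEN"])

response = client.chat_completion(
    model="Qwen/Qwen3-235B-A22B",  # illustrative model choice
    messages=[{"role": "user", "content": "Name a city and give its population."}],
    response_format={
        "type": "json_schema",       # enum value documented above
        "value": {
            "name": "city_info",     # optional name identifier for the schema
            "schema": {              # the actual JSON schema definition
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "population": {"type": "integer"},
                },
                "required": ["city", "population"],
            },
        },
    },
)
print(response.choices[0].message.content)
```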

docs/inference-providers/tasks/feature-extraction.md

Lines changed: 2 additions & 1 deletion
@@ -29,6 +29,7 @@ For more details about the `feature-extraction` task, check out its [dedicated p

 ### Recommended models

+- [thenlper/gte-large](https://huggingface.co/thenlper/gte-large): A powerful feature extraction model for natural language processing tasks.

 Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=feature-extraction&sort=trending).

@@ -37,7 +38,7 @@ Explore all available models and find the one that suits you best [here](https:/

 <InferenceSnippet
 pipeline=feature-extraction
-providersMapping={ {"hf-inference":{"modelId":"kyutai/mimi","providerModelId":"kyutai/mimi"},"sambanova":{"modelId":"intfloat/e5-mistral-7b-instruct","providerModelId":"E5-Mistral-7B-Instruct"}} }
+providersMapping={ {"hf-inference":{"modelId":"intfloat/multilingual-e5-large-instruct","providerModelId":"intfloat/multilingual-e5-large-instruct"},"sambanova":{"modelId":"intfloat/e5-mistral-7b-instruct","providerModelId":"E5-Mistral-7B-Instruct"}} }
 />

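
For reference (not part of the diff above): a minimal sketch of a feature-extraction call using the updated hf-inference mapping.

```python
# Sketch only: sentence embeddings via hf-inference with the model
# now referenced in the providersMapping above.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(provider="hf-inference", api_key=os.environ["HF_TOKEN"])

embedding = client.feature_extraction(
    "Inference Providers make it easy to run models.",
    model="intfloat/multilingual-e5-large-instruct",
)
print(embedding.shape)  # numpy array; shape depends on the model and server
```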

docs/inference-providers/tasks/fill-mask.md

Lines changed: 5 additions & 1 deletion
@@ -24,13 +24,17 @@ For more details about the `fill-mask` task, check out its [dedicated page](http

 ### Recommended models

+- [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base): A multilingual model trained on 100 languages.

 Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=fill-mask&sort=trending).

 ### Using the API


-There are currently no snippet examples for the **fill-mask** task, as no providers support it yet.
+<InferenceSnippet
+pipeline=fill-mask
+providersMapping={ {"hf-inference":{"modelId":"google-bert/bert-base-uncased","providerModelId":"google-bert/bert-base-uncased"}} }
+/>


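
For reference (not part of the diff above): a minimal sketch of the fill-mask call that the new snippet documents.

```python
# Sketch only: fill-mask via hf-inference with the model from the
# providersMapping above; bert-base-uncased uses the [MASK] token.
import os
from huggingface_hub import InferenceClient

client = InferenceClient(provider="hf-inference", api_key=os.environ["HF_TOKEN"])

predictions = client.fill_mask(
    "Paris is the [MASK] of France.",
    model="google-bert/bert-base-uncased",
)
for p in predictions:
    print(p.token_str, p.score)
```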
