Commit 6caea2c

Fix CI bot (#1687)
* Fix CI bot
* more logs
* remove logs + auth if no auth
* rename
* add
* Update Inference Providers documentation (automated) (#1688)

  Co-authored-by: Wauplin <[email protected]>

---------

Co-authored-by: HuggingFaceInfra <[email protected]>
Co-authored-by: Wauplin <[email protected]>
1 parent 97c8973 commit 6caea2c

File tree

11 files changed (+36, -41 lines)


.github/workflows/api_inference_generate_documentation.yml renamed to .github/workflows/generate_inference_providers_documentation.yml

Lines changed: 6 additions & 6 deletions
````diff
@@ -1,12 +1,12 @@
-name: Update Inference Providers Documentation
+name: Generate Inference Providers Documentation
 
 on:
   workflow_dispatch:
   schedule:
     - cron: "0 3 * * *" # Every day at 3am
 
 concurrency:
-  group: api_inference_generate_documentation
+  group: generate_inference_providers_documentation
   cancel-in-progress: true
 
 jobs:
@@ -29,7 +29,7 @@ jobs:
       - name: Update huggingface/tasks package
         working-directory: ./scripts/inference-providers
         run: |
-          pnpm update @huggingface/tasks@latest
+          pnpm update @huggingface/tasks@latest @huggingface/inference@latest
       # Generate
       - name: Generate Inference Providers documentation
         run: pnpm run generate
@@ -62,14 +62,14 @@ jobs:
           delete-branch: true
           title: "[Bot] Update Inference Providers documentation"
           body: |
-            This PR automatically upgrades the `@huggingface/tasks` package and regenerates the Inference Providers documentation by running:
+            This PR automatically upgrades the `@huggingface/tasks` and `@huggingface/inference` packages and regenerates the Inference Providers documentation by running:
             ```sh
             cd scripts/inference-providers
-            pnpm update @huggingface/tasks@latest
+            pnpm update @huggingface/tasks@latest @huggingface/inference@latest
             pnpm run generate
             ```
 
-            This PR was automatically created by the [Update Inference Providers Documentation workflow](https://github.com/huggingface/hub-docs/blob/main/.github/workflows/api_inference_generate_documentation.yml).
+            This PR was automatically created by the [Update Inference Providers Documentation workflow](https://github.com/huggingface/hub-docs/blob/main/.github/workflows/generate_inference_providers_documentation.yml).
 
             Please review the changes before merging.
           reviewers: |
````

docs/inference-providers/tasks/audio-classification.md

Lines changed: 4 additions & 1 deletion
```diff
@@ -38,7 +38,10 @@ Explore all available models and find the one that suits you best [here](https:/
 ### Using the API
 
 
-No snippet available for this task.
+<InferenceSnippet
+    pipeline=audio-classification
+    providersMapping={ {"hf-inference":{"modelId":"ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition","providerModelId":"ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"}} }
+/>
 
 
 
```
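For reference, the snippet restored above renders to a client call along these lines. This is a minimal sketch, not part of the commit, using the `@huggingface/inference` package bumped below; the `HF_TOKEN` environment variable and the audio file path are assumptions:

```ts
import { readFileSync } from "node:fs";
import { InferenceClient } from "@huggingface/inference";

// HF_TOKEN is an assumed env var holding a Hugging Face access token.
const client = new InferenceClient(process.env.HF_TOKEN);

// Model id taken from the providersMapping above; "sample.flac" is a
// placeholder for any local audio file.
const labels = await client.audioClassification({
  provider: "hf-inference",
  model: "ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition",
  data: new Blob([readFileSync("sample.flac")]),
});

console.log(labels); // e.g. [{ label: "...", score: 0.93 }, ...]
```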
docs/inference-providers/tasks/chat-completion.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -60,7 +60,7 @@ The API supports:
 
 <InferenceSnippet
     pipeline=text-generation
-    providersMapping={ {"cerebras":{"modelId":"meta-llama/Llama-3.3-70B-Instruct","providerModelId":"llama-3.3-70b"},"fireworks-ai":{"modelId":"Qwen/QwQ-32B","providerModelId":"accounts/fireworks/models/qwq-32b"},"hf-inference":{"modelId":"Qwen/QwQ-32B","providerModelId":"Qwen/QwQ-32B"},"hyperbolic":{"modelId":"Qwen/QwQ-32B","providerModelId":"Qwen/QwQ-32B"},"nebius":{"modelId":"Qwen/QwQ-32B","providerModelId":"Qwen/QwQ-32B-fast"},"novita":{"modelId":"Qwen/QwQ-32B","providerModelId":"qwen/qwq-32b"},"sambanova":{"modelId":"Qwen/QwQ-32B","providerModelId":"QwQ-32B"},"together":{"modelId":"deepseek-ai/DeepSeek-R1","providerModelId":"deepseek-ai/DeepSeek-R1"}} }
+    providersMapping={ {"cerebras":{"modelId":"meta-llama/Llama-3.3-70B-Instruct","providerModelId":"llama-3.3-70b"},"fireworks-ai":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"accounts/fireworks/models/deepseek-v3-0324"},"hf-inference":{"modelId":"Qwen/QwQ-32B","providerModelId":"Qwen/QwQ-32B"},"hyperbolic":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"deepseek-ai/DeepSeek-V3-0324"},"nebius":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"deepseek-ai/DeepSeek-V3-0324-fast"},"novita":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"deepseek/deepseek-v3-0324"},"sambanova":{"modelId":"deepseek-ai/DeepSeek-V3-0324","providerModelId":"DeepSeek-V3-0324"},"together":{"modelId":"deepseek-ai/DeepSeek-R1","providerModelId":"deepseek-ai/DeepSeek-R1"}} }
 conversational />
 
 
@@ -70,7 +70,7 @@ conversational />
 
 <InferenceSnippet
     pipeline=image-text-to-text
-    providersMapping={ {"fireworks-ai":{"modelId":"meta-llama/Llama-3.2-11B-Vision-Instruct","providerModelId":"accounts/fireworks/models/llama-v3p2-11b-vision-instruct"},"hf-inference":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it"},"hyperbolic":{"modelId":"Qwen/Qwen2.5-VL-7B-Instruct","providerModelId":"Qwen/Qwen2.5-VL-7B-Instruct"},"nebius":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it-fast"},"novita":{"modelId":"meta-llama/Llama-3.2-11B-Vision-Instruct","providerModelId":"meta-llama/llama-3.2-11b-vision-instruct"},"sambanova":{"modelId":"meta-llama/Llama-3.2-11B-Vision-Instruct","providerModelId":"Llama-3.2-11B-Vision-Instruct"},"together":{"modelId":"meta-llama/Llama-3.2-11B-Vision-Instruct","providerModelId":"meta-llama/Llama-3.2-11B-Vision-Instruct"}} }
+    providersMapping={ {"fireworks-ai":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"accounts/fireworks/models/llama4-scout-instruct-basic"},"hf-inference":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it"},"hyperbolic":{"modelId":"Qwen/Qwen2.5-VL-7B-Instruct","providerModelId":"Qwen/Qwen2.5-VL-7B-Instruct"},"nebius":{"modelId":"google/gemma-3-27b-it","providerModelId":"google/gemma-3-27b-it-fast"},"novita":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/llama-4-scout-17b-16e-instruct"},"sambanova":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"Llama-4-Scout-17B-16E-Instruct"},"together":{"modelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct","providerModelId":"meta-llama/Llama-4-Scout-17B-16E-Instruct"}} }
 conversational />
 
 
```
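Each entry in these mappings pairs a Hub `modelId` with the `providerModelId` the provider actually serves it under. As a reading aid (not part of the commit), a rendered snippet resolves to roughly the following sketch with the `@huggingface/inference` client; the token handling is an assumption:

```ts
import { InferenceClient } from "@huggingface/inference";

const client = new InferenceClient(process.env.HF_TOKEN);

// Callers pass the Hub id; the client resolves it to the provider-side
// id (e.g. "llama-3.3-70b" on Cerebras) using the same mapping.
const out = await client.chatCompletion({
  provider: "cerebras",
  model: "meta-llama/Llama-3.3-70B-Instruct",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(out.choices[0].message.content);
```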
docs/inference-providers/tasks/fill-mask.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -33,7 +33,7 @@ Explore all available models and find the one that suits you best [here](https:/
 
 <InferenceSnippet
     pipeline=fill-mask
-    providersMapping={ {"hf-inference":{"modelId":"Rostlab/prot_bert","providerModelId":"Rostlab/prot_bert"}} }
+    providersMapping={ {"hf-inference":{"modelId":"google-bert/bert-base-uncased","providerModelId":"google-bert/bert-base-uncased"}} }
 />
 
 
```
docs/inference-providers/tasks/image-segmentation.md

Lines changed: 1 addition & 4 deletions
```diff
@@ -32,10 +32,7 @@ Explore all available models and find the one that suits you best [here](https:/
 ### Using the API
 
 
-<InferenceSnippet
-    pipeline=image-segmentation
-    providersMapping={ {"hf-inference":{"modelId":"jonathandinu/face-parsing","providerModelId":"jonathandinu/face-parsing"}} }
-/>
+No snippet available for this task.
 
 
 
```
docs/inference-providers/tasks/question-answering.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -35,7 +35,7 @@ Explore all available models and find the one that suits you best [here](https:/
 
 <InferenceSnippet
     pipeline=question-answering
-    providersMapping={ {"hf-inference":{"modelId":"distilbert/distilbert-base-cased-distilled-squad","providerModelId":"distilbert/distilbert-base-cased-distilled-squad"}} }
+    providersMapping={ {"hf-inference":{"modelId":"deepset/gelectra-large-germanquad","providerModelId":"deepset/gelectra-large-germanquad"}} }
 />
 
 
```
docs/inference-providers/tasks/text-to-video.md

Lines changed: 1 addition & 6 deletions
```diff
@@ -55,15 +55,10 @@ Explore all available models and find the one that suits you best [here](https:/
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;seed** | _integer_ | Seed for the random number generator. |
 
 
-Some options can be configured by passing headers to the Inference API. Here are the available headers:
-
 | Headers | | |
 | :--- | :--- | :--- |
-| **authorization** | _string_ | Authentication header in the form `'Bearer: hf_****'` when `hf_****` is a personal user access token with Inference API permission. You can generate one from [your settings page](https://huggingface.co/settings/tokens). |
-| **x-use-cache** | _boolean, default to `true`_ | There is a cache layer on the inference API to speed up requests we have already seen. Most models can use those results as they are deterministic (meaning the outputs will be the same anyway). However, if you use a nondeterministic model, you can set this parameter to prevent the caching mechanism from being used, resulting in a real new query. Read more about caching [here](../parameters#caching]). |
-| **x-wait-for-model** | _boolean, default to `false`_ | If the model is not ready, wait for it instead of receiving 503. It limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. Read more about model availability [here](../overview#eligibility]). |
+| **authorization** | _string_ | Authentication header in the form `'Bearer: hf_****'` when `hf_****` is a personal user access token with "Inference Providers" permission. You can generate one from [your settings page](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained). |
 
-For more information about Inference API headers, check out the parameters [guide](../parameters).
 
 #### Response
 
```
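To make the remaining header row concrete, here is a minimal sketch (not from the commit) of a raw request carrying that authorization header; the endpoint URL and payload are illustrative placeholders:

```ts
// Illustrative endpoint under the HF router; only the Authorization
// header format is the point being shown here.
const res = await fetch(
  "https://router.huggingface.co/hf-inference/models/<model-id>",
  {
    method: "POST",
    headers: {
      // Fine-grained token with "Inference Providers" permission.
      Authorization: `Bearer ${process.env.HF_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs: "..." }),
  },
);
console.log(await res.json());
```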
docs/inference-providers/tasks/token-classification.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -36,7 +36,7 @@ Explore all available models and find the one that suits you best [here](https:/
 
 <InferenceSnippet
     pipeline=token-classification
-    providersMapping={ {"hf-inference":{"modelId":"dslim/bert-base-NER","providerModelId":"dslim/bert-base-NER"}} }
+    providersMapping={ {"hf-inference":{"modelId":"dbmdz/bert-large-cased-finetuned-conll03-english","providerModelId":"dbmdz/bert-large-cased-finetuned-conll03-english"}} }
 />
 
 
```
scripts/inference-providers/package.json

Lines changed: 2 additions & 2 deletions
```diff
@@ -14,8 +14,8 @@
   "author": "",
   "license": "ISC",
   "dependencies": {
-    "@huggingface/inference": "^3.6.1",
-    "@huggingface/tasks": "^0.18.4",
+    "@huggingface/inference": "^3.7.1",
+    "@huggingface/tasks": "^0.18.7",
     "@types/node": "^22.5.0",
     "handlebars": "^4.7.8",
     "node": "^20.17.0",
```

scripts/inference-providers/pnpm-lock.yaml

Lines changed: 11 additions & 16 deletions
Some generated files are not rendered by default.
