Commit 141bb3d

Merge branch 'Vaibhavs10-patch-1' of https://github.com/huggingface/hub-docs into Vaibhavs10-patch-1
2 parents 275d327 + a25f324 commit 141bb3d

69 files changed: +1887 −452 lines changed
.github/workflows/api_inference_generate_documentation.yml

Lines changed: 64 additions & 0 deletions
@@ -0,0 +1,64 @@
+name: Update API Inference Documentation
+
+on:
+  workflow_dispatch:
+  schedule:
+    - cron: "0 3 * * *" # Every day at 3am
+
+concurrency:
+  group: api_inference_generate_documentation
+  cancel-in-progress: true
+
+jobs:
+  pull_request:
+    runs-on: ubuntu-latest
+    steps:
+      # Setup
+      - uses: actions/checkout@v3
+      - uses: actions/setup-node@v3
+        with:
+          node-version: "20"
+      - name: Install pnpm
+        uses: pnpm/action-setup@v2
+        with:
+          run_install: |
+            - recursive: true
+              cwd: ./scripts/api-inference
+              args: [--frozen-lockfile]
+          package_json_file: ./scripts/api-inference/package.json
+      - name: Update huggingface/tasks package
+        working-directory: ./scripts/api-inference
+        run: |
+          pnpm update @huggingface/tasks@latest
+      # Generate
+      - name: Generate API inference documentation
+        run: pnpm run generate
+        working-directory: ./scripts/api-inference
+
+      # Check changes
+      - name: Check changes
+        run: git status
+
+      # Create or update Pull Request
+      - name: Create Pull Request
+        uses: peter-evans/create-pull-request@v7
+        with:
+          token: ${{ secrets.TOKEN_INFERENCE_SYNC_BOT }}
+          commit-message: Update API inference documentation (automated)
+          branch: update-api-inference-docs-automated-pr
+          delete-branch: true
+          title: "[Bot] Update API inference documentation"
+          body: |
+            This PR automatically upgrades the `@huggingface/tasks` package and regenerates the API inference documentation by running:
+            ```sh
+            cd scripts/api-inference
+            pnpm update @huggingface/tasks@latest
+            pnpm run generate
+            ```
+
+            This PR was automatically created by the [Update API Inference Documentation workflow](https://github.com/huggingface/hub-docs/blob/main/.github/workflows/api_inference_generate_documentation.yml).
+
+            Please review the changes before merging.
+          reviewers: |
+            Wauplin
+            hanouticelina

docs/api-inference/_toctree.yml

Lines changed: 2 additions & 0 deletions
@@ -30,6 +30,8 @@
       title: Image Segmentation
     - local: tasks/image-to-image
       title: Image to Image
+    - local: tasks/image-text-to-text
+      title: Image-Text to Text
     - local: tasks/object-detection
       title: Object Detection
     - local: tasks/question-answering

docs/api-inference/index.md

Lines changed: 11 additions & 3 deletions
@@ -46,8 +46,16 @@ The documentation is organized into two sections:
 
 ---
 
-## Looking for custom support from the Hugging Face team?
+## Inference Playground
 
-<a target="_blank" href="https://huggingface.co/support">
-  <img alt="HuggingFace Expert Acceleration Program" src="https://cdn-media.huggingface.co/marketing/transformers/new-support-improved.png" style="max-width: 600px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
+If you want to get started quickly with [Chat Completion models](https://huggingface.co/models?inference=warm&other=conversational&sort=trending), use the Inference Playground to quickly test and compare models against your prompts.
+
+<a href="https://huggingface.co/playground" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/9_Tgf0Tv65srhBirZQMTp.png" style="max-width: 550px; width: 100%;"/></a>
+
+---
+
+## Serious about AI in your organisation? Build faster with the Hugging Face Enterprise Hub.
+
+<a target="_blank" href="https://huggingface.co/enterprise">
+  <img alt="Hugging Face Enterprise Hub" src="https://cdn-uploads.huggingface.co/production/uploads/5f17f0a0925b9863e28ad517/64zNL-65xyIpRqWHe2iD0.png" style="width: 100%; max-width: 550px; border: 1px solid #eee; border-radius: 4px; box-shadow: 0 1px 2px 0 rgba(0, 0, 0, 0.05);">
 </a><br>

docs/api-inference/rate-limits.md

Lines changed: 5 additions & 3 deletions
@@ -4,8 +4,10 @@ The Inference API has rate limits based on the number of requests. These rate li
 
 Serverless API is not meant to be used for heavy production applications. If you need higher rate limits, consider [Inference Endpoints](https://huggingface.co/docs/inference-endpoints) to have dedicated resources.
 
+You need to be authenticated (passing a token or through your browser) to use the Inference API.
+
 | User Tier | Rate Limit |
 |---------------------|---------------------------|
-| Unregistered Users | 1 request per hour |
-| Signed-up Users | 50 requests per hour |
-| PRO and Enterprise Users | 500 requests per hour |
+| Signed-up Users | 1,000 requests per day |
+| PRO and Enterprise Users | 20,000 requests per day |

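When a client exhausts these daily quotas the API rejects further requests (HTTP 429 is the conventional rate-limit status), so callers usually retry with backoff. A minimal, hypothetical sketch of such a retry wrapper — not part of this commit, names are illustrative:

```python
import time


def call_with_retry(send, max_retries=3, base_delay=2.0):
    """Retry `send` on HTTP 429 with exponential backoff.

    `send` is any zero-argument callable returning an object with a
    `status_code` attribute, e.g. a wrapped `requests.post` call.
    """
    response = send()
    for attempt in range(max_retries):
        if response.status_code != 429:  # not rate limited: done
            return response
        time.sleep(base_delay * (2 ** attempt))  # back off, then retry
        response = send()
    return response
```

Here `send` could be `lambda: requests.post(API_URL, headers=headers, data=data)`; for sustained production traffic, the text above points to Inference Endpoints instead.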
docs/api-inference/tasks/audio-classification.md

Lines changed: 6 additions & 6 deletions
@@ -29,8 +29,9 @@ For more details about the `audio-classification` task, check out its [dedicated
 
 ### Recommended models
 
+- [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition): An emotion recognition model.
 
-This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=audio-classification&sort=trending).
+Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=audio-classification&sort=trending).
 
 ### Using the API
 
@@ -39,19 +40,18 @@ This is only a subset of the supported models. Find the model that suits you bes
 
 <curl>
 ```bash
-curl https://api-inference.huggingface.co/models/<REPO_ID> \
+curl https://api-inference.huggingface.co/models/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition \
 	-X POST \
 	--data-binary '@sample1.flac' \
 	-H "Authorization: Bearer hf_***"
-
 ```
 </curl>
 
 <python>
 ```py
 import requests
 
-API_URL = "https://api-inference.huggingface.co/models/<REPO_ID>"
+API_URL = "https://api-inference.huggingface.co/models/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
 headers = {"Authorization": "Bearer hf_***"}
 
 def query(filename):
@@ -71,7 +71,7 @@ To use the Python client, see `huggingface_hub`'s [package reference](https://hu
 async function query(filename) {
 	const data = fs.readFileSync(filename);
 	const response = await fetch(
-		"https://api-inference.huggingface.co/models/<REPO_ID>",
+		"https://api-inference.huggingface.co/models/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition",
 		{
 			headers: {
 				Authorization: "Bearer hf_***"
@@ -104,7 +104,7 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | Payload | | |
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The input audio data as a base64-encoded string. If no `parameters` are provided, you can also provide the audio data as a raw bytes payload. |
-| **parameters** | _object_ | Additional inference parameters for Audio Classification |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;function_to_apply** | _enum_ | Possible values: sigmoid, softmax, none. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_k** | _integer_ | When specified, limits the output to the top K most probable classes. |

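The audio-classification payload table notes that raw bytes are accepted only when no `parameters` are sent; to pass `top_k` or `function_to_apply` alongside the audio, the audio must travel base64-encoded inside a JSON body. A small illustrative helper (hypothetical, not part of this commit):

```python
import base64
import json


def build_audio_payload(audio_bytes, top_k=None, function_to_apply=None):
    """Build a JSON request body with base64-encoded audio and optional parameters."""
    # Once parameters are included, the audio must be a base64 string, not raw bytes.
    payload = {"inputs": base64.b64encode(audio_bytes).decode("utf-8")}
    parameters = {}
    if top_k is not None:
        parameters["top_k"] = top_k
    if function_to_apply is not None:
        parameters["function_to_apply"] = function_to_apply
    if parameters:
        payload["parameters"] = parameters
    return json.dumps(payload)
```

The resulting string is what the snippets above would POST (with a `Content-Type: application/json` header) in place of the `--data-binary` raw-bytes upload.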
docs/api-inference/tasks/automatic-speech-recognition.md

Lines changed: 5 additions & 6 deletions
@@ -32,7 +32,7 @@ For more details about the `automatic-speech-recognition` task, check out its [d
 - [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3): A powerful ASR model by OpenAI.
 - [pyannote/speaker-diarization-3.1](https://huggingface.co/pyannote/speaker-diarization-3.1): Powerful speaker diarization model.
 
-This is only a subset of the supported models. Find the model that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=automatic-speech-recognition&sort=trending).
+Explore all available models and find the one that suits you best [here](https://huggingface.co/models?inference=warm&pipeline_tag=automatic-speech-recognition&sort=trending).
 
 ### Using the API
 
@@ -45,7 +45,6 @@ curl https://api-inference.huggingface.co/models/openai/whisper-large-v3 \
 	-X POST \
 	--data-binary '@sample1.flac' \
 	-H "Authorization: Bearer hf_***"
-
 ```
 </curl>
 
@@ -65,7 +64,7 @@ def query(filename):
 output = query("sample1.flac")
 ```
 
-To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.automatic_speech-recognition).
+To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.automatic_speech_recognition).
 </python>
 
 <js>
@@ -92,7 +91,7 @@ query("sample1.flac").then((response) => {
 });
 ```
 
-To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#automaticspeech-recognition).
+To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#automaticspeechrecognition).
 </js>
 
 </inferencesnippet>
 
@@ -106,9 +105,9 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/
 | Payload | | |
 | :--- | :--- | :--- |
 | **inputs*** | _string_ | The input audio data as a base64-encoded string. If no `parameters` are provided, you can also provide the audio data as a raw bytes payload. |
-| **parameters** | _object_ | Additional inference parameters for Automatic Speech Recognition |
+| **parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;return_timestamps** | _boolean_ | Whether to output corresponding timestamps with the generated text |
-| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;generate** | _object_ | Ad-hoc parametrization of the text generation process |
+| **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;generation_parameters** | _object_ | |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;temperature** | _number_ | The value used to modulate the next token probabilities. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_k** | _integer_ | The number of highest probability vocabulary tokens to keep for top-k-filtering. |
 | **&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;top_p** | _number_ | If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. |