Commit e33e9b0

Wauplin authored and github-actions[bot] committed

Update API inference documentation (automated)

1 parent 7b57ea3 · commit e33e9b0

19 files changed: +326 −1056 lines

docs/api-inference/tasks/audio-classification.md

Lines changed: 3 additions & 36 deletions
@@ -40,54 +40,21 @@ Explore all available models and find the one that suits you best [here](https:/

<curl>

```bash
-curl https://api-inference.huggingface.co/models/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition \
-    -X POST \
-    --data-binary '@sample1.flac' \
-    -H "Authorization: Bearer hf_***"
+[object Object]
```

</curl>

<python>

```py
-import requests
-
-API_URL = "https://api-inference.huggingface.co/models/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition"
-headers = {"Authorization": "Bearer hf_***"}
-
-def query(filename):
-    with open(filename, "rb") as f:
-        data = f.read()
-    response = requests.post(API_URL, headers=headers, data=data)
-    return response.json()
-
-output = query("sample1.flac")
+[object Object]
```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.audio_classification).

</python>

<js>

```js
-async function query(filename) {
-    const data = fs.readFileSync(filename);
-    const response = await fetch(
-        "https://api-inference.huggingface.co/models/ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition",
-        {
-            headers: {
-                Authorization: "Bearer hf_***",
-                "Content-Type": "application/json",
-            },
-            method: "POST",
-            body: data,
-        }
-    );
-    const result = await response.json();
-    return result;
-}
-
-query("sample1.flac").then((response) => {
-    console.log(JSON.stringify(response));
-});
+[object Object]
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#audioclassification).
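The deleted snippets above post a FLAC file and receive classification results. A minimal sketch of handling such a response, assuming the task's usual shape of a list of `{"label", "score"}` dicts (the helper name and the sample scores here are illustrative, not taken from the commit):

```python
# Pick the highest-scoring label from an audio-classification response.
# Assumes the usual response shape: a list of {"label", "score"} dicts.
def top_label(predictions):
    best = max(predictions, key=lambda item: item["score"])
    return best["label"], best["score"]

# Illustrative response, not real model output.
sample_response = [
    {"label": "angry", "score": 0.04},
    {"label": "happy", "score": 0.83},
    {"label": "neutral", "score": 0.13},
]

label, score = top_label(sample_response)
print(label)  # happy
```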

docs/api-inference/tasks/automatic-speech-recognition.md

Lines changed: 3 additions & 36 deletions
@@ -41,54 +41,21 @@ Explore all available models and find the one that suits you best [here](https:/

<curl>

```bash
-curl https://api-inference.huggingface.co/models/openai/whisper-large-v3 \
-    -X POST \
-    --data-binary '@sample1.flac' \
-    -H "Authorization: Bearer hf_***"
+[object Object]
```

</curl>

<python>

```py
-import requests
-
-API_URL = "https://api-inference.huggingface.co/models/openai/whisper-large-v3"
-headers = {"Authorization": "Bearer hf_***"}
-
-def query(filename):
-    with open(filename, "rb") as f:
-        data = f.read()
-    response = requests.post(API_URL, headers=headers, data=data)
-    return response.json()
-
-output = query("sample1.flac")
+[object Object]
```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.automatic_speech-recognition).

</python>

<js>

```js
-async function query(filename) {
-    const data = fs.readFileSync(filename);
-    const response = await fetch(
-        "https://api-inference.huggingface.co/models/openai/whisper-large-v3",
-        {
-            headers: {
-                Authorization: "Bearer hf_***",
-                "Content-Type": "application/json",
-            },
-            method: "POST",
-            body: data,
-        }
-    );
-    const result = await response.json();
-    return result;
-}
-
-query("sample1.flac").then((response) => {
-    console.log(JSON.stringify(response));
-});
+[object Object]
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#automaticspeech-recognition).
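The removed ASR snippets return a JSON body carrying the transcription. A small sketch of unwrapping it, assuming the usual `{"text": ...}` success shape and an `{"error": ...}` shape on failure (both assumed, not stated in this commit):

```python
# Unwrap an automatic-speech-recognition response.
# Assumes {"text": ...} on success and {"error": ...} on failure.
def transcription(response):
    if "error" in response:
        raise RuntimeError(response["error"])
    return response["text"]

print(transcription({"text": "hello world"}))  # hello world
```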

docs/api-inference/tasks/chat-completion.md

Lines changed: 6 additions & 91 deletions
@@ -61,50 +61,21 @@ The API supports:

<curl>

```bash
-curl 'https://api-inference.huggingface.co/models/google/gemma-2-2b-it/v1/chat/completions' \
-    -H "Authorization: Bearer hf_***" \
-    -H 'Content-Type: application/json' \
-    -d '{
-        "model": "google/gemma-2-2b-it",
-        "messages": [{"role": "user", "content": "What is the capital of France?"}],
-        "max_tokens": 500,
-        "stream": false
-    }'
+[object Object]
```

</curl>

<python>

```py
-from huggingface_hub import InferenceClient
-
-client = InferenceClient(api_key="hf_***")
-
-for message in client.chat_completion(
-    model="google/gemma-2-2b-it",
-    messages=[{"role": "user", "content": "What is the capital of France?"}],
-    max_tokens=500,
-    stream=True,
-):
-    print(message.choices[0].delta.content, end="")
+[object Object],[object Object]
```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion).

</python>

<js>

```js
-import { HfInference } from "@huggingface/inference";
-
-const inference = new HfInference("hf_***");
-
-for await (const chunk of inference.chatCompletionStream({
-    model: "google/gemma-2-2b-it",
-    messages: [{ role: "user", content: "What is the capital of France?" }],
-    max_tokens: 500,
-})) {
-    process.stdout.write(chunk.choices[0]?.delta?.content || "");
-}
+[object Object],[object Object]
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#chatcompletion).

@@ -121,77 +92,21 @@ To use the JavaScript client, see `huggingface.js`'s [package reference](https:/

<curl>

```bash
-curl 'https://api-inference.huggingface.co/models/meta-llama/Llama-3.2-11B-Vision-Instruct/v1/chat/completions' \
-    -H "Authorization: Bearer hf_***" \
-    -H 'Content-Type: application/json' \
-    -d '{
-        "model": "meta-llama/Llama-3.2-11B-Vision-Instruct",
-        "messages": [
-            {
-                "role": "user",
-                "content": [
-                    {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
-                    {"type": "text", "text": "Describe this image in one sentence."}
-                ]
-            }
-        ],
-        "max_tokens": 500,
-        "stream": false
-    }'
+[object Object]
```

</curl>

<python>

```py
-from huggingface_hub import InferenceClient
-
-client = InferenceClient(api_key="hf_***")
-
-image_url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
-
-for message in client.chat_completion(
-    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
-    messages=[
-        {
-            "role": "user",
-            "content": [
-                {"type": "image_url", "image_url": {"url": image_url}},
-                {"type": "text", "text": "Describe this image in one sentence."},
-            ],
-        }
-    ],
-    max_tokens=500,
-    stream=True,
-):
-    print(message.choices[0].delta.content, end="")
+[object Object],[object Object]
```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion).

</python>

<js>

```js
-import { HfInference } from "@huggingface/inference";
-
-const inference = new HfInference("hf_***");
-const imageUrl = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg";
-
-for await (const chunk of inference.chatCompletionStream({
-    model: "meta-llama/Llama-3.2-11B-Vision-Instruct",
-    messages: [
-        {
-            "role": "user",
-            "content": [
-                {"type": "image_url", "image_url": {"url": imageUrl}},
-                {"type": "text", "text": "Describe this image in one sentence."},
-            ],
-        }
-    ],
-    max_tokens: 500,
-})) {
-    process.stdout.write(chunk.choices[0]?.delta?.content || "");
-}
+[object Object],[object Object]
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#chatcompletion).
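The removed vision examples build one user turn that mixes an `image_url` part with a `text` part. A sketch of that payload shape in plain Python, with the model name and image URL copied from the deleted curl snippet (the `vision_payload` helper itself is illustrative, not part of the docs):

```python
def vision_payload(model, image_url, prompt, max_tokens=500):
    # One user turn mixing an image_url part and a text part,
    # mirroring the JSON body of the removed curl example.
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": prompt},
                ],
            }
        ],
        "max_tokens": max_tokens,
        "stream": False,
    }

payload = vision_payload(
    "meta-llama/Llama-3.2-11B-Vision-Instruct",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
    "Describe this image in one sentence.",
)
print(payload["model"])
```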

docs/api-inference/tasks/feature-extraction.md

Lines changed: 3 additions & 36 deletions
@@ -40,54 +40,21 @@ Explore all available models and find the one that suits you best [here](https:/

<curl>

```bash
-curl https://api-inference.huggingface.co/models/thenlper/gte-large \
-    -X POST \
-    -d '{"inputs": "Today is a sunny day and I will get some ice cream."}' \
-    -H 'Content-Type: application/json' \
-    -H "Authorization: Bearer hf_***"
+[object Object]
```

</curl>

<python>

```py
-import requests
-
-API_URL = "https://api-inference.huggingface.co/models/thenlper/gte-large"
-headers = {"Authorization": "Bearer hf_***"}
-
-def query(payload):
-    response = requests.post(API_URL, headers=headers, json=payload)
-    return response.json()
-
-output = query({
-    "inputs": "Today is a sunny day and I will get some ice cream.",
-})
+[object Object]
```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.feature_extraction).

</python>

<js>

```js
-async function query(data) {
-    const response = await fetch(
-        "https://api-inference.huggingface.co/models/thenlper/gte-large",
-        {
-            headers: {
-                Authorization: "Bearer hf_***",
-                "Content-Type": "application/json",
-            },
-            method: "POST",
-            body: JSON.stringify(data),
-        }
-    );
-    const result = await response.json();
-    return result;
-}
-
-query({"inputs": "Today is a sunny day and I will get some ice cream."}).then((response) => {
-    console.log(JSON.stringify(response));
-});
+[object Object]
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#featureextraction).
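Feature extraction returns an embedding vector per input, and a common follow-up is comparing two embeddings by cosine similarity. A self-contained sketch with made-up low-dimensional vectors (real model output is much larger):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Illustrative 3-d vectors, not real embeddings.
u = [0.5, 0.1, -0.3]
v = [0.4, 0.2, -0.1]
print(round(cosine_similarity(u, v), 3))
```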

docs/api-inference/tasks/fill-mask.md

Lines changed: 3 additions & 36 deletions
@@ -36,54 +36,21 @@ Explore all available models and find the one that suits you best [here](https:/

<curl>

```bash
-curl https://api-inference.huggingface.co/models/google-bert/bert-base-uncased \
-    -X POST \
-    -d '{"inputs": "The answer to the universe is [MASK]."}' \
-    -H 'Content-Type: application/json' \
-    -H "Authorization: Bearer hf_***"
+[object Object]
```

</curl>

<python>

```py
-import requests
-
-API_URL = "https://api-inference.huggingface.co/models/google-bert/bert-base-uncased"
-headers = {"Authorization": "Bearer hf_***"}
-
-def query(payload):
-    response = requests.post(API_URL, headers=headers, json=payload)
-    return response.json()
-
-output = query({
-    "inputs": "The answer to the universe is [MASK].",
-})
+[object Object]
```

To use the Python client, see `huggingface_hub`'s [package reference](https://huggingface.co/docs/huggingface_hub/package_reference/inference_client#huggingface_hub.InferenceClient.fill_mask).

</python>

<js>

```js
-async function query(data) {
-    const response = await fetch(
-        "https://api-inference.huggingface.co/models/google-bert/bert-base-uncased",
-        {
-            headers: {
-                Authorization: "Bearer hf_***",
-                "Content-Type": "application/json",
-            },
-            method: "POST",
-            body: JSON.stringify(data),
-        }
-    );
-    const result = await response.json();
-    return result;
-}
-
-query({"inputs": "The answer to the universe is [MASK]."}).then((response) => {
-    console.log(JSON.stringify(response));
-});
+[object Object]
```

To use the JavaScript client, see `huggingface.js`'s [package reference](https://huggingface.co/docs/huggingface.js/inference/classes/HfInference#fillmask).
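The removed fill-mask snippets return a list of candidate completions for the `[MASK]` slot. A sketch of ranking them, assuming each candidate carries a `"score"` and a `"token_str"` (assumed shape; the sample values are illustrative, not real bert-base-uncased output):

```python
# Take the top-k candidate strings from a fill-mask response.
# Assumes a list of {"score", "token_str", ...} dicts.
def top_tokens(response, k=2):
    ranked = sorted(response, key=lambda item: item["score"], reverse=True)
    return [item["token_str"] for item in ranked[:k]]

# Illustrative response, not real model output.
sample = [
    {"score": 0.12, "token_str": "no"},
    {"score": 0.35, "token_str": "life"},
    {"score": 0.08, "token_str": "simple"},
]

print(top_tokens(sample))  # ['life', 'no']
```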
