
Commit d92998f

Merge branch 'improve-inference-providers-documentation' of https://github.com/huggingface/hub-docs into improve-inference-providers-documentation
2 parents cd49e24 + 26f9027

File tree

1 file changed (+13 −15 lines)


docs/inference-providers/guides/building-first-app.md

Lines changed: 13 additions & 15 deletions
@@ -177,7 +177,10 @@ We'll also need to implement the `transcribe` and `summarize` functions.
 <hfoptions id="transcription">
 <hfoption id="python">
 
-Now let's implement the transcription using `fal.ai` and OpenAI's `whisper-large-v3` model for fast, reliable speech processing:
+Now let's implement the transcription using OpenAI's `whisper-large-v3` model for fast, reliable speech processing.
+<Tip>
+We'll use the `auto` provider to automatically select the first available provider for the model. You can define your own priority list of providers in the [Inference Providers](https://huggingface.co/settings/inference-providers) page.
+</Tip>
 
 ```python
 def transcribe_audio(audio_file_path):
@@ -193,12 +196,14 @@ def transcribe_audio(audio_file_path):
     return transcript.text
 ```
 
-Using the `auto` provider will automatically select the best provider for the model we're using.
 
 </hfoption>
 <hfoption id="javascript">
 
-We'll use the Hugging Face Inference client with automatic provider selection:
+Now let's implement the transcription using OpenAI's `whisper-large-v3` model for fast, reliable speech processing.
+<Tip>
+We'll use the `auto` provider to automatically select the first available provider for the model. You can define your own priority list of providers in the [Inference Providers](https://huggingface.co/settings/inference-providers) page.
+</Tip>
 
 ```javascript
 import { InferenceClient } from 'https://esm.sh/@huggingface/inference';
@@ -216,7 +221,6 @@ async function transcribe(file) {
 }
 ```
 
-Using the `auto` provider will automatically select the best provider for the model we're using.
 
 </hfoption>
 </hfoptions>
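
Pulling the fragments above together, here is a minimal sketch of what the full Python transcription function might look like, assuming `huggingface_hub` is installed and `HF_TOKEN` is set in the environment (the client setup itself isn't visible in these hunks):

```python
import os

from huggingface_hub import InferenceClient


def transcribe_audio(audio_file_path):
    # "auto" resolves to the first available provider for the model,
    # following the priority list configured in your Hugging Face settings.
    client = InferenceClient(provider="auto", api_key=os.environ["HF_TOKEN"])
    transcript = client.automatic_speech_recognition(
        audio=audio_file_path,
        model="openai/whisper-large-v3",
    )
    return transcript.text
```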
@@ -226,13 +230,8 @@ Using the `auto` provider will automatically select the best provider for the mo
 <hfoptions id="summarization">
 <hfoption id="python">
 
-Next, we'll use a powerful language model like `deepseek-ai/DeepSeek-R1-0528` from DeepSeek via an Inference Provider.
-
-<Tip>
-
-We'll use the `auto` provider to automatically select the best provider for the model. You can define your own priority list of providers in the [Inference Providers](https://huggingface.co/settings/inference-providers) page.
-
-</Tip>
+Next, we'll use a powerful language model like `deepseek-ai/DeepSeek-R1-0528` from DeepSeek via an Inference Provider, and just like in the previous step, we'll use the `auto` provider to automatically select the first available provider for the model.
+We will define a custom prompt to ensure the output is formatted as a summary with action items and decisions made:
 
 ```python
 def generate_summary(transcript):
@@ -262,12 +261,12 @@ def generate_summary(transcript):
     return response.choices[0].message.content
 ```
 
-Note, we're also defining a custom summary prompt to ensure the output is formatted as a summary with action items and decisions made.
 
 </hfoption>
 <hfoption id="javascript">
 
-We'll use the chat completion API with automatic provider selection again, and define a custom prompt to ensure the output is formatted as a summary with action items and decisions made:
+Next, we'll use a powerful language model like `deepseek-ai/DeepSeek-R1-0528` from DeepSeek via an Inference Provider, and just like in the previous step, we'll use the `auto` provider to automatically select the first available provider for the model.
+We will define a custom prompt to ensure the output is formatted as a summary with action items and decisions made:
 
 ```javascript
 async function summarize(transcript) {
@@ -302,7 +301,6 @@ async function summarize(transcript) {
 }
 ```
 
-We're using automatic provider selection which will choose the best available provider for the model.
 
 </hfoption>
 </hfoptions>
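
Likewise, a minimal sketch of the Python summarization function. The prompt wording here is illustrative (the guide's exact prompt is not shown in this diff), and `chat_completion` is one way to make the call; the guide may use the OpenAI-compatible `client.chat.completions.create` instead:

```python
import os

from huggingface_hub import InferenceClient


def generate_summary(transcript):
    client = InferenceClient(provider="auto", api_key=os.environ["HF_TOKEN"])
    # Illustrative prompt: ask for a summary plus action items and decisions.
    prompt = (
        "Summarize the following meeting transcript. Include a short summary, "
        "a list of action items, and the decisions made.\n\n"
        f"{transcript}"
    )
    response = client.chat_completion(
        model="deepseek-ai/DeepSeek-R1-0528",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1024,
    )
    return response.choices[0].message.content
```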
@@ -405,7 +403,7 @@ To deploy, we'll need to create a new Space and upload our files.
 
 1. **Create a new Space**: Go to [huggingface.co/new-space](https://huggingface.co/new-space)
 2. **Choose Gradio SDK** and make it public
-3. **Upload your files**: Upload `app.py` and `requirements.txt`
+3. **Upload your files**: Upload `app.py`
 4. **Add your token**: In Space settings, add `HF_TOKEN` as a secret (get it from [your settings](https://huggingface.co/settings/tokens))
 5. **Launch**: Your app will be live at `https://huggingface.co/spaces/your-username/your-space-name`
 
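
The `app.py` referenced in step 3 is not part of this diff; as a rough, hypothetical sketch of how the two functions could be wired into the Gradio app these steps assume:

```python
import gradio as gr

# Hypothetical wiring; assumes transcribe_audio() and generate_summary()
# from the sketches above are defined in the same app.py.

def process_meeting(audio_file_path):
    transcript = transcribe_audio(audio_file_path)
    summary = generate_summary(transcript)
    return transcript, summary


demo = gr.Interface(
    fn=process_meeting,
    inputs=gr.Audio(type="filepath", label="Meeting recording"),
    outputs=[gr.Textbox(label="Transcript"), gr.Textbox(label="Summary")],
)

if __name__ == "__main__":
    demo.launch()
```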
