docs/inference-providers/guides/building-first-app.md
8 additions & 2 deletions
@@ -177,9 +177,12 @@ We'll also need to implement the `transcribe` and `summarize` functions.
 <hfoptions id="transcription">
 <hfoption id="python">
 
-Now let's implement the transcription using OpenAI's `whisper-large-v3` model for fast, reliable speech processing.
+Now let's implement the transcription using OpenAI's `whisper-large-v3` model for fast, reliable speech processing.
+
 <Tip>
+
 We'll use the `auto` provider to automatically select the first available provider for the model. You can define your own priority list of providers in the [Inference Providers](https://huggingface.co/settings/inference-providers) page.

A second hunk applies the same change later in the file (around lines 204-210):

-Now let's implement the transcription using OpenAI's `whisper-large-v3` model for fast, reliable speech processing.
+Now let's implement the transcription using OpenAI's `whisper-large-v3` model for fast, reliable speech processing.
+
 <Tip>
+
 We'll use the `auto` provider to automatically select the first available provider for the model. You can define your own priority list of providers in the [Inference Providers](https://huggingface.co/settings/inference-providers) page.
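For context, here is a minimal sketch of what the `transcribe` function referenced in the hunk header might look like, assuming the `huggingface_hub` `InferenceClient` with `provider="auto"` and an `HF_TOKEN` environment variable; the client setup and the example audio path are illustrative assumptions, not taken from the diff itself:

```python
# Sketch only: assumes huggingface_hub with Inference Providers support
# and a Hugging Face token exposed as the HF_TOKEN environment variable.
import os

from huggingface_hub import InferenceClient

client = InferenceClient(
    provider="auto",                 # let Inference Providers pick the first available provider
    api_key=os.environ["HF_TOKEN"],  # assumed env var holding your Hugging Face token
)


def transcribe(audio_path: str) -> str:
    """Transcribe an audio file with OpenAI's whisper-large-v3."""
    result = client.automatic_speech_recognition(
        audio_path,
        model="openai/whisper-large-v3",
    )
    return result.text


if __name__ == "__main__":
    print(transcribe("meeting.wav"))  # hypothetical audio file
```

Because the provider is set to `auto`, the same code keeps working if your preferred provider is unavailable; you can reorder the fallback priority on the Inference Providers settings page linked above.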