From 07f56cd6a333dd79756b1cc0698748c43a7c4293 Mon Sep 17 00:00:00 2001
From: sergiopaniego
Date: Fri, 4 Jul 2025 17:08:13 +0200
Subject: [PATCH] Updated Tips to show links and code

---
 docs/inference-providers/guides/building-first-app.md | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/docs/inference-providers/guides/building-first-app.md b/docs/inference-providers/guides/building-first-app.md
index b69d567ee..a8ebd8311 100644
--- a/docs/inference-providers/guides/building-first-app.md
+++ b/docs/inference-providers/guides/building-first-app.md
@@ -177,9 +177,12 @@ We'll also need to implement the `transcribe` and `summarize` functions.
 
-Now let's implement the transcription using OpenAI's `whisper-large-v3` model for fast, reliable speech processing.
+Now let's implement the transcription using OpenAI's `whisper-large-v3` model for fast, reliable speech processing.
+
+We'll use the `auto` provider to automatically select the first available provider for the model. You can define your own priority list of providers in the [Inference Providers](https://huggingface.co/settings/inference-providers) page.
+
 
 ```python
@@ -200,9 +203,12 @@ def transcribe_audio(audio_file_path):
 
-Now let's implement the transcription using OpenAI's `whisper-large-v3` model for fast, reliable speech processing.
+Now let's implement the transcription using OpenAI's `whisper-large-v3` model for fast, reliable speech processing.
+
+We'll use the `auto` provider to automatically select the first available provider for the model. You can define your own priority list of providers in the [Inference Providers](https://huggingface.co/settings/inference-providers) page.
+
 
 ```javascript
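
For context on the tip added in both hunks, below is a minimal sketch of how a `transcribe_audio` helper could use the `auto` provider with `openai/whisper-large-v3` via `huggingface_hub`'s `InferenceClient`. The guide's actual implementation may differ; the `HF_TOKEN` environment variable and the sample file name are assumptions.

```python
# Sketch only: illustrates the `auto` provider described in the added tip.
# Assumes a fine-grained token is available in the HF_TOKEN environment variable.
import os

from huggingface_hub import InferenceClient


def transcribe_audio(audio_file_path: str) -> str:
    """Transcribe an audio file with openai/whisper-large-v3 via Inference Providers."""
    client = InferenceClient(
        provider="auto",  # pick the first available provider for this model
        api_key=os.environ["HF_TOKEN"],
    )
    result = client.automatic_speech_recognition(
        audio_file_path,
        model="openai/whisper-large-v3",
    )
    return result.text


if __name__ == "__main__":
    # Hypothetical sample file for a quick local check.
    print(transcribe_audio("meeting.wav"))
```

With `provider="auto"`, the client resolves the first provider serving the model according to your priority list on the [Inference Providers](https://huggingface.co/settings/inference-providers) settings page, so the same code keeps working if you reorder or add providers there.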