
Commit 91d175e

Update gladia details in speech-to-text sections
1 parent 37eb5c5 commit 91d175e

4 files changed (+6 -4 lines)


fern/assistants/examples/multilingual-agent.mdx

Lines changed: 1 addition & 0 deletions
@@ -1155,6 +1155,7 @@ For a more structured approach with explicit language selection, see our compreh
  ## Provider Support Summary

  **Speech-to-Text (Transcription):**
+ - **Gladia**: Solaria, automatic language detection and code-switching.
  - **Deepgram**: Nova 2, Nova 3 with "Multi" language setting
  - **Google**: Latest models with "Multilingual" language setting
  - **All other providers**: Single language only, no automatic detection
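For readers configuring this through the API rather than the dashboard, the summary above maps to transcriber configurations roughly like the sketch below. The field names follow Vapi's transcriber object, but the exact model and language identifiers (`solaria-1`, `latest_long`) are assumptions to verify against the current API reference.

```ts
// Sketch: transcriber settings matching the provider summary above.
// Identifiers marked "assumed" are not confirmed by this commit.

// Gladia: Solaria with automatic language detection and code-switching
const gladiaTranscriber = {
  provider: "gladia",
  model: "solaria-1", // assumed identifier for the Solaria model
};

// Deepgram: Nova 2 / Nova 3 with the "Multi" language setting
const deepgramTranscriber = {
  provider: "deepgram",
  model: "nova-3",
  language: "multi",
};

// Google: latest models with the "Multilingual" language setting
const googleTranscriber = {
  provider: "google",
  model: "latest_long", // assumed identifier for the "Latest" model
  language: "Multilingual",
};

export { gladiaTranscriber, deepgramTranscriber, googleTranscriber };
```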

fern/customization/multilingual.mdx

Lines changed: 3 additions & 3 deletions
@@ -29,9 +29,9 @@ Set up your transcriber to automatically detect and process multiple languages.
  2. Create a new assistant or edit an existing one
  3. In the **Transcriber** section:
  - **Provider**: Select `Deepgram` (recommended), `Google`, or `Gladia`
- - **Model**: For Deepgram, choose `Nova 2` or `Nova 3`; for Google, choose `Latest`; for Gladia, choose your preferred Gladia model
- - **Language / Mode**: Set `Multi` (Deepgram), `Multilingual` (Google), or enable automatic language detection (Gladia)
- 4. **Other providers**: May require a single language and not auto-detect
+ - **Model**: For Deepgram, choose `Nova 2` or `Nova 3`; for Google, choose `Latest`; for Gladia, choose `Solaria`
+ - **Language / Mode**: Set `Multi` (Deepgram), `Multilingual` (Google), or choose the language you want to transcribe (Gladia)
+ 4. **Other providers**: May require a single languages and not auto-detect
  5. Click **Save** to apply the configuration
  </Tab>
  <Tab title="TypeScript (Server SDK)">
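The hunk ends at the TypeScript (Server SDK) tab of the same page. A minimal sketch of the equivalent SDK call is below, assuming the `@vapi-ai/server-sdk` client and its `assistants.update` method; the transcriber field values (e.g. `solaria-1`) are assumptions to check against the API reference, not something this commit specifies.

```ts
// Sketch: apply the dashboard steps above via the server SDK.
import { VapiClient } from "@vapi-ai/server-sdk";

const client = new VapiClient({ token: process.env.VAPI_API_KEY! });

async function applyGladiaTranscriber(assistantId: string) {
  // Mirrors the dashboard steps: Provider = Gladia, Model = Solaria,
  // Language = the single language you want to transcribe.
  return client.assistants.update(assistantId, {
    transcriber: {
      provider: "gladia",
      model: "solaria-1", // assumed Solaria model identifier
      language: "en",     // choose the language you want to transcribe
    },
  });
}
```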

fern/debugging.mdx

Lines changed: 1 addition & 0 deletions
@@ -83,6 +83,7 @@ Start with these immediate checks before diving deeper:
  - [Anthropic Status](https://status.anthropic.com/) for Anthropic language models
  - [ElevenLabs Status](https://status.elevenlabs.io/) for ElevenLabs voice synthesis
  - [Deepgram Status](https://status.deepgram.com/) for Deepgram speech-to-text
+ - [Gladia Status](https://status.gladia.io/) for Gladia speech-to-text
  - And other providers' status pages as needed
  </Step>
  </Steps>
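If you prefer to fold these checks into a script rather than opening each page, a rough sketch follows. It assumes each page exposes a Statuspage-style `/api/v2/status.json` endpoint, which is common for hosted status pages but not guaranteed for every provider listed.

```ts
// Sketch: poll the provider status pages from a health-check script.
const statusPages = [
  "https://status.anthropic.com",
  "https://status.elevenlabs.io",
  "https://status.deepgram.com",
  "https://status.gladia.io",
];

async function checkProviderStatus(): Promise<void> {
  for (const base of statusPages) {
    try {
      // Assumed Statuspage-style JSON endpoint; verify per provider.
      const res = await fetch(`${base}/api/v2/status.json`);
      const body = (await res.json()) as { status?: { description?: string } };
      console.log(`${base} -> ${body.status?.description ?? "unknown"}`);
    } catch (err) {
      console.log(`${base} -> unreachable (${String(err)})`);
    }
  }
}

checkProviderStatus();
```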

fern/quickstart/introduction.mdx

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ Every Vapi assistant combines three core technologies:
  </Card>
  </CardGroup>

- You have full control over each component, with dozens of providers and models to choose from; OpenAI, Anthropic, Google, Deepgram, ElevenLabs, and many, many more.
+ You have full control over each component, with dozens of providers and models to choose from; OpenAI, Anthropic, Google, Gladia, Deepgram, ElevenLabs, and many, many more.

  ## Two ways to build voice agents
