apps/docs/content/docs/en/tools/stt.mdx
87 additions & 7 deletions
@@ -11,15 +11,32 @@ import { BlockInfoCard } from "@/components/ui/block-info-card"
 />
 
 {/* MANUAL-CONTENT-START:intro */}
-Transcribe speech to text using state-of-the-art AI models from leading providers. The Sim Speech-to-Text (STT) tools allow you to convert audio and video files into accurate transcripts, supporting multiple languages, timestamps, and optional translation.
+Transcribe speech to text using the latest AI models from leading providers. Sim's Speech-to-Text (STT) tools turn audio and video into accurate, timestamped, and optionally translated transcripts, supporting a wide range of languages along with advanced features such as diarization and speaker identification.
 
-Supported providers:
+**Supported Providers & Models:**
 
-- **[OpenAI Whisper](https://platform.openai.com/docs/guides/speech-to-text/overview)**: Advanced open-source STT model from OpenAI. Supports models such as `whisper-1` and handles a wide variety of languages and audio formats.
-- **[Deepgram](https://deepgram.com/)**: Real-time and batch STT API with deep learning models like `nova-3`, `nova-2`, and `whisper-large`. Offers features like diarization, intent recognition, and industry-specific tuning.
-- **[ElevenLabs](https://elevenlabs.io/)**: Known for high-quality speech AI, ElevenLabs provides STT models focused on accuracy and natural language understanding for numerous languages and dialects.
-
-Choose the provider and model best suited to your task—whether fast, production-grade transcription (Deepgram), highly accurate multi-language capability (Whisper), or advanced understanding and language coverage (ElevenLabs).
+- **[OpenAI Whisper](https://platform.openai.com/docs/guides/speech-to-text/overview)**:
+OpenAI's Whisper is an open-source deep learning model renowned for its robustness across languages and audio conditions. It supports models such as `whisper-1`, excelling in transcription, translation, and tasks that demand strong generalization, and it is widely used in research and as a baseline for comparative evaluation.
+- **[Deepgram](https://deepgram.com/)**:
+Based in San Francisco, Deepgram offers scalable, production-grade speech recognition APIs for developers and enterprises. Its models include `nova-3`, `nova-2`, and `whisper-large`, providing real-time and batch transcription with high accuracy, multi-language support, automatic punctuation, intelligent diarization, and call analytics for use cases ranging from telephony to media production.
+- **[ElevenLabs](https://elevenlabs.io/)**:
+A leader in voice AI, ElevenLabs is best known for premium voice synthesis and recognition. Its STT product delivers high-accuracy, natural understanding of numerous languages, dialects, and accents, and recent models are optimized for clarity and speaker distinction, suiting both creative and accessibility scenarios.
+- **AssemblyAI**:
+AssemblyAI provides API-driven, highly accurate speech recognition with features such as auto chaptering, topic detection, summarization, sentiment analysis, and content moderation alongside transcription. Its proprietary models, including `Conformer-2`, power some of the largest media, call center, and compliance applications in the industry.
+- **Google Cloud Speech-to-Text**:
+Google's enterprise-grade Speech-to-Text API supports over 125 languages and variants, offering high accuracy and features such as real-time streaming, word-level confidence, speaker diarization, automatic punctuation, custom vocabulary, and domain-specific tuning. Models such as `latest_long`, `video`, and other domain-optimized variants are available and deployed for global scalability.
+- **[AWS Transcribe](https://aws.amazon.com/transcribe/)** (Amazon Web Services):
+AWS Transcribe leverages Amazon's cloud infrastructure to deliver robust speech recognition as an API. It supports multiple languages and features such as speaker identification, custom vocabulary, channel identification (for call center audio), and medical-specific transcription. Popular models include `standard` and domain-specific variations, and the service is a natural fit for organizations already on Amazon's cloud.
+
+**How to Choose:**
+Select the provider and model that fits your application, whether you need fast, enterprise-ready transcription with extra analytics (Deepgram, AssemblyAI, Google, AWS), high versatility and open-source access (OpenAI Whisper), or advanced speaker and contextual understanding (ElevenLabs). Consider pricing, language coverage, accuracy, and any special features (such as summarization, chaptering, or sentiment analysis) you might need.
+
+For more details on capabilities, pricing, feature highlights, and fine-tuning options, refer to each provider's official documentation via the links above.
 {/* MANUAL-CONTENT-END */}
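The "How to Choose" guidance above can be encoded as a small lookup. This is a purely illustrative Python sketch: the requirement keys are invented for this example and are not Sim configuration values.

```python
# Hypothetical mapping from a coarse requirement to suggested providers,
# following the "How to Choose" guidance; the keys are made up for this sketch.
RECOMMENDATIONS = {
    "enterprise_analytics": ["Deepgram", "AssemblyAI", "Google", "AWS Transcribe"],
    "open_source": ["OpenAI Whisper"],
    "speaker_understanding": ["ElevenLabs"],
}

def recommend_providers(requirement):
    """Return the providers suggested for a given (hypothetical) requirement."""
    return RECOMMENDATIONS.get(requirement, [])
```

In practice the choice also depends on pricing, language coverage, and feature needs, so treat a table like this as a starting point rather than a rule.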
@@ -48,6 +65,8 @@ Transcribe audio to text using OpenAI Whisper
 |`language`| string | No | Language code \(e.g., "en", "es", "fr"\) or "auto" for auto-detection |
 |`timestamps`| string | No | Timestamp granularity: none, sentence, or word |
 |`translateToEnglish`| boolean | No | Translate audio to English |
+|`prompt`| string | No | Optional text to guide the model's style or continue a previous audio segment. Helps with proper nouns and context. |
+|`temperature`| number | No | Sampling temperature between 0 and 1. Higher values make output more random, lower values more focused and deterministic. |
 
 #### Output
@@ -57,7 +76,6 @@ Transcribe audio to text using OpenAI Whisper
 |`segments`| array | Timestamped segments |
 |`language`| string | Detected or specified language |
 |`duration`| number | Audio duration in seconds |
-|`confidence`| number | Overall confidence score |
 
 ### `stt_deepgram`
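The output fields above can be post-processed directly. The sketch below estimates speaking rate from `segments` and `duration`; the exact per-segment fields (`start`, `end`, `text`) are an assumption made for illustration, not a documented shape.

```python
# Illustrative consumer of an STT result dict shaped like the output table
# above; the segment field names are assumed for this example.
def words_per_minute(result):
    """Estimate speaking rate from a transcription result dict."""
    words = sum(len(seg["text"].split()) for seg in result["segments"])
    minutes = result["duration"] / 60
    return words / minutes if minutes else 0.0

sample = {
    "segments": [
        {"start": 0.0, "end": 2.5, "text": "Hello and welcome"},
        {"start": 2.5, "end": 5.0, "text": "to the show"},
    ],
    "language": "en",
    "duration": 30,
}
```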
@@ -114,6 +132,68 @@ Transcribe audio to text using ElevenLabs
 |`duration`| number | Audio duration in seconds |
 |`confidence`| number | Overall confidence score |
 
+### `stt_assemblyai`
+
+Transcribe audio to text using AssemblyAI with advanced NLP features