The main thing here is that it isn't an apples-to-apples comparison between Nova-3 batch vs Flux streaming. Nova-3 in batch mode has full utterance context, while Flux streaming has to make low-latency, incremental decisions. That difference can lead to transcription discrepancies, and because the models are optimized for different latency vs. accuracy tradeoffs, we don’t expect full parity between them.
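To make the context difference concrete, here is a toy sketch (all words and scores are invented for illustration; this is not Deepgram's decoder). It shows how a decoder that must commit to a word using only the left context — as a low-latency streaming model does — can land on a different word than a decoder that sees the whole utterance before deciding:

```python
# Hand-made bigram plausibility scores; higher = more plausible.
BIGRAM = {
    ("want", "to"): 2.0,
    ("want", "two"): 1.0,
    ("want", "too"): 0.5,
    ("to", "apples"): 0.1,
    ("two", "apples"): 2.0,
    ("too", "apples"): 0.1,
}

def score(prev, word, nxt=None):
    """Score a candidate word given its left neighbor and, if available,
    its right neighbor (batch mode only)."""
    s = BIGRAM.get((prev, word), 0.0)
    if nxt is not None:  # batch mode: future context also counts
        s += BIGRAM.get((word, nxt), 0.0)
    return s

# Acoustically ambiguous homophones for the word after "want".
CANDIDATES = ["to", "two", "too"]

# Streaming: must commit now, using only the words already decoded.
streaming_choice = max(CANDIDATES, key=lambda w: score("want", w))

# Batch: the full utterance ("... want ___ apples") is available.
batch_choice = max(CANDIDATES, key=lambda w: score("want", w, "apples"))

print(streaming_choice)  # "to"  — best given only the left context
print(batch_choice)      # "two" — the future word "apples" flips the decision
```

The streaming decoder commits to "to" before "apples" arrives, while the batch decoder's access to the right context lets it pick "two" — the same kind of divergence you can see between Flux streaming and Nova-3 batch output on identical audio.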

This message was sent by nick kaimakis from Deepgram, via our community automation.

Answer selected by deepgram-community