
Commit 985d09d

One way translation - update images (#1735)
Parent: 4100fb9

2 files changed: 11 additions & 7 deletions

examples/voice_solutions/one_way_translation_using_realtime_api.mdx

Lines changed: 11 additions & 6 deletions
@@ -10,13 +10,14 @@ A real-world use case for this demo is a multilingual, conversational translatio
 Let's explore the main functionalities and code snippets that illustrate how the app works. You can find the code in the [accompanying repo](https://github.com/openai/openai-cookbook/tree/main/examples/voice_solutions/one_way_translation_using_realtime_api/README.md
 ) if you want to run the app locally.
 
-### High Level Architecture Overview
+## High Level Architecture Overview
 
 This project has two applications - a speaker and listener app. The speaker app takes in audio from the browser, forks the audio and creates a unique Realtime session for each language and sends it to the OpenAI Realtime API via WebSocket. Translated audio streams back and is mirrored via a separate WebSocket server to the listener app. The listener app receives all translated audio streams simultaneously, but only the selected language is played. This architecture is designed for a POC and is not intended for a production use case. Let's dive into the workflow!
 
-![Architecture](translation_images/Realtime_flow_diagram.png)
+![Architecture](https://github.com/openai/openai-cookbook/blob/main/examples/voice_solutions/translation_images/Realtime_flow_diagram.png?raw=true)
+
+## Step 1: Language & Prompt Setup
 
-### Step 1: Language & Prompt Setup
 
 We need a unique stream for each language - each language requires a unique prompt and session with the Realtime API. We define these prompts in `translation_prompts.js`.
 
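For context on the `languageConfigs` array referenced in the next hunk, here is a minimal sketch of what the per-language prompt setup in `translation_prompts.js` could look like. The prompt wording, field names, and exported identifiers below are illustrative assumptions, not the repo's actual contents:

```js
// translation_prompts.js — illustrative sketch; the real prompts differ.
// Each target language gets its own system prompt so each Realtime session
// translates into exactly one language.
export const frenchPrompt =
  'You are a simultaneous interpreter. Translate everything you hear into French, and reply with audio in French only.';

export const spanishPrompt =
  'You are a simultaneous interpreter. Translate everything you hear into Spanish, and reply with audio in Spanish only.';

// One entry per target language; the app opens one Realtime session per entry.
export const languageConfigs = [
  { code: 'fr', instructions: frenchPrompt },
  { code: 'es', instructions: spanishPrompt },
];
```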
@@ -37,7 +38,8 @@ const languageConfigs = [
 
 ## Step 2: Setting up the Speaker App
 
-![SpeakerApp](translation_images/SpeakerApp.png)
+![SpeakerApp](https://github.com/openai/openai-cookbook/blob/main/examples/voice_solutions/translation_images/SpeakerApp.png?raw=true)
+
 
 We need to handle the setup and management of client instances that connect to the Realtime API, allowing the application to process and stream audio in different languages. `clientRefs` holds a map of `RealtimeClient` instances, each associated with a language code (e.g., 'fr' for French, 'es' for Spanish) representing each unique client connection to the Realtime API.
 
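A minimal sketch of how the `clientRefs` map described in this hunk could be built, assuming the `@openai/realtime-api-beta` `RealtimeClient` (the client used by the OpenAI Realtime console examples) connecting through a relay server; the hook name and wiring here are hypothetical:

```js
import { useRef } from 'react';
import { RealtimeClient } from '@openai/realtime-api-beta';
import { languageConfigs } from './translation_prompts.js';

// Hypothetical hook: builds one RealtimeClient per target language,
// keyed by language code ('fr', 'es', ...).
function useTranslationClients(relayServerUrl) {
  const clientRefs = useRef(
    languageConfigs.reduce((map, { code, instructions }) => {
      const client = new RealtimeClient({ url: relayServerUrl });
      // Give each session its language-specific translation prompt;
      // the config is applied when the client connects.
      client.updateSession({ instructions });
      map[code] = client;
      return map;
    }, {})
  );
  return clientRefs;
}
```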
@@ -94,7 +96,8 @@ const connectConversation = useCallback(async () => {
 };
 ```
 
-### Step 3: Audio Streaming
+## Step 3: Audio Streaming
+
 
 Sending audio with WebSockets requires work to manage the inbound and outbound PCM16 audio streams ([more details on that](https://platform.openai.com/docs/guides/realtime-model-capabilities#handling-audio-with-websockets)). We abstract that using wavtools, a library for both recording and streaming audio data in the browser. Here we use `WavRecorder` for capturing audio in the browser.
 
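Step 3 describes capturing browser audio with wavtools' `WavRecorder` and forking it to every language session. A minimal sketch of that capture-and-fan-out, assuming the 24 kHz PCM16 defaults from the Realtime console examples and the `clientRefs` map from Step 2; the exact call shapes are assumptions, not the repo's exact code (the file's real `startRecording` appears in the next hunk):

```js
import { WavRecorder } from 'wavtools';

const wavRecorder = new WavRecorder({ sampleRate: 24000 });

const startRecording = async () => {
  // Request microphone access and set up the capture pipeline.
  await wavRecorder.begin();
  // Each chunk arrives as PCM16; data.mono is the mono channel.
  await wavRecorder.record((data) => {
    // Fork the same audio to every per-language Realtime session
    // (assumes clientRefs from Step 2 is in scope).
    for (const client of Object.values(clientRefs.current)) {
      client.appendInputAudio(data.mono);
    }
  });
};
```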
@@ -114,7 +117,9 @@ const startRecording = async () => {
 };
 ```
 
-### Step 4: Showing Transcripts
+
+## Step 4: Showing Transcripts
+
 
 We listen for `response.audio_transcript.done` events to update the transcripts of the audio. These input transcripts are generated by the Whisper model in parallel to the GPT-4o Realtime inference that is doing the translations on raw audio.
 
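A sketch of the transcript listener described in this hunk, assuming the `@openai/realtime-api-beta` client, where raw server events are re-emitted as `server.<event type>` on `client.realtime`; `setTranscripts` is a hypothetical React state setter:

```js
// Subscribe one listener per language session.
for (const [code, client] of Object.entries(clientRefs.current)) {
  client.realtime.on('server.response.audio_transcript.done', (event) => {
    // event.transcript holds the completed transcript for this response.
    setTranscripts((prev) => [...prev, { lang: code, text: event.transcript }]);
  });
}
```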
registry.yaml

Lines changed: 0 additions & 1 deletion
@@ -1847,4 +1847,3 @@
 - audio
 - speech
 
-