-# NEW: ChatGPT API is [added](#chat-chatgpt) to the library and can be used directly.
+# NEW: ChatGPT & Whisper APIs are [added](#chat-chatgpt) to the library and can be used directly.
 
 <br>
 <p align="center">
@@ -46,7 +46,7 @@ Please note that this client SDK connects directly to [OpenAI APIs](https://plat
 - Developer-friendly.
 - `Stream` functionality for completions API & fine-tune events API.
 
-## 👑 Code Progress (94%)
+## 👑 Code Progress (100%)
 
 - [x] [Authentication](#authentication)
 - [x] [Models](#models)
@@ -57,13 +57,13 @@ Please note that this client SDK connects directly to [OpenAI APIs](https://plat
 - [x] [Edits](#edits)
 - [x] [Images](#images)
 - [x] [Embeddings](#embeddings)
-- [ ] [Audio](#audio)
+- [x] [Audio](#audio)
 - [x] [Files](#files)
 - [x] [Fine-tunes](#fine-tunes)
   - [x] With events `Stream` responses.
 - [x] [Moderation](#moderations)
 
-## 💫 Testing Progress (94%)
+## 💫 Testing Progress (100%)
 
 - [x] [Authentication](#authentication)
 - [x] [Models](#models)
@@ -72,7 +72,7 @@ Please note that this client SDK connects directly to [OpenAI APIs](https://plat
 - [x] [Edits](#edits)
 - [x] [Images](#images)
 - [x] [Embeddings](#embeddings)
-- [ ] [Audio](#audio)
+- [x] [Audio](#audio)
 - [x] [Files](#files)
 - [x] [Fine-tunes](#fine-tunes)
 - [x] [Moderation](#moderations) </br>
@@ -244,6 +244,8 @@ print(chatStreamEvent); // ...
 })
 ```
 
+</br>
+
 ## Edits
 
 ### Create edit
@@ -328,6 +330,32 @@ OpenAIEmbeddingsModel embeddings = await OpenAI.instance.embedding.create(
 
 </br>
 
+## Audio
+
+### Create transcription
+
+For transcribing an audio `File`, you can use the `createTranscription()` method directly by providing the `file` property:
+
+```dart
+OpenAIAudioModel transcription = await OpenAI.instance.audio.createTranscription(
+  file: /* THE AUDIO FILE HERE */,
+  model: "whisper-1",
+);
+```
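+
+The `file` parameter expects a Dart `File`; a minimal sketch of supplying it via `dart:io` (the path below is hypothetical):
+
+```dart
+import 'dart:io';
+
+// Point this at a real audio file on disk.
+File audioFile = File("path/to/audio.mp3");
+```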
+
+### Create translation
+
+To access the translation API and translate an audio file to English, you can use the `createTranslation()` method by providing the `file` property:
+
+```dart
+OpenAIAudioModel translation = await OpenAI.instance.audio.createTranslation(
+  file: /* THE AUDIO FILE HERE */,
+  model: "whisper-1",
+);
+```
+
+</br>
+
 ## Files
 
 Files are used to upload documents that can be used with features like [Fine-tuning](#fine-tunes).