Commit 29b124b

fixed some issue caught at package deploy, minor changes
1 parent 0c6ab25 commit 29b124b

File tree

4 files changed: +7 lines, -10 lines

CHANGELOG.md

Lines changed: 1 addition & 4 deletions

@@ -1,11 +1,8 @@
 # Changelog
 
-## 4.1.5
-
-- Removed the exposed field for configuring the package to use fetch_client instead of http_client manually withe is `isWeb` field, in favor of using `dart.library.js` and `dart.library.io` conditional imports to automatically detect the platform and use the appropriate client for it.
-
 ## 4.1.4
 
+- Removed the exposed field for configuring the package to use fetch_client instead of http_client manually withe is `isWeb` field, in favor of using `dart.library.js` and `dart.library.io` conditional imports to automatically detect the platform and use the appropriate client for it.
 - Exposed field for configuring the package to use fetch_client instead of http_client for making requests in web apps (flutter web, etc..)
 
 ## 4.1.3
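The changelog entry above refers to Dart's conditional-import mechanism, which selects an implementation file at compile time depending on which core libraries the target platform provides. A minimal sketch of that mechanism follows; the file names are hypothetical and do not reflect this package's actual layout.

```dart
// Hypothetical sketch of platform-conditional imports, the mechanism
// the changelog entry describes. File names are illustrative only.
//
// By default the dart:io-based client file is imported; when compiled
// for the web (where dart.library.js is available), the fetch-based
// client file is imported instead, so no manual `isWeb` flag is needed.
import 'http_io_client.dart'
    if (dart.library.js) 'http_fetch_client.dart';
```

This is why the manually set `isWeb` field became unnecessary: the compiler resolves the correct client per platform, and the same import works unchanged in Flutter, web, and server builds.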

README.md

Lines changed: 3 additions & 4 deletions

@@ -171,7 +171,7 @@ OpenAI.showLogs = true;
 
 This will only log the requests steps such when the request started and finished, when the decoding started...
 
-But if you want to log raw responses that are returned from the API (JSON, RAW...), you can set the `showResponsesLogs` to `true`:
+But if you want to log raw responses that are returned from the API (JSON, RAW...), you can set the `showResponsesLogs`:
 
 ```dart
 OpenAI.showResponsesLogs = true;
@@ -605,15 +605,14 @@ to get access to the translation API, and translate an audio file to english, you
 OpenAIAudioModel translation = await OpenAI.instance.audio.createTranslation(
   file: File(/* THE FILE PATH*/),
   model: "whisper-1",
-  responseFormat: OpenAIAudioResponseFo rmat.text,
+  responseFormat: OpenAIAudioResponseFormat.text,
 );
 
 // print the translation.
 print(translation.text);
 ```
 
-Learn more from [here](C:\projects\Flutter_and_Dart\openai
-).
+Learn more from [here](https://platform.openai.com/docs/api-reference/audio/createTranslation).
 
 </br>
 

lib/src/core/models/edit/sub_models/usage.dart

Lines changed: 2 additions & 1 deletion

@@ -18,7 +18,7 @@ final class OpenAIEditModelUsage {
   int get hashCode =>
       promptTokens.hashCode ^ completionTokens.hashCode ^ totalTokens.hashCode;
 
-  /// {@template openai_edit_model_usage}
+  /// {@macro openai_edit_model_usage}
   const OpenAIEditModelUsage({
     required this.promptTokens,
     required this.completionTokens,
@@ -27,6 +27,7 @@ final class OpenAIEditModelUsage {
 
   /// {@template openai_edit_model_usage}
   /// This method is used to convert a [Map<String, dynamic>] object to a [OpenAIEditModelUsage] object.
+  /// {@endtemplate}
   factory OpenAIEditModelUsage.fromMap(Map<String, dynamic> json) {
     return OpenAIEditModelUsage(
       promptTokens: json['prompt_tokens'],
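The `{@template ...}` / `{@endtemplate}` / `{@macro ...}` comments in this diff are dartdoc macros: a doc comment wrapped in a template block can be reused elsewhere by referencing its name with `{@macro}`, which is why the constructor's stray `{@template}` (a duplicate definition) is corrected to a `{@macro}` reference here. A minimal sketch of the pattern, with an illustrative class and template name rather than this package's actual code:

```dart
/// {@template usage_doc}
/// Holds token usage counts returned by the API.
/// {@endtemplate}
class Usage {
  /// {@macro usage_doc}
  // The constructor's docs reuse the class-level template above,
  // so the description is written (and maintained) only once.
  const Usage(this.totalTokens);

  final int totalTokens;
}
```

The missing `{@endtemplate}` added in the second hunk matters for the same reason: without it, dartdoc treats everything up to the end of the comment as part of the template body.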

pubspec.yaml

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 name: dart_openai
 description: Dart SDK for openAI Apis (GPT-3 & DALL-E), integrate easily the power of OpenAI's state-of-the-art AI models into their Dart applications.
-version: 4.1.5
+version: 4.1.4
 homepage: https://github.com/anasfik/openai
 repository: https://github.com/anasfik/openai
 documentation: https://github.com/anasfik/openai/blob/main/README.md
