Commit ac1416a
feat: implement Evals endpoints and token counting; fix image and embedding models
- README: update API coverage table with timestamps; add usage snippets for `responses.getInputTokenCounts` and Evals; minor Conversations example fix
- Embeddings: use `EncodingFormat` enum and send `encoding_format` correctly; expose the enum in the model
- Evals base + instance: wire full CRUD + runs sub-endpoints; add `EndpointInterface`; rename the listing method to `list()`; implement client calls and query params
- Responses: add `getInputTokenCounts()` endpoint implementation
- Image model: type-safe enums for `outputFormat`/`quality`/`size`; add `fromValue()` for `OpenAIImageSize`; parse `Usage` structures
- Eval run data source: implement `toMap()` and the missing branch for `responses`
- Request data source config: add `toMap()` with `@mustBeOverridden` and override it in subclasses
- Conversations: doc comment typo fix

Note: the analyzer still reports unrelated example/test issues; the library changes compile in isolation.
1 parent ab7d2f8 commit ac1416a

File tree

15 files changed: +417 −111 lines


README.md

Lines changed: 110 additions & 36 deletions

````diff
@@ -93,32 +93,32 @@ print(chatCompletion.choices.first.message.content);
 
 ## 📊 API Coverage (2025)
 
-| API feature | Status | Details |
-|--------------|--------|----------|
-| **📋 [Responses](#-responses)** | ✅ Complete | excluding stream functionality |
-| **💭 [Conversations](#-conversations)** | ✅ Complete | All |
-| **🎵 [Audio](#-audio)** | ✅ Complete | All |
-| **🎬 [Videos](#-videos)** | 🗓️ planned | - |
-| **🎨 [Images](#-images)** | ✅ Complete | All |
-| **🎨 [Images Streaaming](#-images-streaaming)** | 🗓️ planned | - |
-| **📊 [Embeddings](#-embeddings)** | ✅ Complete | All |
-| **⚖️ [Evals](#️-evals)** | 🗓️ planned | - |
-| **🔧 [Fine-tuning](#-fine-tuning)** | 🧩 70% Complete | missing newer endpoints |
-| **📊 [Graders](#-graders)** | ✅ Complete | All |
-| **📦 [Batch](#-batch)** | 🗓️ planned | - |
-| **📁 [Files](#-files)** | ✅ Complete | All |
-| **📤 [Uploads](#-uploads)** | 🗓️ planned | - |
-| **🤖 [Models](#-models)** | ✅ Complete | All |
-| **🛡️ [Moderation](#️-moderation)** | ✅ Complete | All|
-| **🗃️ [Vector Stores](#️-vector-stores)** | 🗓️ planned | - |
-| **💬 ChatKit** | ❌ NOt planned | Beta feature |
-| **📦 [Containers](#-containers)** | 🗓️ planned | - |
-| **🕛 [Real-time](#-real-time)** | 🗓️ planned | - |
-| **💬 [Chat Completions](#-chat-completions)** | ✅ Complete | excluding stream functionality |
-| **🤖 Assistants** | NOt planned | beta feature |
-| **🤖 [Administration](#-administration)** | 🗓️ planned | - |
-| **📝 Completions (Legacy)** | ✅ Complete | Create, Stream, Log probabilities |
-| **✏️ Edits (Legacy)** | ✅ Complete | Text editing (deprecated by OpenAI) |
+| API feature | Status | Details | Last Updated |
+|--------------|--------|----------| --------------|
+| **📋 [Responses](#-responses)** | ✅ Complete | All | 11-08-2025 17:33:39 |
+| **💭 [Conversations](#-conversations)** | ✅ Complete | All | 11-08-2025 17:38:56 |
+| **🎵 [Audio](#-audio)** | ✅ Complete | All | 11-08-2025 17:42:54 |
+| **🎬 [Videos](#-videos)** | 🗓️ planned | - ||
+| **🎨 [Images](#-images)** | ✅ Complete | All | 11-08-2025 17:53:45 |
+| **🎨 [Images Streaaming](#-images-streaaming)** | 🗓️ planned | - ||
+| **📊 [Embeddings](#-embeddings)** | ✅ Complete | All | 11-08-2025 17:56:30 |
+| **⚖️ [Evals](#️-evals)** | ✅ Complete | All | 11-08-2025 21:04:36 |
+| **🔧 [Fine-tuning](#-fine-tuning)** | 🧩 70% Complete | missing newer endpoints ||
+| **📊 [Graders](#-graders)** | ✅ Complete | All ||
+| **📦 [Batch](#-batch)** | 🗓️ planned | - ||
+| **📁 [Files](#-files)** | ✅ Complete | All ||
+| **📤 [Uploads](#-uploads)** | 🗓️ planned | - ||
+| **🤖 [Models](#-models)** | ✅ Complete | All ||
+| **🛡️ [Moderation](#️-moderation)** | ✅ Complete | All||
+| **🗃️ [Vector Stores](#️-vector-stores)** | 🗓️ planned | - ||
+| **💬 ChatKit** | ❌ NOt planned | Beta feature ||
+| **📦 [Containers](#-containers)** | 🗓️ planned | - ||
+| **🕛 [Real-time](#-real-time)** | 🗓️ planned | - ||
+| **💬 [Chat Completions](#-chat-completions)** | ✅ Complete | excluding stream functionality ||
+| **🤖 Assistants** | NOt planned | beta feature ||
+| **🤖 [Administration](#-administration)** | 🗓️ planned | - ||
+| **📝 Completions (Legacy)** | ✅ Complete | Create, Stream, Log probabilities ||
+| **✏️ Edits (Legacy)** | ✅ Complete | Text editing (deprecated by OpenAI) ||
 
 ---
 
@@ -146,12 +146,6 @@ await OpenAI.instance.responses.delete(
   responseId: "response-id-here",
 );
 
-// Update response
-OpenAIResponseModel updatedResponse = await OpenAI.instance.responses.update(
-  "response-id",
-  // ... update parameters
-);
-
 // Cancel response
 OpenAiResponse response = await OpenAI.instance.responses.cancel(
   responseId: "response-id-here",
@@ -163,16 +157,24 @@ OpenAiResponseInputItemsList response = await OpenAI.instance.responses.listInpu
   limit: 10,
 );
 
+
+// Get input token counts
+int inputTokens = await OpenAI.instance.responses.getInputTokenCounts(
+  model: "gpt-5",
+  input: "Your input text here",
+);
 ```
 
 #### 💭 Conversations
 
 ```dart
 // Create conversation
 OpenAIConversation conversation = await OpenAI.instance.conversations.create(
-  items: [
-    // ...
-  ],
+  items: [{
+    "type": "message",
+    "role": "user",
+    "content": "Hello!",
+  }],
   metadata: {
     "key": "value",
     "another_key": "another_value",
@@ -318,7 +320,79 @@ OpenAIEmbeddingsModel embedding = await OpenAI.instance.embedding.create(
 
 #### ⚖️ Evals
 
-// (To be implemented)
+```dart
+// Create eval
+OpenAIEval eval = await OpenAI.instance.evals.create(
+  dataSourceConfig: RequestDatatSourceConfig.logs(),
+);
+
+// Get eval
+OpenAIEval eval = await OpenAI.instance.evals.get(
+  evalId: "eval-id-here",
+);
+
+// Update eval
+OpenAIEval updatedEval = await OpenAI.instance.evals.update(
+  evalId: "eval-id-here",
+  metadata: {
+    "key": "new_value",
+  },
+);
+
+// Delete eval
+await OpenAI.instance.evals.delete(
+  evalId: "eval-id-here",
+);
+
+// List evals
+OpenAIEvalsList evalsList = await OpenAI.instance.evals.list(
+  limit: 10,
+);
+
+// Get eval runs.
+OpenAIEvalRunsList evalRuns = await OpenAI.instance.evals.getRuns(
+  evalId: "eval-id-here",
+  limit: 3,
+);
+
+// Get Eval run
+OpenAIEvalRun evalRun = await OpenAI.instance.evals.getRun(
+  evalId: "eval-id-here",
+  runId: "run-id-here",
+);
+
+// Create run
+OpenAIEvalRun createdRun = await OpenAI.instance.evals.createRun(
+  evalId: "eval-id-here",
+  dataSource: EvalRunDataSource.jsonl(),
+);
+
+// Cancel run
+OpenAIEvalRun canceledRun = await OpenAI.instance.evals.cancel(
+  evalId: "eval-id-here",
+  runId: "run-id-here",
+);
+
+// Delete run
+await OpenAI.instance.evals.deleteRun(
+  evalId: "eval-id-here",
+  runId: "run-id-here",
+);
+
+// Get output item of eval run.
+OpenAIEvalRunOutputItem outputItem = await OpenAI.instance.evals.getEvalRunOutputItem(
+  evalId: "eval-id-here",
+  runId: "run-id-here",
+  outputItemIdn: "item-id-here",
+);
+
+// Get eval run output items.
+OpenAIEvalRunOutputItemsList outputItems = await OpenAI.instance.evals.getEvalRunOutputItems(
+  evalId: "eval-id-here",
+  runId: "run-id-here",
+  limit: 10,
+);
+```
 
 #### 🔧 Fine-tuning
 
````
lib/src/core/base/embeddings/interfaces/create.dart

Lines changed: 1 addition & 1 deletion

```diff
@@ -8,7 +8,7 @@ abstract class CreateInterface {
     required input,
     String? user,
     int? dimensions,
-    String? encodingFormat,
+    EncodingFormat? encodingFormat,
     http.Client? client,
   });
 }
```
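With the interface above switched from a raw `String?` to the typed `EncodingFormat` enum, an embeddings call can request base64 payloads without magic strings. A minimal sketch, assuming this package's `OpenAI.instance.embedding.create` API as shown in the README and that `EncodingFormat` is exported alongside the embeddings model; the model name and input text are illustrative:

```dart
import 'package:dart_openai/dart_openai.dart';

Future<void> main() async {
  OpenAI.apiKey = "YOUR_API_KEY";

  // The typed enum replaces a raw "float"/"base64" string and is
  // serialized to the `encoding_format` request field.
  final embeddings = await OpenAI.instance.embedding.create(
    model: "text-embedding-3-small",
    input: "The quick brown fox",
    encodingFormat: EncodingFormat.base64,
  );

  print(embeddings.data.length);
}
```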

lib/src/core/base/evals/evals.dart

Lines changed: 3 additions & 1 deletion

```diff
@@ -1,11 +1,13 @@
-import 'package:dart_openai/src/core/base/conversations/interfaces/update.dart';
+import 'package:dart_openai/src/core/base/entity/interfaces/enpoint.dart';
+import 'package:dart_openai/src/core/base/evals/interfaces/update.dart';
 import 'package:dart_openai/src/core/base/evals/interfaces/cancel.dart';
 import 'package:dart_openai/src/core/base/evals/interfaces/create.dart';
 import 'package:dart_openai/src/core/base/evals/interfaces/delete.dart';
 import 'package:dart_openai/src/core/base/evals/interfaces/get.dart';
 
 abstract class OpenAIEvalsBase
     implements
+        EndpointInterface,
         CreateInterface,
         GetInterface,
         UpdateInterface,
```

lib/src/core/base/evals/interfaces/get.dart

Lines changed: 1 addition & 1 deletion

```diff
@@ -10,7 +10,7 @@ abstract class GetInterface {
     required String evalId,
   });
 
-  Future<OpenAIEvalsList> getEvals({
+  Future<OpenAIEvalsList> list({
     String? after,
     int? limit,
     String? order,
```
Lines changed: 10 additions & 0 deletions

```diff
@@ -0,0 +1,10 @@
+import 'package:dart_openai/src/core/models/evals/eval.dart';
+
+abstract class UpdateInterface {
+  /// Update an existing eval.
+  Future<OpenAIEval> update({
+    required String evalId,
+    Map<String, dynamic>? metadata,
+    String? name,
+  });
+}
```

lib/src/core/base/responses/interface/get.dart

Lines changed: 14 additions & 0 deletions

```diff
@@ -15,4 +15,18 @@ abstract class GetInterface {
     int? limit,
     String? order,
   });
+
+  Future<int> getInputTokenCounts(
+    conversation,
+    input,
+    String? instructions,
+    String? model,
+    bool? parallelToolCalls,
+    String? previousResponseId,
+    reasoning,
+    text,
+    toolChoice,
+    List? tools,
+    String? truncation,
+  );
 }
```

lib/src/core/enum.dart

Lines changed: 20 additions & 2 deletions

```diff
@@ -19,6 +19,23 @@ enum OpenAIImageSize {
         return "1024x1792";
     }
   }
+
+  static OpenAIImageSize fromValue(String value) {
+    switch (value) {
+      case "256x256":
+        return OpenAIImageSize.size256;
+      case "512x512":
+        return OpenAIImageSize.size512;
+      case "1024x1024":
+        return OpenAIImageSize.size1024;
+      case "1792x1024":
+        return OpenAIImageSize.size1792Horizontal;
+      case "1024x1792":
+        return OpenAIImageSize.size1792Vertical;
+      default:
+        throw ArgumentError("Invalid value for OpenAIImageSize: $value");
+    }
+  }
 }
 
 enum OpenAIImageStyle {
@@ -44,10 +61,11 @@ enum OpenAIImageResponseFormat {
   }
 }
 
-
 enum OpenAIImageInputFidelity {
-  high, low;
+  high,
+  low;
 }
+
 enum OpenAIAudioTimestampGranularity { word, segment }
 
 enum OpenAIAudioResponseFormat { json, text, srt, verbose_json, vtt }
```
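The new `fromValue()` factory is the inverse of the enum's string getter, which makes parsing size strings echoed back by the API type-safe. A short sketch of the round trip, assuming the package exports `OpenAIImageSize`; the call sites are illustrative:

```dart
import 'package:dart_openai/dart_openai.dart';

void main() {
  // Map an API string back to the enum.
  final size = OpenAIImageSize.fromValue("1792x1024");
  assert(size == OpenAIImageSize.size1792Horizontal);

  // Unknown strings fail fast with an ArgumentError rather than
  // silently falling back to a default size.
  try {
    OpenAIImageSize.fromValue("800x600");
  } on ArgumentError catch (e) {
    print(e.message);
  }
}
```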

lib/src/core/models/embedding/embedding.dart

Lines changed: 2 additions & 0 deletions

```diff
@@ -6,6 +6,8 @@ import 'sub-models/usage.dart';
 export 'sub-models/data.dart';
 export 'sub-models/usage.dart';
 
+enum EncodingFormat { float, base64 }
+
 /// {@template openai_embeddings_model}
 /// This class is used to represent an OpenAI embeddings request.
 /// {@endtemplate}
```

lib/src/core/models/evals/eval_run_data_source.dart

Lines changed: 15 additions & 0 deletions

```diff
@@ -78,6 +78,21 @@ class EvalRunDataSource {
       );
     }
   }
+
+  Map<String, dynamic> toMap() {
+    switch (type) {
+      case 'jsonl':
+        return (this as JsonlRunDataSource).toJson();
+      case 'completions':
+        return (this as CompletionsRunDataSource).toJson();
+      case 'responses':
+        return (this as ResponsesRunDataSource).toJson();
+      default:
+        throw UnimplementedError(
+          'EvalRunDataSource type $type is not implemented',
+        );
+    }
+  }
 }
 
 class JsonlRunDataSource extends EvalRunDataSource {
```
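The base-class `toMap()` above switches on the `type` discriminator and forwards to the matching subclass's `toJson()`, so callers can serialize any run data source through the base type. A sketch of how create-run serialization might look; the `EvalRunDataSource.jsonl()` factory is taken from this commit's README snippet, and any required parameters it takes are omitted here:

```dart
import 'dart:convert';
import 'package:dart_openai/dart_openai.dart';

void main() {
  // Declared as the base type, serialized via the dispatching toMap().
  final EvalRunDataSource source = EvalRunDataSource.jsonl();
  final Map<String, dynamic> body = source.toMap();

  // An unrecognized `type` would throw UnimplementedError instead of
  // producing a half-built request payload.
  print(jsonEncode(body));
}
```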

lib/src/core/models/evals/req_data_source_config.dart

Lines changed: 11 additions & 0 deletions

```diff
@@ -1,3 +1,5 @@
+import 'package:meta/meta.dart';
+
 class RequestDatatSourceConfig {
   final String type;
 
@@ -24,6 +26,13 @@ class RequestDatatSourceConfig {
       metadata: metadata,
     );
   }
+
+  @mustBeOverridden
+  Map<String, dynamic> toMap() {
+    return {
+      "type": type,
+    };
+  }
 }
 
 class RequestCustomDataSourceConfig extends RequestDatatSourceConfig {
@@ -36,6 +45,7 @@ class RequestCustomDataSourceConfig extends RequestDatatSourceConfig {
     super.type = "custom",
   });
 
+  @override
   Map<String, dynamic> toMap() {
     return {
       "type": type,
@@ -54,6 +64,7 @@ class RequestLogsDataSourceConfig extends RequestDatatSourceConfig {
     super.type = "logs",
   });
 
+  @override
   Map<String, dynamic> toMap() {
     return {
       "type": type,
```
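`@mustBeOverridden` (from `package:meta`) makes the analyzer warn when a subclass of `RequestDatatSourceConfig` fails to declare its own `toMap()`, which is exactly the gap the overrides above close. A self-contained sketch of the pattern, using a simplified stand-in base class and a hypothetical subclass (neither is part of the library):

```dart
import 'package:meta/meta.dart';

// Simplified stand-in for the library's base class.
class RequestDatatSourceConfig {
  final String type;
  const RequestDatatSourceConfig({required this.type});

  // Subclasses that omit their own toMap() get an analyzer warning.
  @mustBeOverridden
  Map<String, dynamic> toMap() => {"type": type};
}

// Hypothetical subclass, mirroring RequestCustomDataSourceConfig above.
class RequestStoredCompletionsConfig extends RequestDatatSourceConfig {
  const RequestStoredCompletionsConfig() : super(type: "stored_completions");

  @override
  Map<String, dynamic> toMap() => {"type": type};
}

void main() {
  print(const RequestStoredCompletionsConfig().toMap());
}
```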
