articles/ai-foundry/openai/includes/evaluation-rest.md
You can create an evaluation by specifying a data source config and the evaluation testing criteria:

```bash
curl -X POST "$AZURE_OPENAI_ENDPOINT/openai/v1/evals?api-version=preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "aoai-evals: preview" \
  -d '{
    "name": "Math Quiz",
    "data_source_config": {
```
You can add new evaluation runs to the evaluation job you had created in the previous step:

```bash
curl -X POST "$AZURE_OPENAI_ENDPOINT/openai/v1/evals/{eval-id}/runs?api-version=preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "aoai-evals: preview"
```
### Update Existing Evaluation

```bash
curl -X POST "$AZURE_OPENAI_ENDPOINT/openai/v1/evals/{eval-id}?api-version=preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "aoai-evals: preview"
```
## Evaluation Results
Once evaluation is complete, you can fetch the evaluation results for the evaluation job:

```bash
curl -X GET "$AZURE_OPENAI_ENDPOINT/openai/v1/evals/{eval-id}?api-version=preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "aoai-evals: preview"
```
### Single Evaluation Run Result
Just like how you can create a single evaluation run under an existing evaluation job, you can also fetch the result of a single evaluation run:

```bash
curl -X GET "$AZURE_OPENAI_ENDPOINT/openai/v1/evals/{eval-id}/runs/{run-id}?api-version=preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "aoai-evals: preview"
```
In addition to the parameters in the examples above, you can optionally add these parameters for more specific drill-downs into the evaluation results:
To see the list of all evaluation runs that were created under an evaluation job:

```bash
curl -X GET "$AZURE_OPENAI_ENDPOINT/openai/v1/evals/{eval-id}/runs?api-version=preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "aoai-evals: preview"
```
### Output Details for a Run
You can view the individual outputs generated from the graders for a single evaluation run:

```bash
curl -X GET "$AZURE_OPENAI_ENDPOINT/openai/v1/evals/{eval-id}/runs/{run-id}/output_items?api-version=preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "aoai-evals: preview"
```
If you have a particular output result you would like to see, you can specify the output item ID:

```bash
curl -X GET "$AZURE_OPENAI_ENDPOINT/openai/v1/evals/{eval-id}/runs/{run-id}/output_items/{output-item-id}?api-version=preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "aoai-evals: preview"
```