articles/ai-services/openai/how-to/batch.md
4 lines changed: 0 additions & 4 deletions
@@ -166,10 +166,6 @@ The `2024-10-01-preview` REST API adds two new response headers:
* `deployment-enqueued-tokens` - An approximate token count for your jsonl file, calculated immediately after the batch request is submitted. This value is an estimate based on the number of characters and is not the true token count.
* `deployment-maximum-enqueued-tokens` - The total enqueued tokens available for this global batch model deployment.
-**Example:**
-
-
-
These response headers are only available when making a POST request to begin batch processing of a file with the REST API. The language-specific client libraries do not currently return these new response headers.
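Since the client libraries don't surface these headers, a raw HTTP call is needed to read them. The sketch below shows one way to pull the two values out of a response header map; the helper name and the sample header values are illustrative, not part of the API.

```python
# Hedged sketch: extracting the batch enqueued-token headers from a REST
# response's header map. HTTP header names are case-insensitive, so the
# lookup normalizes keys to lowercase first.

def read_enqueued_token_headers(headers: dict) -> dict:
    """Return the two batch-capacity headers as integers (0 if absent)."""
    lowered = {k.lower(): v for k, v in headers.items()}
    return {
        "enqueued": int(lowered.get("deployment-enqueued-tokens", 0)),
        "maximum": int(lowered.get("deployment-maximum-enqueued-tokens", 0)),
    }

# Sample values as they might appear on a POST response (illustrative only):
sample = {
    "Deployment-Enqueued-Tokens": "1250",
    "Deployment-Maximum-Enqueued-Tokens": "5000000",
}
print(read_enqueued_token_headers(sample))
# {'enqueued': 1250, 'maximum': 5000000}
```

In practice you would pass `response.headers` from whatever HTTP client made the POST request; most clients (e.g. `requests` in Python) already expose headers case-insensitively, but normalizing explicitly keeps the helper client-agnostic.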
### What happens if the API doesn't complete my request within the 24-hour time frame?