 description: Project API keys listed successfully.
@@ -9626,7 +9635,12 @@ components:
             - type: string
               enum:
                 [
+                  "o1-preview",
+                  "o1-preview-2024-09-12",
+                  "o1-mini",
+                  "o1-mini-2024-09-12",
                   "gpt-4o",
+                  "gpt-4o-2024-08-06",
                   "gpt-4o-2024-05-13",
                   "gpt-4o-2024-08-06",
                   "chatgpt-4o-latest",
@@ -9684,11 +9698,18 @@ components:
           nullable: true
         max_tokens:
           description: |
-            The maximum number of [tokens](/tokenizer) that can be generated in the chat completion.
+            The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. This value can be used to control [costs](https://openai.com/api/pricing/) for text generated via API.
 
-            The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
+            This value is now deprecated in favor of `max_completion_tokens`, and is not compatible with [o1 series models](/docs/guides/reasoning).
+          type: integer
+          nullable: true
+          deprecated: true
+        max_completion_tokens:
+          description: |
+            An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and [reasoning tokens](/docs/guides/reasoning).
           type: integer
           nullable: true
+
         n:
           type: integer
           minimum: 1
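To illustrate the hunk above, here is a minimal client-side sketch of composing a request body against the updated schema. The `build_completion_request` helper is hypothetical (not part of any SDK); only the field names come from the spec.

```python
# Illustrative only: composing a chat completion request body.
def build_completion_request(model: str, messages: list, token_cap: int) -> dict:
    payload = {"model": model, "messages": messages}
    # Prefer max_completion_tokens: max_tokens is now deprecated and is not
    # compatible with o1 series models, while max_completion_tokens also
    # covers reasoning tokens.
    payload["max_completion_tokens"] = token_cap
    return payload

req = build_completion_request(
    "o1-preview", [{"role": "user", "content": "Hello"}], 256
)
```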
@@ -9708,9 +9729,9 @@ components:
           description: |
             An object specifying the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4o mini](/docs/models/gpt-4o-mini), [GPT-4 Turbo](/docs/models/gpt-4-and-gpt-4-turbo) and all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
 
-            Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).
+            Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).
 
-            Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
+            Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.
 
             **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
           oneOf:
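A sketch of a JSON-mode request per the description above, including the system message the **Important** note requires. The `json_mode_request_is_safe` check is a hypothetical client-side guard, not API behavior.

```python
# JSON-mode request body: note the system message explicitly asking for JSON.
payload = {
    "model": "gpt-4o",
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system", "content": "You are a helpful assistant. Respond in JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
}

def json_mode_request_is_safe(body: dict) -> bool:
    # Without an instruction to produce JSON, JSON mode risks a degenerate,
    # whitespace-only generation until the token limit is hit.
    if body.get("response_format", {}).get("type") not in ("json_object", "json_schema"):
        return True  # the constraint only applies in JSON mode
    return any("JSON" in m.get("content", "") for m in body.get("messages", []))
```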
@@ -9732,7 +9753,8 @@ components:
         service_tier:
           description: |
             Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:
-              - If set to 'auto', the system will utilize scale tier credits until they are exhausted.
+              - If set to 'auto', and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.
+              - If set to 'auto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
               - If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
               - When not set, the default behavior is 'auto'.
 
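The fallback rules above can be mirrored client-side. This is a hedged sketch: the function and the `"scale"`/`"default"` return labels are illustrative only; the API itself reports the tier actually used in the response.

```python
from typing import Optional

def effective_service_tier(requested: Optional[str], scale_tier_enabled: bool) -> str:
    # Hypothetical mirror of the documented service_tier behavior.
    tier = requested or "auto"  # when not set, the default behavior is 'auto'
    if tier == "auto" and scale_tier_enabled:
        return "scale"    # scale tier credits are used until exhausted
    return "default"      # default tier: lower uptime SLA, no latency guarantee
```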
@@ -10621,12 +10643,12 @@ components:
           default: auto
         suffix:
           description: |
-            A string of up to 18 characters that will be added to your fine-tuned model name.
+            A string of up to 64 characters that will be added to your fine-tuned model name.
 
             For example, a `suffix` of "custom-model-name" would produce a model name like `ft:gpt-4o-mini:openai:custom-model-name:7p4lURel`.
           type: string
           minLength: 1
-          maxLength: 40
+          maxLength: 64
           default: null
           nullable: true
         validation_file:
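A sketch of client-side validation for the relaxed limit; `MAX_SUFFIX_LEN` and `validate_suffix` are illustrative names, not part of the API surface.

```python
MAX_SUFFIX_LEN = 64  # raised from 40 by this change

def validate_suffix(suffix: str) -> str:
    # Enforces the schema's minLength: 1 / maxLength: 64 bounds.
    if not 1 <= len(suffix) <= MAX_SUFFIX_LEN:
        raise ValueError(f"suffix must be 1-{MAX_SUFFIX_LEN} characters")
    return suffix

# Mirrors the example in the description above.
name = f"ft:gpt-4o-mini:openai:{validate_suffix('custom-model-name')}:7p4lURel"
```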
@@ -11730,6 +11752,13 @@ components:
         total_tokens:
           type: integer
           description: Total number of tokens used in the request (prompt + completion).
+        completion_tokens_details:
+          type: object
+          description: Breakdown of tokens used in a completion.
+          properties:
+            reasoning_tokens:
+              type: integer
+              description: Tokens generated by the model for reasoning.
       required:
         - prompt_tokens
         - completion_tokens
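To show how the new usage breakdown might be consumed: the `usage` dict below mimics a response shape with the fields added above, and `visible_output_tokens` is a hypothetical helper.

```python
usage = {
    "prompt_tokens": 11,
    "completion_tokens": 50,
    "total_tokens": 61,
    "completion_tokens_details": {"reasoning_tokens": 30},
}

def visible_output_tokens(usage: dict) -> int:
    # completion_tokens includes reasoning tokens; subtract them to get
    # the tokens actually visible in the message content.
    details = usage.get("completion_tokens_details") or {}
    return usage["completion_tokens"] - details.get("reasoning_tokens", 0)
```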
@@ -11777,9 +11806,9 @@ components:
           description: |
             Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
 
-            Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).
+            Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).
 
-            Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
+            Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.
 
             **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
           oneOf:
@@ -12269,7 +12298,7 @@ components:
       title: File search tool call ranking options
       type: object
       description: |
-        The ranking options for the file search.
+        The ranking options for the file search. If not specified, the file search tool will use the `auto` ranker and a `score_threshold` of 0.
 
         See the [file search tool documentation](/docs/assistants/tools/file-search/customizing-file-search-settings) for more information.
       properties:
@@ -12282,6 +12311,8 @@ components:
           description: The score threshold for the file search. All values must be a floating point number between 0 and 1.
           minimum: 0
           maximum: 1
+      required:
+        - score_threshold
 
     AssistantToolsFileSearchTypeOnly:
       type: object
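The two ranking-options hunks above can be summarized in a sketch. `resolve_ranking_options` is a hypothetical client-side helper applying the documented defaults and the new `required` constraint.

```python
from typing import Optional

def resolve_ranking_options(opts: Optional[dict]) -> dict:
    if opts is None:
        # Documented default: `auto` ranker with a score_threshold of 0.
        return {"ranker": "auto", "score_threshold": 0.0}
    if "score_threshold" not in opts:
        # Per the updated schema, score_threshold is required when
        # ranking_options is supplied explicitly.
        raise ValueError("score_threshold is required")
    if not 0 <= opts["score_threshold"] <= 1:
        raise ValueError("score_threshold must be between 0 and 1")
    return {"ranker": opts.get("ranker", "auto"),
            "score_threshold": opts["score_threshold"]}
```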
@@ -16102,6 +16133,12 @@ components:
           type: string
           enum: [active, archived]
           description: "`active` or `archived`"
+        app_use_case:
+          type: string
+          description: A description of your business, project, or use case. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
+        business_website:
+          type: string
+          description: Your business URL, or if you don't have one yet, a URL to your LinkedIn or other social media. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
       required:
         - id
         - object
@@ -16117,7 +16154,9 @@ components:
           "name": "Project example",
           "created_at": 1711471533,
           "archived_at": null,
-          "status": "active"
+          "status": "active",
+          "app_use_case": "Your project use case here",
+          "business_website": "https://example.com"
         }
 
     ProjectListResponse:
@@ -16149,6 +16188,12 @@ components:
         name:
           type: string
           description: The friendly name of the project; this name appears in reports.
+        app_use_case:
+          type: string
+          description: A description of your business, project, or use case. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
+        business_website:
+          type: string
+          description: Your business URL, or if you don't have one yet, a URL to your LinkedIn or other social media. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
       required:
         - name
 
@@ -16158,6 +16203,12 @@ components:
         name:
           type: string
           description: The updated name of the project; this name appears in reports.
+        app_use_case:
+          type: string
+          description: A description of your business, project, or use case. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
+        business_website:
+          type: string
+          description: Your business URL, or if you don't have one yet, a URL to your LinkedIn or other social media. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).