Commit f990f1d

1 parent 440c171 commit f990f1d

1 file changed: +71 -20 lines changed

openapi.yaml

Lines changed: 71 additions & 20 deletions
@@ -179,7 +179,9 @@ paths:
       {"type": "text", "text": "What's in this image?"},
       {
         "type": "image_url",
-        "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
+        "image_url": {
+          "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
+        }
       },
     ],
   }
@@ -203,9 +205,10 @@ paths:
       { type: "text", text: "What's in this image?" },
       {
         type: "image_url",
-        image_url:
-          "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
-      },
+        image_url: {
+          "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
+        },
+      }
     ],
   },
 ],
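
For context, a minimal Python sketch (not part of this commit) of the updated request shape, where the `image_url` content part is an object carrying a `url` field rather than a bare string. It assumes the official `openai` Python SDK and an `OPENAI_API_KEY` in the environment.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    # Previously a bare URL string; now an object with a "url" key.
                    "image_url": {
                        "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
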
@@ -2768,7 +2771,7 @@ paths:
       response: &moderation_example |
         {
           "id": "modr-XXXXX",
-          "model": "text-moderation-005",
+          "model": "text-moderation-007",
           "results": [
             {
               "flagged": true,
@@ -7857,7 +7860,9 @@ paths:
           -H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
           -H "Content-Type: application/json" \
           -d '{
-              "name": "Project ABC"
+              "name": "Project ABC",
+              "app_use_case": "Your project use case here",
+              "business_website": "https://example.com"
           }'
       response:
         content: |
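
A hedged Python equivalent of the curl example above, using the `requests` library; the endpoint path and the `OPENAI_ADMIN_KEY` environment variable mirror the spec's example, and the body values are placeholders.

import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/organization/projects",
    headers={
        "Authorization": f"Bearer {os.environ['OPENAI_ADMIN_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "name": "Project ABC",
        "app_use_case": "Your project use case here",  # new optional field
        "business_website": "https://example.com",     # new optional field
    },
)
print(resp.json())
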
@@ -7867,7 +7872,9 @@ paths:
           "name": "Project ABC",
           "created_at": 1711471533,
           "archived_at": null,
-          "status": "active"
+          "status": "active",
+          "app_use_case": "Your project use case here",
+          "business_website": "https://example.com"
         }

   /organization/projects/{project_id}:
@@ -7948,7 +7955,9 @@ paths:
           -H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
           -H "Content-Type: application/json" \
           -d '{
-              "name": "Project DEF"
+              "name": "Project DEF",
+              "app_use_case": "Your project use case here",
+              "business_website": "https://example.com"
           }'

   /organization/projects/{project_id}/archive:
@@ -8517,7 +8526,7 @@ paths:
           description: *pagination_after_param_description
           required: false
           schema:
-            type: string
+            type: string
       responses:
         "200":
           description: Project API keys listed successfully.
@@ -9626,7 +9635,12 @@ components:
           - type: string
             enum:
               [
+                "o1-preview",
+                "o1-preview-2024-09-12",
+                "o1-mini",
+                "o1-mini-2024-09-12",
                 "gpt-4o",
+                "gpt-4o-2024-08-06",
                 "gpt-4o-2024-05-13",
                 "gpt-4o-2024-08-06",
                 "chatgpt-4o-latest",
@@ -9684,11 +9698,18 @@ components:
           nullable: true
         max_tokens:
           description: |
-            The maximum number of [tokens](/tokenizer) that can be generated in the chat completion.
+            The maximum number of [tokens](/tokenizer) that can be generated in the chat completion. This value can be used to control [costs](https://openai.com/api/pricing/) for text generated via API.

-            The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
+            This value is now deprecated in favor of `max_completion_tokens`, and is not compatible with [o1 series models](/docs/guides/reasoning).
+          type: integer
+          nullable: true
+          deprecated: true
+        max_completion_tokens:
+          description: |
+            An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and [reasoning tokens](/docs/guides/reasoning).
           type: integer
           nullable: true
+
         n:
           type: integer
           minimum: 1
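
A minimal sketch of the new parameter in use: `max_completion_tokens` caps visible output plus reasoning tokens, while `max_tokens` is now marked deprecated. It assumes a recent `openai` Python SDK that exposes the parameter.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "Summarize the causes of the French Revolution."}],
    # Upper bound on generated tokens, including hidden reasoning tokens;
    # use this instead of the deprecated max_tokens with o1-series models.
    max_completion_tokens=2000,
)
print(response.choices[0].message.content)
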
@@ -9708,9 +9729,9 @@ components:
         description: |
           An object specifying the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4o mini](/docs/models/gpt-4o-mini), [GPT-4 Turbo](/docs/models/gpt-4-and-gpt-4-turbo) and all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.

-          Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).
+          Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).

-          Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
+          Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.

           **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
         oneOf:
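
As an illustration of the `{ "type": "json_schema", "json_schema": {...} }` setting referenced above, a hedged sketch of a Structured Outputs request; the `name`/`strict`/`schema` keys follow the Structured Outputs guide rather than this diff, and the schema itself is a placeholder.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Extract the event: Alice and Bob meet on Friday."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "event",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "day": {"type": "string"},
                },
                "required": ["title", "day"],
                "additionalProperties": False,
            },
        },
    },
)
print(response.choices[0].message.content)  # JSON matching the supplied schema
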
@@ -9732,7 +9753,8 @@ components:
         service_tier:
           description: |
             Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:
-              - If set to 'auto', the system will utilize scale tier credits until they are exhausted.
+              - If set to 'auto', and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.
+              - If set to 'auto', and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarentee.
               - If set to 'default', the request will be processed using the default service tier with a lower uptime SLA and no latency guarentee.
               - When not set, the default behavior is 'auto'.
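
A small sketch of the parameter described above; whether scale tier credits are actually used depends on the Project's Scale tier enablement, and the response reports the tier that served the request.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    service_tier="auto",  # falls back to the default tier if Scale tier is not enabled
)
print(response.service_tier)  # tier that actually processed the request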

@@ -10621,12 +10643,12 @@ components:
           default: auto
         suffix:
           description: |
-            A string of up to 18 characters that will be added to your fine-tuned model name.
+            A string of up to 64 characters that will be added to your fine-tuned model name.

            For example, a `suffix` of "custom-model-name" would produce a model name like `ft:gpt-4o-mini:openai:custom-model-name:7p4lURel`.
           type: string
           minLength: 1
-          maxLength: 40
+          maxLength: 64
           default: null
           nullable: true
         validation_file:
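
A hedged sketch of creating a fine-tuning job with a `suffix`, which this change allows to be up to 64 characters; the training file ID and model name are placeholders.

from openai import OpenAI

client = OpenAI()

job = client.fine_tuning.jobs.create(
    model="gpt-4o-mini-2024-07-18",
    training_file="file-abc123",  # placeholder ID of an uploaded training file
    suffix="custom-model-name",   # now up to 64 characters
)
print(job.id)
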
@@ -11730,6 +11752,13 @@ components:
         total_tokens:
           type: integer
           description: Total number of tokens used in the request (prompt + completion).
+        completion_tokens_details:
+          type: object
+          description: Breakdown of tokens used in a completion.
+          properties:
+            reasoning_tokens:
+              type: integer
+              description: Tokens generated by the model for reasoning.
       required:
         - prompt_tokens
         - completion_tokens
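
A sketch of reading the new usage breakdown; reasoning models populate `completion_tokens_details.reasoning_tokens`, and the `getattr` guard hedges against SDK versions that predate the field.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "How many primes are below 50?"}],
)
usage = response.usage
details = getattr(usage, "completion_tokens_details", None)
if details is not None:
    print("reasoning tokens:", details.reasoning_tokens)
print("total tokens:", usage.total_tokens)
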
@@ -11777,9 +11806,9 @@ components:
         description: |
           Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.

-          Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which guarantees the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).
+          Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).

-          Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
+          Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.

           **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
         oneOf:
@@ -12269,7 +12298,7 @@ components:
       title: File search tool call ranking options
       type: object
      description: |
-        The ranking options for the file search.
+        The ranking options for the file search. If not specified, the file search tool will use the `auto` ranker and a score_threshold of 0.

         See the [file search tool documentation](/docs/assistants/tools/file-search/customizing-file-search-settings) for more information.
       properties:
@@ -12282,6 +12311,8 @@ components:
              description: The score threshold for the file search. All values must be a floating point number between 0 and 1.
              minimum: 0
              maximum: 1
+          required:
+            - score_threshold

     AssistantToolsFileSearchTypeOnly:
       type: object
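
A hedged sketch of supplying ranking options on the Assistants `file_search` tool; with this change, `score_threshold` must be included whenever `ranking_options` is set. The tool payload shape follows the assistants file search documentation rather than this diff.

from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    model="gpt-4o",
    tools=[
        {
            "type": "file_search",
            "file_search": {
                "ranking_options": {
                    "ranker": "auto",        # the default ranker when ranking_options is omitted
                    "score_threshold": 0.5,  # required here; a float between 0 and 1
                }
            },
        }
    ],
)
print(assistant.id)
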
@@ -16102,6 +16133,12 @@ components:
           type: string
           enum: [active, archived]
           description: "`active` or `archived`"
+        app_use_case:
+          type: string
+          description: A description of your business, project, or use case. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
+        business_website:
+          type: string
+          description: Your business URL, or if you don't have one yet, a URL to your LinkedIn or other social media. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
       required:
         - id
         - object
@@ -16117,7 +16154,9 @@ components:
           "name": "Project example",
           "created_at": 1711471533,
           "archived_at": null,
-          "status": "active"
+          "status": "active",
+          "app_use_case": "Your project use case here",
+          "business_website": "https://example.com"
         }

     ProjectListResponse:
@@ -16149,6 +16188,12 @@ components:
         name:
          type: string
          description: The friendly name of the project, this name appears in reports.
+        app_use_case:
+          type: string
+          description: A description of your business, project, or use case. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
+        business_website:
+          type: string
+          description: Your business URL, or if you don't have one yet, a URL to your LinkedIn or other social media. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
       required:
         - name

@@ -16158,6 +16203,12 @@ components:
         name:
          type: string
          description: The updated name of the project, this name appears in reports.
+        app_use_case:
+          type: string
+          description: A description of your business, project, or use case. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
+        business_website:
+          type: string
+          description: Your business URL, or if you don't have one yet, a URL to your LinkedIn or other social media. [Why we need this information](https://help.openai.com/en/articles/9824607-api-platform-verifications).
       required:
         - name
