openapi.yaml: 29 additions & 17 deletions
```diff
@@ -2,7 +2,7 @@ openapi: 3.0.0
 info:
   title: OpenAI API
   description: APIs for sampling from and fine-tuning language models
-  version: '1.0.2'
+  version: '1.0.3'
 servers:
   - url: https://api.openai.com/v1
 tags:
```
```diff
@@ -1340,12 +1340,10 @@ components:
               type: string
               default: ''
               example: "This is a test."
-              nullable: false
             - type: array
               minItems: 1
               items:
                 type: integer
-              nullable: false
               example: "[1212, 318, 257, 1332, 13]"
             - type: array
               minItems: 1
```
```diff
@@ -1354,7 +1352,6 @@ components:
                 minItems: 1
                 items:
                   type: integer
-              nullable: false
               example: "[[1212, 318, 257, 1332, 13]]"
         max_tokens:
           type: integer
```
```diff
@@ -1438,7 +1435,6 @@ components:
               items:
                 type: string
               example: '["\n"]'
-              nullable: false
         presence_penalty:
           type: number
           default: 0
```
```diff
@@ -1482,6 +1478,11 @@ components:
             Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
 
             As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
+        user: &end_user_param_configuration
+          type: string
+          example: user-1234
+          description: |
+            A unique identifier representing your end-user, which will help OpenAI to monitor and detect abuse.
 
     CreateCompletionResponse:
       type: object
```
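The documented `logit_bias` behavior and the new shared `user` parameter combine naturally in a single completions request body. A minimal client-side sketch of building such a payload (the model name is an illustrative assumption; the field names, the token ID 50256, and the example values come from the spec text above):

```python
# Request body combining the documented logit_bias behavior (a bias of
# -100 effectively bans a token; 50256 is <|endoftext|>) with the new
# optional `user` field added in this diff.
payload = {
    "model": "text-davinci-002",    # illustrative model name (an assumption)
    "prompt": "This is a test.",    # example value from the spec
    "logit_bias": {"50256": -100},  # ban the <|endoftext|> token
    "user": "user-1234",            # example from end_user_param_configuration
}

# The spec allows bias values from -100 to 100; sanity-check locally.
assert all(-100 <= v <= 100 for v in payload["logit_bias"].values())
```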
```diff
@@ -1538,6 +1539,11 @@ components:
     CreateSearchRequest:
       type: object
       properties:
+        query:
+          description: Query to search against the documents.
+          type: string
+          example: "the president"
+          minLength: 1
         documents:
           description: |
             Up to 200 documents to search over, provided as a list of strings.
```
```diff
@@ -1559,12 +1565,6 @@ components:
             You should specify either `documents` or a `file`, but not both.
           type: string
           nullable: true
-        query:
-          description: Query to search against the documents.
-          type: string
-          nullable: false
-          example: "the president"
-          minLength: 1
         max_rerank:
           description: |
             The maximum number of documents to be re-ranked and returned by search.
```
```diff
@@ -1582,6 +1582,9 @@ components:
           type: boolean
           default: false
           nullable: true
+        user: *end_user_param_configuration
+      required:
+        - query
 
     CreateSearchResponse:
       type: object
```
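With `query` moved into a `required` list, a search request without it is now invalid. A small client-side validator sketching the constraints stated in the schema (`minLength: 1`, plus the "either `documents` or `file`, but not both" guidance from the description; the document values are illustrative):

```python
def validate_search_request(body: dict) -> None:
    """Mirror the CreateSearchRequest constraints from this diff locally."""
    query = body.get("query")
    if not isinstance(query, str) or len(query) < 1:  # required, minLength: 1
        raise ValueError("`query` is required and must be a non-empty string")
    if "documents" in body and "file" in body:  # "either ... but not both"
        raise ValueError("specify either `documents` or `file`, not both")

# A well-formed request passes silently.
validate_search_request({
    "query": "the president",             # example value from the spec
    "documents": ["White House", "Zoo"],  # illustrative document list
    "user": "user-1234",
})
```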
```diff
@@ -1734,14 +1737,12 @@ components:
             - type: string
               default: <|endoftext|>
               example: "\n"
-              nullable: true
             - type: array
               minItems: 1
               maxItems: 4
               items:
                 type: string
               example: '["\n"]'
-              nullable: false
           nullable: true
         n:
           description: How many answers to generate for each question.
```
```diff
@@ -1763,6 +1764,12 @@ components:
           items: {}
           nullable: true
          default: []
+        user: *end_user_param_configuration
+      required:
+        - model
+        - question
+        - examples
+        - examples_context
 
     CreateAnswerResponse:
       type: object
```
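The answers schema now declares four required fields. A quick way to check a payload before sending it, using only the field names from the diff (all field values below are illustrative assumptions):

```python
REQUIRED_ANSWER_FIELDS = ("model", "question", "examples", "examples_context")

def missing_answer_fields(body: dict) -> list:
    """Return the required CreateAnswerRequest fields absent from `body`."""
    return [field for field in REQUIRED_ANSWER_FIELDS if field not in body]

body = {
    "model": "curie",                                    # illustrative
    "question": "What color is the sky?",                # illustrative
    "examples": [["How tall is Everest?", "8848 m"]],    # illustrative
    "examples_context": "Mount Everest is 8848 m tall.", # illustrative
    "user": "user-1234",  # optional shared parameter added in this diff
}
assert missing_answer_fields(body) == []
```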
```diff
@@ -1857,6 +1864,10 @@ components:
         return_prompt: *return_prompt_configuration
         return_metadata: *return_metadata_configuration
         expand: *expand_configuration
+        user: *end_user_param_configuration
+      required:
+        - model
+        - query
 
     CreateClassificationResponse:
       type: object
```
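The classification schema follows the same pattern: the aliased `user` parameter stays optional while `model` and `query` become required. A minimal payload sketch with the same kind of local check (the field values are illustrative assumptions):

```python
# CreateClassificationRequest now requires `model` and `query`.
REQUIRED_CLASSIFICATION_FIELDS = ("model", "query")

classification_body = {
    "model": "curie",                     # illustrative model name
    "query": "Is this review positive?",  # illustrative query
    "user": "user-1234",                  # optional, via *end_user_param_configuration
}
missing = [f for f in REQUIRED_CLASSIFICATION_FIELDS
           if f not in classification_body]
assert missing == []
```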
```diff
@@ -1898,7 +1909,6 @@ components:
 
           See the [fine-tuning guide](/docs/guides/fine-tuning/creating-training-data) for more details.
```