Commit 645766c

stainless-bot authored and RobertCraigie committed
feat(api): remove content_filter stop_reason and update documentation
1 parent 7fefa80 commit 645766c

File tree: 9 files changed, +90 −86 lines


src/openai/resources/chat/completions.py

Lines changed: 36 additions & 36 deletions
@@ -66,7 +66,7 @@ def create(
 
         Args:
           messages: A list of messages comprising the conversation so far.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
 
           model: ID of the model to use. See the
             [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility)
@@ -78,12 +78,12 @@ def create(
 
               [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          function_call: Controls how the model responds to function calls. `none` means the model does
-              not call a function, and responds to the end-user. `auto` means the model can
-              pick between an end-user or calling a function. Specifying a particular function
-              via `{"name": "my_function"}` forces the model to call that function. `none` is
-              the default when no functions are present. `auto` is the default if functions
-              are present.
+          function_call: Controls how the model calls functions. "none" means the model will not call a
+              function and instead generates a message. "auto" means the model can pick
+              between generating a message or calling a function. Specifying a particular
+              function via `{"name": "my_function"}` forces the model to call that function.
+              "none" is the default when no functions are present. "auto" is the default if
+              functions are present.
 
           functions: A list of functions the model may generate JSON inputs for.
 
@@ -100,7 +100,7 @@ def create(
 
               The total length of input tokens and generated tokens is limited by the model's
               context length.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+              [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
               for counting tokens.
 
           n: How many chat completion choices to generate for each input message.
@@ -118,7 +118,7 @@ def create(
               [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
               as they become available, with the stream terminated by a `data: [DONE]`
               message.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
 
           temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will
               make the output more random, while lower values like 0.2 will make it more
@@ -191,7 +191,7 @@ def create(
 
         Args:
           messages: A list of messages comprising the conversation so far.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
 
           model: ID of the model to use. See the
             [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility)
@@ -202,20 +202,20 @@ def create(
               [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
               as they become available, with the stream terminated by a `data: [DONE]`
               message.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
 
           frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their
               existing frequency in the text so far, decreasing the model's likelihood to
               repeat the same line verbatim.
 
               [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          function_call: Controls how the model responds to function calls. `none` means the model does
-              not call a function, and responds to the end-user. `auto` means the model can
-              pick between an end-user or calling a function. Specifying a particular function
-              via `{"name": "my_function"}` forces the model to call that function. `none` is
-              the default when no functions are present. `auto` is the default if functions
-              are present.
+          function_call: Controls how the model calls functions. "none" means the model will not call a
+              function and instead generates a message. "auto" means the model can pick
+              between generating a message or calling a function. Specifying a particular
+              function via `{"name": "my_function"}` forces the model to call that function.
+              "none" is the default when no functions are present. "auto" is the default if
+              functions are present.
 
           functions: A list of functions the model may generate JSON inputs for.
 
@@ -232,7 +232,7 @@ def create(
 
               The total length of input tokens and generated tokens is limited by the model's
               context length.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+              [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
               for counting tokens.
 
           n: How many chat completion choices to generate for each input message.
@@ -387,7 +387,7 @@ async def create(
 
         Args:
           messages: A list of messages comprising the conversation so far.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
 
           model: ID of the model to use. See the
             [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility)
@@ -399,12 +399,12 @@ async def create(
 
               [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          function_call: Controls how the model responds to function calls. `none` means the model does
-              not call a function, and responds to the end-user. `auto` means the model can
-              pick between an end-user or calling a function. Specifying a particular function
-              via `{"name": "my_function"}` forces the model to call that function. `none` is
-              the default when no functions are present. `auto` is the default if functions
-              are present.
+          function_call: Controls how the model calls functions. "none" means the model will not call a
+              function and instead generates a message. "auto" means the model can pick
+              between generating a message or calling a function. Specifying a particular
+              function via `{"name": "my_function"}` forces the model to call that function.
+              "none" is the default when no functions are present. "auto" is the default if
+              functions are present.
 
           functions: A list of functions the model may generate JSON inputs for.
 
@@ -421,7 +421,7 @@ async def create(
 
               The total length of input tokens and generated tokens is limited by the model's
               context length.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+              [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
               for counting tokens.
 
           n: How many chat completion choices to generate for each input message.
@@ -439,7 +439,7 @@ async def create(
               [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
               as they become available, with the stream terminated by a `data: [DONE]`
               message.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
 
           temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will
               make the output more random, while lower values like 0.2 will make it more
@@ -512,7 +512,7 @@ async def create(
 
         Args:
           messages: A list of messages comprising the conversation so far.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
 
           model: ID of the model to use. See the
             [model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility)
@@ -523,20 +523,20 @@ async def create(
               [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
               as they become available, with the stream terminated by a `data: [DONE]`
               message.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
 
           frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their
               existing frequency in the text so far, decreasing the model's likelihood to
               repeat the same line verbatim.
 
               [See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)
 
-          function_call: Controls how the model responds to function calls. `none` means the model does
-              not call a function, and responds to the end-user. `auto` means the model can
-              pick between an end-user or calling a function. Specifying a particular function
-              via `{"name": "my_function"}` forces the model to call that function. `none` is
-              the default when no functions are present. `auto` is the default if functions
-              are present.
+          function_call: Controls how the model calls functions. "none" means the model will not call a
+              function and instead generates a message. "auto" means the model can pick
+              between generating a message or calling a function. Specifying a particular
+              function via `{"name": "my_function"}` forces the model to call that function.
+              "none" is the default when no functions are present. "auto" is the default if
+              functions are present.
 
           functions: A list of functions the model may generate JSON inputs for.
 
@@ -553,7 +553,7 @@ async def create(
 
               The total length of input tokens and generated tokens is limited by the model's
               context length.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+              [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
               for counting tokens.
 
           n: How many chat completion choices to generate for each input message.
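The reworded `function_call` docstring above can be sketched with a short example. Everything here is a hypothetical illustration: the weather function and its JSON schema are made up, and `default_function_call` is a local helper that merely mirrors the documented defaults, not SDK code.

```python
# Sketch of the documented function-calling semantics. The
# get_current_weather function and its schema are invented for illustration.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

def default_function_call(functions):
    # Mirror the documented defaults: "none" when no functions are
    # passed, "auto" when at least one is present.
    return "auto" if functions else "none"

# Shape of a request that forces one specific function instead of
# letting the model choose ("auto") or forbidding calls ("none").
request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "functions": functions,
    "function_call": {"name": "get_current_weather"},
}
```

Passing `request` as keyword arguments to the SDK's chat-completions `create` method would then exercise the behavior the docstring describes.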

src/openai/resources/completions.py

Lines changed: 8 additions & 8 deletions
@@ -117,7 +117,7 @@ def create(
 
               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+              [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
               for counting tokens.
 
           n: How many completions to generate for each prompt.
@@ -140,7 +140,7 @@ def create(
               [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
               as they become available, with the stream terminated by a `data: [DONE]`
               message.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
 
           suffix: The suffix that comes after a completion of inserted text.
 
@@ -233,7 +233,7 @@ def create(
               [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
               as they become available, with the stream terminated by a `data: [DONE]`
               message.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
 
           best_of: Generates `best_of` completions server-side and returns the "best" (the one with
               the highest log probability per token). Results cannot be streamed.
@@ -278,7 +278,7 @@ def create(
 
               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+              [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
               for counting tokens.
 
           n: How many completions to generate for each prompt.
@@ -499,7 +499,7 @@ async def create(
 
               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+              [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
               for counting tokens.
 
           n: How many completions to generate for each prompt.
@@ -522,7 +522,7 @@ async def create(
               [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
               as they become available, with the stream terminated by a `data: [DONE]`
               message.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
 
           suffix: The suffix that comes after a completion of inserted text.
 
@@ -615,7 +615,7 @@ async def create(
               [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
               as they become available, with the stream terminated by a `data: [DONE]`
               message.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+              [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
 
           best_of: Generates `best_of` completions server-side and returns the "best" (the one with
               the highest log probability per token). Results cannot be streamed.
@@ -660,7 +660,7 @@ async def create(
 
               The token count of your prompt plus `max_tokens` cannot exceed the model's
               context length.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+              [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
               for counting tokens.
 
           n: How many completions to generate for each prompt.
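The `data: [DONE]` terminator mentioned throughout the streaming docstrings can be sketched as plain SSE parsing. The `raw_events` list below is a stand-in for a real HTTP response body, and `iter_chunks` is a hypothetical helper, not part of the SDK, which normally handles this parsing for you.

```python
import json

# Stand-in for a real event stream: each event is a "data: ..." line
# carrying a JSON chunk, and the stream ends with "data: [DONE]".
raw_events = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]

def iter_chunks(lines):
    """Yield parsed JSON chunks, stopping at the [DONE] terminator."""
    for line in lines:
        payload = line.removeprefix("data: ")
        if payload == "[DONE]":  # terminator, not JSON: do not parse it
            return
        yield json.loads(payload)

# Reassemble the streamed deltas into the full message text.
text = "".join(
    chunk["choices"][0]["delta"].get("content", "")
    for chunk in iter_chunks(raw_events)
)
```

With the SDK, passing `stream=True` to `create` yields these chunks directly, so only the delta-joining step applies.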

src/openai/resources/embeddings.py

Lines changed: 2 additions & 2 deletions
@@ -36,7 +36,7 @@ def create(
               inputs in a single request, pass an array of strings or array of token arrays.
               Each input must not exceed the max input tokens for the model (8191 tokens for
               `text-embedding-ada-002`) and cannot be an empty string.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+              [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
               for counting tokens.
 
           model: ID of the model to use. You can use the
@@ -96,7 +96,7 @@ async def create(
               inputs in a single request, pass an array of strings or array of token arrays.
               Each input must not exceed the max input tokens for the model (8191 tokens for
               `text-embedding-ada-002`) and cannot be an empty string.
-              [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+              [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
               for counting tokens.
 
           model: ID of the model to use. You can use the
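The embeddings docstring's two constraints (non-empty input, at most 8191 tokens for `text-embedding-ada-002`) can be pre-checked client-side. This is a rough sketch: real code should count tokens with `tiktoken` as the linked cookbook recipe shows; the whitespace split used as the default counter here is only a dependency-free stand-in, and `validate_embedding_input` is a made-up helper, not SDK API.

```python
# Documented limit for text-embedding-ada-002.
MAX_INPUT_TOKENS = 8191

def validate_embedding_input(text, count_tokens=lambda s: len(s.split())):
    """Reject inputs the embeddings endpoint would refuse.

    count_tokens defaults to a crude whitespace count; swap in
    tiktoken's encode() length for an accurate check.
    """
    if not text:
        raise ValueError("input cannot be an empty string")
    if count_tokens(text) > MAX_INPUT_TOKENS:
        raise ValueError(f"input exceeds {MAX_INPUT_TOKENS} tokens")
    return text

validate_embedding_input("a short document to embed")
```

Validating before the request avoids a round trip that is guaranteed to fail with an API error.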
