@@ -66,7 +66,7 @@ def create(

Args:
messages: A list of messages comprising the conversation so far.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
+ [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).

model: ID of the model to use. See the
[model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility)
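The `messages` parameter documented in this hunk is a list of role/content dicts. A minimal sketch of assembling one with plain dicts (no SDK needed; `build_messages` is a hypothetical helper, not part of the library):

```python
def build_messages(system_prompt, user_turns, assistant_turns=()):
    """Interleave prior user/assistant turns after a system prompt,
    producing the conversation-so-far list the docstring describes."""
    messages = [{"role": "system", "content": system_prompt}]
    for i, user in enumerate(user_turns):
        messages.append({"role": "user", "content": user})
        # Pair each user turn with the assistant reply that followed it, if any.
        if i < len(assistant_turns):
            messages.append({"role": "assistant", "content": assistant_turns[i]})
    return messages

msgs = build_messages("You are a helpful assistant.", ["Hello!"])
```

The linked cookbook page covers the same structure in more detail.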
@@ -78,12 +78,12 @@ def create(

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)

- function_call: Controls how the model responds to function calls. `none` means the model does
- not call a function, and responds to the end-user. `auto` means the model can
- pick between an end-user or calling a function. Specifying a particular function
- via `{"name": "my_function"}` forces the model to call that function. `none` is
- the default when no functions are present. `auto` is the default if functions
- are present.
+ function_call: Controls how the model calls functions. "none" means the model will not call a
+ function and instead generates a message. "auto" means the model can pick
+ between generating a message or calling a function. Specifying a particular
+ function via `{"name": "my_function"}` forces the model to call that function.
+ "none" is the default when no functions are present. "auto" is the default if
+ functions are present.

functions: A list of functions the model may generate JSON inputs for.
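The three accepted forms of `function_call` ("none", "auto", or `{"name": ...}`) and the documented defaults can be illustrated with a small sketch; `resolve_function_call` is a hypothetical helper written for illustration, not SDK code:

```python
def resolve_function_call(function_call, functions):
    """Mirror the documented defaults: "none" when no functions are
    passed, "auto" when they are; pass through explicit values."""
    if function_call is None:
        return "auto" if functions else "none"
    if function_call in ("none", "auto"):
        return function_call
    if isinstance(function_call, dict) and "name" in function_call:
        return function_call  # forces the model to call the named function
    raise ValueError(f"invalid function_call: {function_call!r}")

default_without = resolve_function_call(None, [])
default_with = resolve_function_call(None, [{"name": "my_function"}])
forced = resolve_function_call({"name": "my_function"}, [{"name": "my_function"}])
```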
@@ -100,7 +100,7 @@ def create(

The total length of input tokens and generated tokens is limited by the model's
context length.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+ [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
for counting tokens.

n: How many chat completion choices to generate for each input message.
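Exact token counting is what the linked tiktoken cookbook page covers; as a rough stand-in when tiktoken is unavailable, a common heuristic is about four characters per token for English text. This estimator is only that heuristic, not the tokenizer:

```python
def rough_token_estimate(text: str) -> int:
    """Crude ~4-characters-per-token heuristic for budgeting against a
    model's context length; use tiktoken for exact counts."""
    return max(1, len(text) // 4)

estimate = rough_token_estimate("a" * 40)
```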
@@ -118,7 +118,7 @@ def create(
[server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
as they become available, with the stream terminated by a `data: [DONE]`
message.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+ [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).

temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will
make the output more random, while lower values like 0.2 will make it more
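The streaming behavior described above, server-sent events terminated by a `data: [DONE]` message, can be sketched with a toy parser over a raw SSE body (the SDK handles this for you; this is only an illustration of the wire format):

```python
def iter_sse_data(raw_stream: str):
    """Yield the payload of each 'data: ...' line from an SSE body,
    stopping when the 'data: [DONE]' terminator is reached."""
    for line in raw_stream.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank separator lines and other fields
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        yield payload

sample = 'data: {"id": 1}\n\ndata: {"id": 2}\n\ndata: [DONE]\n'
chunks = list(iter_sse_data(sample))
```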
@@ -191,7 +191,7 @@ def create(

Args:
messages: A list of messages comprising the conversation so far.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
+ [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).

model: ID of the model to use. See the
[model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility)
@@ -202,20 +202,20 @@ def create(
[server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
as they become available, with the stream terminated by a `data: [DONE]`
message.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+ [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).

frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)

- function_call: Controls how the model responds to function calls. `none` means the model does
- not call a function, and responds to the end-user. `auto` means the model can
- pick between an end-user or calling a function. Specifying a particular function
- via `{"name": "my_function"}` forces the model to call that function. `none` is
- the default when no functions are present. `auto` is the default if functions
- are present.
+ function_call: Controls how the model calls functions. "none" means the model will not call a
+ function and instead generates a message. "auto" means the model can pick
+ between generating a message or calling a function. Specifying a particular
+ function via `{"name": "my_function"}` forces the model to call that function.
+ "none" is the default when no functions are present. "auto" is the default if
+ functions are present.

functions: A list of functions the model may generate JSON inputs for.
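The `frequency_penalty` description above (positive values penalize tokens in proportion to how often they have already appeared) reduces, conceptually, to subtracting `penalty * count` from each token's logit. A toy illustration of that arithmetic, not the model's actual sampling code:

```python
def apply_frequency_penalty(logits, counts, penalty):
    """Toy sketch: logit[token] -= penalty * counts[token], so a positive
    penalty makes frequently repeated tokens less likely to be sampled."""
    return {
        tok: logit - penalty * counts.get(tok, 0)
        for tok, logit in logits.items()
    }

adjusted = apply_frequency_penalty(
    {"the": 2.0, "cat": 1.0},  # raw logits
    {"the": 3},                # "the" has already appeared 3 times
    0.5,                       # frequency_penalty
)
```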
@@ -232,7 +232,7 @@ def create(

The total length of input tokens and generated tokens is limited by the model's
context length.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+ [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
for counting tokens.

n: How many chat completion choices to generate for each input message.
@@ -387,7 +387,7 @@ async def create(

Args:
messages: A list of messages comprising the conversation so far.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
+ [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).

model: ID of the model to use. See the
[model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility)
@@ -399,12 +399,12 @@ async def create(

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)

- function_call: Controls how the model responds to function calls. `none` means the model does
- not call a function, and responds to the end-user. `auto` means the model can
- pick between an end-user or calling a function. Specifying a particular function
- via `{"name": "my_function"}` forces the model to call that function. `none` is
- the default when no functions are present. `auto` is the default if functions
- are present.
+ function_call: Controls how the model calls functions. "none" means the model will not call a
+ function and instead generates a message. "auto" means the model can pick
+ between generating a message or calling a function. Specifying a particular
+ function via `{"name": "my_function"}` forces the model to call that function.
+ "none" is the default when no functions are present. "auto" is the default if
+ functions are present.

functions: A list of functions the model may generate JSON inputs for.
@@ -421,7 +421,7 @@ async def create(

The total length of input tokens and generated tokens is limited by the model's
context length.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+ [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
for counting tokens.

n: How many chat completion choices to generate for each input message.
@@ -439,7 +439,7 @@ async def create(
[server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
as they become available, with the stream terminated by a `data: [DONE]`
message.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+ [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).

temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will
make the output more random, while lower values like 0.2 will make it more
@@ -512,7 +512,7 @@ async def create(

Args:
messages: A list of messages comprising the conversation so far.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
+ [Example Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).

model: ID of the model to use. See the
[model endpoint compatibility](https://platform.openai.com/docs/models/model-endpoint-compatibility)
@@ -523,20 +523,20 @@ async def create(
[server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format)
as they become available, with the stream terminated by a `data: [DONE]`
message.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb).
+ [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).

frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their
existing frequency in the text so far, decreasing the model's likelihood to
repeat the same line verbatim.

[See more information about frequency and presence penalties.](https://platform.openai.com/docs/guides/gpt/parameter-details)

- function_call: Controls how the model responds to function calls. `none` means the model does
- not call a function, and responds to the end-user. `auto` means the model can
- pick between an end-user or calling a function. Specifying a particular function
- via `{"name": "my_function"}` forces the model to call that function. `none` is
- the default when no functions are present. `auto` is the default if functions
- are present.
+ function_call: Controls how the model calls functions. "none" means the model will not call a
+ function and instead generates a message. "auto" means the model can pick
+ between generating a message or calling a function. Specifying a particular
+ function via `{"name": "my_function"}` forces the model to call that function.
+ "none" is the default when no functions are present. "auto" is the default if
+ functions are present.

functions: A list of functions the model may generate JSON inputs for.
@@ -553,7 +553,7 @@ async def create(

The total length of input tokens and generated tokens is limited by the model's
context length.
- [Example Python code](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)
+ [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken)
for counting tokens.

n: How many chat completion choices to generate for each input message.