## Working with the GPT-3.5-Turbo and GPT-4 models

The following code snippet shows the most basic way to use the GPT-3.5-Turbo and GPT-4 models with the Chat Completion API. If this is your first time using these models programmatically, we recommend starting with our [GPT-3.5-Turbo & GPT-4 Quickstart](../chatgpt-quickstart.md).

# [OpenAI Python 0.28.1](#tab/python)

```python
import os
import openai
openai.api_type = "azure"
openai.api_version = "2023-05-15"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") # Your Azure OpenAI resource's endpoint value.
openai.api_key = os.getenv("AZURE_OPENAI_KEY")

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo", # The deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 model.
    messages=[
        {"role": "system", "content": "Assistant is a large language model trained by OpenAI."},
        {"role": "user", "content": "Who were the founders of Microsoft?"}
    ]
)

print(response)
print(response['choices'][0]['message']['content'])
```

# [OpenAI Python 1.x](#tab/python-new)

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"), # Your Azure OpenAI resource's endpoint value.
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_version="2023-05-15"
)

response = client.chat.completions.create(
    model="gpt-35-turbo", # model = "deployment_name".
    messages=[
        {"role": "system", "content": "Assistant is a large language model trained by OpenAI."},
        {"role": "user", "content": "Who were the founders of Microsoft?"}
    ]
)

#print(response)
print(response.model_dump_json(indent=2))
print(response.choices[0].message.content)
```

```output
{
  "id": "chatcmpl-8GHoQAJ3zN2DJYqOFiVysrMQJfe1P",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Microsoft was founded by Bill Gates and Paul Allen. They established the company on April 4, 1975. Bill Gates served as the CEO of Microsoft until 2000 and later as Chairman and Chief Software Architect until his retirement in 2008, while Paul Allen left the company in 1983 but remained on the board of directors until 2000.",
        "role": "assistant",
        "function_call": null
      },
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  ],
  "created": 1698892410,
  "model": "gpt-35-turbo",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 73,
    "prompt_tokens": 29,
    "total_tokens": 102
  },
  "prompt_filter_results": [
    {
      "prompt_index": 0,
      "content_filter_results": {
        "hate": {
          "filtered": false,
          "severity": "safe"
        },
        "self_harm": {
          "filtered": false,
          "severity": "safe"
        },
        "sexual": {
          "filtered": false,
          "severity": "safe"
        },
        "violence": {
          "filtered": false,
          "severity": "safe"
        }
      }
    }
  ]
}
Microsoft was founded by Bill Gates and Paul Allen. They established the company on April 4, 1975. Bill Gates served as the CEO of Microsoft until 2000 and later as Chairman and Chief Software Architect until his retirement in 2008, while Paul Allen left the company in 1983 but remained on the board of directors until 2000.
```

---

> [!NOTE]
> The following parameters aren't available with the new GPT-3.5-Turbo and GPT-4 models: `logprobs`, `best_of`, and `echo`. If you set any of these parameters, you'll get an error.

The examples so far have shown you the basic mechanics of interacting with the Chat Completion API.

This means that every time a new question is asked, a running transcript of the conversation so far is sent along with the latest question. Since the model has no memory, you need to send an updated transcript with each new question or the model will lose context of the previous questions and answers.

# [OpenAI Python 0.28.1](#tab/python)

```python
import os
import openai
openai.api_type = "azure"
openai.api_version = "2023-05-15"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") # Your Azure OpenAI resource's endpoint value.
openai.api_key = os.getenv("AZURE_OPENAI_KEY")

conversation=[{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input()
    conversation.append({"role": "user", "content": user_input})

    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo", # The deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 model.
        messages=conversation
    )

    conversation.append({"role": "assistant", "content": response['choices'][0]['message']['content']})
    print("\n" + response['choices'][0]['message']['content'] + "\n")
```

When you run the code above, you will get a blank console window. Enter your first question in the window and then press Enter. Once the response is returned, you can repeat the process and keep asking questions.

## Managing conversations
It's your responsibility to ensure the prompt and completion fall within the token limit.

The following code sample shows a simple chat loop example with a technique for handling a 4,096 token count using OpenAI's tiktoken library.

The code uses tiktoken `0.5.1`. If you have an older version, run `pip install tiktoken --upgrade`.

# [OpenAI Python 0.28.1](#tab/python)

```python
import tiktoken
import openai
import os

openai.api_type = "azure"
openai.api_version = "2023-05-15"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") # Your Azure OpenAI resource's endpoint value.
openai.api_key = os.getenv("AZURE_OPENAI_KEY")

system_message = {"role": "system", "content": "You are a helpful assistant."}

def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613"):
    """Return the number of tokens used by a list of messages."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        encoding = tiktoken.get_encoding("cl100k_base")
    if "gpt-3.5-turbo" in model or "gpt-4" in model:
        tokens_per_message = 3
        tokens_per_name = 1
    else:
        raise NotImplementedError(
            f"""num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens."""
        )
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
    return num_tokens
```

In this example, once the token count is reached, the oldest messages in the conversation transcript are removed. `del` is used instead of `pop()` for efficiency, and we start at index 1 so as to always preserve the system message and only remove user and assistant messages. Over time, this method of managing the conversation can cause conversation quality to degrade, as the model gradually loses the context of the earlier portions of the conversation.

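The trimming step described above can be sketched in isolation. This is a minimal sketch under stated assumptions, not the article's actual chat loop: `count_tokens()` is a crude character-based stand-in for a real tokenizer such as tiktoken, and the helper names and limits are illustrative.

```python
# Sketch: drop the oldest user/assistant messages once the transcript
# plus the reserved response budget would exceed the token limit.

def count_tokens(messages):
    # Rough approximation (illustrative only): ~1 token per 4 characters,
    # plus a small per-message overhead.
    return sum(4 + len(m["content"]) // 4 for m in messages)

def trim_conversation(conversation, token_limit, max_response_tokens):
    # Start deleting at index 1 so conversation[0], the system message,
    # is always preserved; del avoids the return-value copy of pop().
    while len(conversation) > 1 and count_tokens(conversation) + max_response_tokens >= token_limit:
        del conversation[1]
    return conversation

conversation = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(50):
    conversation.append({"role": "user", "content": f"question {i} " + "x" * 200})
    conversation.append({"role": "assistant", "content": f"answer {i} " + "y" * 200})

trim_conversation(conversation, token_limit=4096, max_response_tokens=250)
```

Each call keeps the transcript, plus room for the next response, under the limit while the system message always survives.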
An alternative approach is to limit the conversation duration to the max token length or to a certain number of turns. Once the max token limit is reached, the model would lose context if you allowed the conversation to continue, so you can instead prompt the user to begin a new conversation and clear the messages list, starting a brand new conversation with the full token limit available.
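A rough sketch of this reset strategy follows. The turn budget, helper name, and reset message are illustrative assumptions, not code from the article:

```python
MAX_TURNS = 10  # illustrative per-conversation turn budget

def record_turn(conversation, user_msg, assistant_msg):
    """Append a user/assistant exchange; reset when the turn budget is spent."""
    conversation.append({"role": "user", "content": user_msg})
    conversation.append({"role": "assistant", "content": assistant_msg})
    turns = (len(conversation) - 1) // 2  # exclude the system message
    if turns >= MAX_TURNS:
        # Prompt the user and clear everything except the system message,
        # making the full token limit available again.
        print("Maximum conversation length reached. Starting a new conversation.")
        del conversation[1:]
    return conversation

conversation = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(12):
    record_turn(conversation, f"question {i}", f"answer {i}")
```

After the tenth turn the transcript is cleared back to just the system message, so the two remaining turns start against a fresh context.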