You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Copy file name to clipboardExpand all lines: articles/cognitive-services/openai/how-to/chatgpt.md
+73-32Lines changed: 73 additions & 32 deletions
Display the source diff
Display the rich diff
Original file line number
Diff line number
Diff line change
@@ -1,29 +1,26 @@
1
1
---
2
-
title: How to work with the ChatGPT model (preview)
2
+
title: How to work with the Chat Markup Language (preview)
3
3
titleSuffix: Azure OpenAI
4
-
description: Learn how to work with the ChatGPT model (preview)
4
+
description: Learn how to work with Chat Markup Language (preview)
5
5
author: dereklegenzoff
6
6
ms.author: delegenz
7
7
ms.service: cognitive-services
8
8
ms.topic: conceptual
9
-
ms.date: 03/01/2023
9
+
ms.date: 03/09/2023
10
10
manager: nitinme
11
11
keywords: ChatGPT
12
12
---
13
13
14
-
# Learn how to work with the ChatGPT model (preview)
14
+
# Learn how to work with Chat Markup Language (preview)
15
15
16
-
The ChatGPT model (gpt-35-turbo) is a language model designed for conversational interfaces and the model behaves differently than previous GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT model is conversation-in and message-out. The model expects a prompt string formatted in a specific chat-like transcript format, and returns a completion that represents a model-written message in the chat.
16
+
The ChatGPT model (`gpt-35-turbo`) is a language model designed for conversational interfaces and the model behaves differently than previous GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT model is conversation-in and message-out. The model expects a prompt string formatted in a specific chat-like transcript format, and returns a completion that represents a model-written message in the chat. While the prompt format was designed specifically for multi-turn conversations, you'll find it can also work well for non-chat scenarios too.
17
17
18
-
The ChatGPT model uses the same [completion API](/azure/cognitive-services/openai/reference#completions) that you use for other models like text-davinci-002, but it requires a unique prompt format. It's important to use the new prompt format to get the best results. Without the right prompts, the model tends to be verbose and provides less useful responses.
18
+
The ChatGPT model can be used with the same [completion API](/azure/cognitive-services/openai/reference#completions) that you use for other models like text-davinci-002, but it requires a unique prompt format known as Chat Markup Language (ChatML). It's important to use the new prompt format to get the best results. Without the right prompts, the model tends to be verbose and provides less useful responses.
19
19
20
20
## Working with the ChatGPT model
21
21
22
22
The following code snippet shows the most basic way to use the ChatGPT model. We also have a UI driven experience that you can learn about in the [ChatGPT Quickstart](../chatgpt-quickstart.md).
23
23
24
-
> [!NOTE]
25
-
> OpenAI continues to improve the ChatGPT model and release new versions. During the preview of this model, we'll continue updating to the latest version of the model in place. This means that you may see small changes in the behavior of the model during the preview.
prompt="<|im_start|>system\nAssistant is a large language model trained by OpenAI.\n<|im_end|>\n<|im_start|>user\nWhat's the difference between garbanzo beans and chickpeas?\n<|im_end|>\n<|im_start|>assistant\n",
38
35
temperature=0,
39
-
max_tokens=800,
36
+
max_tokens=500,
40
37
top_p=0.5,
41
38
stop=["<|im_end|>"])
42
39
43
40
print(response['choices'][0]['text'])
44
41
```
42
+
> [!NOTE]
43
+
> The following parameters aren't available with the gpt-35-turbo model: `logprobs`, `best_of`, and `echo`. If you set any of these parameters to a value other than their default, you'll get an error.
44
+
45
+
The `<|im_end|>` token indicates the end of a message. We recommend including `<|im_end|>` token as a stop sequence to ensure that the model stops generating text when it reaches the end of the message. You can read more about the special tokens in the [Chat Markup Language (ChatML)](#chatml) section.
46
+
47
+
Consider setting `max_tokens` to a slightly higher value than normal such as 300 or 500. This ensures that the model doesn't stop generating text before it reaches the end of the message.
48
+
49
+
## Model versioning
50
+
51
+
> [!NOTE]
52
+
> `gpt-35-turbo` is equivalent to the `gpt-3.5-turbo` model from OpenAI.
53
+
54
+
Unlike previous GPT-3 and GPT-3.5 models, the `gpt-35-turbo` model will continue to be updated. When creating a [deployment](./create-resource.md#deploy-a-model) of `gpt-35-turbo`, you'll also need to specify a model version.
45
55
46
-
The `<|im_end|>` token indicates the end of a message. We recommend including `<|im_end|>` token as a stop sequence to ensure that the model stops generating text when it reaches the end of the message. When you include the `<|im_end|>` token as a stop sequence, this ensures that the model stops generating text when it reaches the end of the message.
56
+
Currently, only version `"0301"` is available. This is equivalent to the `gpt-3.5-turbo-0301` model from OpenAI. We'll continue to make updated versions available in the future. You can find model deprecation times on our [models](../concepts/models.md) page.
47
57
48
-
Consider setting `max_tokens`to a slightly higher value than normal such as 500 or 800. This ensures that the model doesn't stop generating text before it reaches the end of the message.
58
+
One thing that's important to note is that Chat Markup Language (ChatML) will continue to evolve with the new versions of the model. You may need to make updates to your prompts when you upgrade to a new version of the model.
49
59
50
-
## ChatGPT prompt format
60
+
<aid="chatml"></a>
61
+
62
+
## Working with Chat Markup Language (ChatML)
51
63
52
64
> [!NOTE]
53
-
> OpenAI continues to improve the ChatGPT model and the prompt format may change or evolve in the future. We'll keep this document updated with the latest information.
65
+
> OpenAI continues to improve the `gpt-35-turbo` model and the Chat Markup Language used with the model will continue to evolve in the future. We'll keep this document updated with the latest information.
54
66
55
-
OpenAI trained the ChatGPT model on special tokens that delineate the different parts of the prompt. The prompt starts with a system message that is used to prime the model followed by a series of messages between the user and the assistant.
67
+
OpenAI trained the gpt-35-turbo model on special tokens that delineate the different parts of the prompt. The prompt starts with a system message that is used to prime the model followed by a series of messages between the user and the assistant.
56
68
57
-
When starting a conversation, you should have a prompt that looks similar to the following code block:
69
+
The format of a basic ChatML prompt is as follows:
58
70
59
71
```
60
72
<|im_start|>system
@@ -68,12 +80,12 @@ The user’s message goes here
68
80
69
81
### System message
70
82
71
-
The system message is included at the beginning of the prompt between the `<|im_start|>system` and `<|im_end|>` tokens. This message provides the initial instructions to the model. You can provide a variety of information including:
83
+
The system message is included at the beginning of the prompt between the `<|im_start|>system` and `<|im_end|>` tokens. This message provides the initial instructions to the model. You can provide various information in the system message including:
72
84
73
85
* A brief description of the assistant
74
-
*The personality of the assistant
75
-
* Instructions for the assistant
76
-
* Data or information needed for the model
86
+
*Personality traits of the assistant
87
+
* Instructions or rules you would like the instructions to follow
88
+
* Data or information needed for the model, such as relevant questions from an FAQ
77
89
78
90
You can customize the system message for your use case or just include a basic system message. The system message is optional, but it's recommended to at least include a basic one to get the best results.
79
91
@@ -120,14 +132,14 @@ Instructions:
120
132
- If you're unsure of an answer, you can say "I don't know" or "I'm not sure" and recommend users go to the IRS website for more information.
121
133
<|im_end|>
122
134
<|im_start|>user
123
-
What is the IRS?
135
+
When are my taxes due?
124
136
<|im_end|>
125
137
<|im_start|>assistant
126
138
```
127
139
128
140
#### Using data for grounding
129
141
130
-
You can also include relevant data or information in the system message to give the model additional context for the conversation. If you only need to include a small amount of information, you can hard code it in the system message. If you have a large amount of data that the model should be aware of, you can use [embeddings](/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line) or a product like [Azure Cognitive Search](https://azure.microsoft.com/services/search/) to retrieve the most relevant information at query time.
142
+
You can also include relevant data or information in the system message to give the model extra context for the conversation. If you only need to include a small amount of information, you can hard code it in the system message. If you have a large amount of data that the model should be aware of, you can use [embeddings](/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line) or a product like [Azure Cognitive Search](https://azure.microsoft.com/services/search/) to retrieve the most relevant information at query time.
131
143
132
144
```
133
145
<|im_start|>system
@@ -142,12 +154,11 @@ Context:
142
154
What is Azure OpenAI Service?
143
155
<|im_end|>
144
156
<|im_start|>assistant
145
-
146
157
```
147
158
148
-
#### Few shot learning with ChatGPT
159
+
#### Few shot learning with ChatML
149
160
150
-
You can also give few shot examples to the model. The approach for few shot learning has changed slightly because of the new prompt format. You can now include a series of messages between the user and the assistant in the prompt as few shot examples. These examples can be used to seed answers to common questions to prime the model.
161
+
You can also give few shot examples to the model. The approach for few shot learning has changed slightly because of the new prompt format. You can now include a series of messages between the user and the assistant in the prompt as few shot examples. These examples can be used to seed answers to common questions to prime the model or teach particular behaviors to the model.
151
162
152
163
This is only one example of how you can use few shot learning with ChatGPT. You can experiment with different approaches to see what works best for your use case.
153
164
@@ -169,9 +180,40 @@ You can check the status of your tax refund by visiting https://www.irs.gov/refu
169
180
<|im_end|>
170
181
```
171
182
183
+
#### Using Chat Markup Language for non-chat scenarios
184
+
185
+
ChatML is designed to make multi-turn conversations easier to manage, but it also works well for non-chat scenarios.
186
+
187
+
For example, for an entity extraction scenario, you might use the following prompt:
188
+
189
+
```
190
+
<|im_start|>system
191
+
You are an assistant designed to extract entities from text. Users will paste in a string of text and you will respond with entities you've extracted from the text as a JSON object. Here's an example of your output format:
192
+
{
193
+
"name": "",
194
+
"company": "",
195
+
"phone_number": ""
196
+
}
197
+
<|im_end|>
198
+
<|im_start|>user
199
+
Hello. My name is Robert Smith. I’m calling from Contoso Insurance, Delaware. My colleague mentioned that you are interested in learning about our comprehensive benefits policy. Could you give me a call back at (555) 346-9322 when you get a chance so we can go over the benefits?
200
+
<|im_end|>
201
+
<|im_start|>assistant
202
+
```
203
+
204
+
205
+
## Preventing unsafe user inputs
206
+
207
+
It's important to add mitigations into your application to ensure safe use of the Chat Markup Language.
208
+
209
+
We recommend that you prevent end-users from being able to include special tokens in their input such as `<|im_start|>` and `<|im_end|>`. We also recommend that you include additional validation to ensure the prompts you're sending to the model are well formed and follow the Chat Markup Language format as described in this document.
210
+
211
+
You can also provide instructions in the system message to guide the model on how to respond to certain types of user inputs. For example, you can instruct the model to only reply to messages about a certain subject. You can also reinforce this behavior with few shot examples.
212
+
213
+
172
214
## Managing conversations with ChatGPT
173
215
174
-
The token limit of the ChatGPT model is 4096 tokens. This limit includes the token count from both the prompt and completion. The number of tokens in the prompt combined with the value of the `max_tokens` parameter must stay under 4096 or you'll receive an error.
216
+
The token limit for `gpt-35-turbo` is 4096 tokens. This limit includes the token count from both the prompt and completion. The number of tokens in the prompt combined with the value of the `max_tokens` parameter must stay under 4096 or you'll receive an error.
175
217
176
218
It’s your responsibility to ensure the prompt and completion falls within the token limit. This means that for longer conversations, you need to keep track of the token count and only send the model a prompt that falls within the token limit.
@@ -221,7 +263,7 @@ The simplest approach to staying under the token limit is to truncate the oldest
221
263
222
264
You can choose to always include as many tokens as possible while staying under the limit or you could always include a set number of previous messages assuming those messages stay within the limit. It's important to keep in mind that longer prompts take longer to generate a response and incur a higher cost than shorter prompts.
223
265
224
-
You can estimate the number of tokens in a string by using the [tiktoken](https://github.com/openai/tiktoken) Python library. While the exact encoding used by ChatGPT isn't supported yet in tiktoken, you can recreate it yourself by building off of the cl100k_base encoding.
266
+
You can estimate the number of tokens in a string by using the [tiktoken](https://github.com/openai/tiktoken) Python library as shown below.
0 commit comments