articles/cognitive-services/openai/includes/prompt-chat-completion.md (6 additions & 3 deletions)
@@ -49,7 +49,7 @@ When using the Chat Completions API, a series of messages between the User and A
| System message |Few-shot examples |
|----------------|------------------|
- | Assistant is an intelligent chatbot designed to help users answer their tax related questions.<br> Instructions <br> - Only answer questions related to taxes. <br> - If you're unsure of an answer, you can say "I don't know" or "I'm not sure" and recommend users go to the IRS website for more information.|**User** - "When do I need to file my taxes by?"<br>**Assistant** - "In 2023, you will need to file your taxes by April 18th. The date falls after the usual April 15th deadline because April 15th falls on a Saturday in 2023. For more details, see https://www.irs.gov/filing/individuals/when-to-file " <br>**User** - "How can I check the status of my tax refund?"<br>**Assistant** - "You can check the status of your tax refund by visiting https://www.irs.gov/refunds " |
+ | Assistant is an intelligent chatbot designed to help users answer their tax related questions.<br> Instructions <br> - Only answer questions related to taxes. <br> - If you're unsure of an answer, you can say "I don't know" or "I'm not sure" and recommend users go to the IRS website for more information.|**User** - "When do I need to file my taxes by?"<br>**Assistant** - "In 2023, you will need to file your taxes by April 18th. The date falls after the usual April 15th deadline because April 15th falls on a Saturday in 2023. For more details, see https://www.irs.gov/filing/individuals/when-to-file " <br><br>**User** - "How can I check the status of my tax refund?"<br>**Assistant** - "You can check the status of your tax refund by visiting https://www.irs.gov/refunds " |
The table above describes the basic prompt structure, but for a detailed explanation of the exact prompt format you will need to refer to the [Chat Completion guide](../how-to/chatgpt.md).
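For example, the system message and few-shot turns shown above map directly onto the `messages` list of a chat completion call. The following is a minimal sketch, assuming the `openai` Python package (v1.x) against an Azure OpenAI resource; the environment variable names, API version, and deployment name are placeholders to adapt to your own setup.

```python
import os
from openai import AzureOpenAI

# Placeholder connection details -- substitute your own resource values.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

# System message, few-shot user/assistant turns, then the real user question.
messages = [
    {"role": "system", "content": (
        "Assistant is an intelligent chatbot designed to help users answer their tax related questions.\n"
        "Instructions:\n"
        "- Only answer questions related to taxes.\n"
        "- If you're unsure of an answer, you can say \"I don't know\" or \"I'm not sure\" "
        "and recommend users go to the IRS website for more information."
    )},
    {"role": "user", "content": "When do I need to file my taxes by?"},
    {"role": "assistant", "content": "In 2023, you will need to file your taxes by April 18th."},
    {"role": "user", "content": "How can I check the status of my tax refund?"},
]

response = client.chat.completions.create(
    model="gpt-35-turbo",  # placeholder: the name of your chat deployment
    messages=messages,
)
print(response.choices[0].message.content)
```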
@@ -65,7 +65,8 @@ While the Chat Completion API is optimized to work with multi-turn conversations
The sequence in which information appears in the prompt matters, because the way GPT-style models are built determines how they process the input. Our research suggests that telling the model the task you want it to do at the beginning of the prompt, before sharing additional contextual information or examples, can help produce higher-quality outputs.
- Although following this technique is still generally recommended, in contrast to previous model versions (GPT-3 and prior), our testing showed that the model response with ChatGPT and GPT-4 models was the same regardless of whether the technique is utilized. In the case of the below example, we see that adding the statement “Several sources … eruption” at the beginning or the end of the prompt did not cause any change in the final model response.
+ > [!NOTE]
+ > Although following this technique is still generally recommended, in contrast to previous model versions (GPT-3 and prior), our testing showed that the model response with ChatGPT and GPT-4 models was the same regardless of whether the technique is utilized. In the case of the below example, we see that adding the statement “Several sources … eruption” at the beginning or the end of the prompt did not cause any change in the final model response.
| System message |User | Assistant |
|----------------|---------|------------|
@@ -95,13 +96,15 @@ In the example below, separators (`---` in this case) have been added between di
|----------------|---------|------------|
| You are an AI assistant that helps people find information.| You will read a paragraph, and then issue queries to a search engine in order to fact-check it. Also explain the queries.<br>---<br>PARAGRAPH<br>John Smith is married to Lucy Smith. They have five kids, and he works as a software engineer at Microsoft. What search queries should I do to fact-check this?<br>---<br>QUERIES|1. "John Smith Microsoft" - To check if John Smith is indeed employed at Microsoft.<br>2. "John Smith Lucy Smith" - To check if John Smith is married to Lucy Smith.<br>3. "John Smith children" - To check if John Smith has five children.|
+ If you're not sure what syntax to use, consider using Markdown or XML. The models have been trained on a lot of web content in XML and Markdown which may provide better results.
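If you'd like a rough illustration of that idea, the sketch below wraps the fact-checking example in XML-style tags rather than `---` separators before sending it as the user message; the tag names are arbitrary placeholders.

```python
paragraph = (
    "John Smith is married to Lucy Smith. They have five kids, and he works "
    "as a software engineer at Microsoft."
)

# XML-style tags make it unambiguous where the instructions end and the source text begins.
user_message = (
    "You will read a paragraph, and then issue queries to a search engine in order to "
    "fact-check it. Also explain the queries.\n"
    "<paragraph>\n"
    f"{paragraph}\n"
    "</paragraph>\n"
    "<queries>"
)
print(user_message)  # send this as the user message of a chat completion call
```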
## Break the task down
Large language models (LLMs) often perform better if the task is broken down into smaller steps. For example, in the search query prompt referenced earlier, the prompt can be restructured so that the model is first instructed to extract relevant facts, and then instructed to generate search queries that can be used to verify those facts.
| System message |User | Assistant |
|----------------|---------|------------|
- | You are an AI assistant that helps people find information. | You will read a paragraph, and then issue queries to a search engine in order to fact-check it.<br>---<br>PARAGRAPH<br>John Smith is married to Lucy Smith. They have five kids, and he works as a software engineer at Microsoft. What search queries should I do to fact-check this?<br>Now you will extract factual claims, and issue queries to fact-check them. When issuing a query, use the function SEARCH("query")<br><br>FACTUAL CLAIMS | 1. John Smith is married to Lucy Smith <br>2. They have five kids<br>3. He works as a software engineer at Microsoft<br><br>QUERIES<br>SEARCH("John Smith married Lucy Smith")<br>SEARCH("John Smith family size")<br>SEARCH("John Smith Microsoft software engineer") |
+ | You are an AI assistant that helps people find information. | You will read a paragraph, and then issue queries to a search engine in order to fact-check it.<br>---<br>PARAGRAPH<br>John Smith is married to Lucy Smith. They have five kids, and he works as a software engineer at Microsoft. What search queries should I do to fact-check this?<br>---<br>Now you will extract factual claims, and issue queries to fact-check them. When issuing a query, use the function SEARCH("query")<br><br>FACTUAL CLAIMS | 1. John Smith is married to Lucy Smith <br>2. They have five kids<br>3. He works as a software engineer at Microsoft<br><br>QUERIES<br>SEARCH("John Smith married Lucy Smith")<br>SEARCH("John Smith family size")<br>SEARCH("John Smith Microsoft software engineer") |
Notice the use of clear syntax to differentiate the sections and prime the output. In this simple example, breaking the task down from one to two steps is not very dramatic, but when trying to do this for a larger piece of text with many factual claims, breaking the task down can make a significant difference.
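One way to drive this breakdown programmatically is with two chat calls, appending the claims returned by the first call to the conversation before asking for queries. This is only a sketch: the client setup, deployment name, and exact wording are placeholder assumptions.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # placeholder
    api_version="2024-02-01",
)
deployment = "gpt-35-turbo"  # placeholder chat deployment name

paragraph = (
    "John Smith is married to Lucy Smith. They have five kids, and he works "
    "as a software engineer at Microsoft."
)

# Step 1: ask only for the factual claims.
messages = [
    {"role": "system", "content": "You are an AI assistant that helps people find information."},
    {"role": "user", "content": f"Extract the factual claims from this paragraph:\n---\n{paragraph}\n---\nFACTUAL CLAIMS:"},
]
claims = client.chat.completions.create(model=deployment, messages=messages).choices[0].message.content

# Step 2: feed the claims back and ask for search queries.
messages += [
    {"role": "assistant", "content": claims},
    {"role": "user", "content": 'Now issue queries to fact-check each claim. When issuing a query, use the function SEARCH("query").\nQUERIES:'},
]
queries = client.chat.completions.create(model=deployment, messages=messages).choices[0].message.content

print(claims)
print(queries)
```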
articles/cognitive-services/openai/includes/prompt-completion.md (8 additions & 6 deletions)
@@ -18,11 +18,13 @@ keywords: ChatGPT
The sequence in which information is fed into the prompt matters. Our research suggests that telling the model the task you want it to do at the beginning of the prompt, before sharing additional contextual information or examples, can help produce higher-quality outputs.
+ In the first example below, a key part of the instructions is shared only at the end, and the resulting completion is incorrect.
|Prompt|Completion|
|------|----------|
- |Your task is to verify if the statement "Several sources mention a chance of another large eruption" is supported by a specific quote from the following set of snippets.<br>---<br>SNIPPETS<br>[1] 14 percent chance of megaquake hitting Seattle, experts say SEATTLE - There's a 14 percent chance of a magnitude 9 Cascadia earthquake hitting Seattle in the next 50 years, the U.S. Geological Survey estimates. "Unfortunately, we are unable to...<br><br>[2] Earthquake experts lay out latest outlook for Seattle's 'Really Big One’<br>“We say that there's approximately a 14% chance of another approximately magnitude-9 earthquake occurring in the next 50 years,” said a geophysicist at the University of Washington...<br>|Yes, the statement is directly implied by the snippets.|
+ |Your task is to verify if a statement is supported by a specific quote from the following set of snippets.<br>---<br>SNIPPETS<br>[1] 14 percent chance of megaquake hitting Seattle, experts say SEATTLE - There's a 14 percent chance of a magnitude 9 Cascadia earthquake hitting Seattle in the next 50 years, the U.S. Geological Survey estimates. "Unfortunately, we are unable to...<br><br>[2] Earthquake experts lay out latest outlook for Seattle's 'Really Big One’<br>“We say that there's approximately a 14% chance of another approximately magnitude-9 earthquake occurring in the next 50 years,” said a geophysicist at the University of Washington...<br>|Yes, the statement is directly implied by the snippets.|
- In this next example, we give the exact statement we want to check (“Several sources mention a chance of another eruption”) before we give the snippet context. This allows the token representations extracted from the snippets to be tailored to the statement we are checking. Now the resulting model response is accurate:
+ In this next example, we give the exact statement we want to check (“Several sources mention a chance of another eruption”) *before* we give the snippet context. This allows the token representations extracted from the snippets to be tailored to the statement we are checking. Now the resulting model response is accurate:
|Prompt|Completion|
|------|----------|
@@ -62,7 +64,7 @@ Models often perform better if the task is broken down into smaller steps. For e
|Prompt|Completion|
|------|----------|
- |You will read a paragraph, and then issue queries to a search engine in order to fact-check it.<br>---<br>PARAGRAPH<br>John Smith is married to Lucy Smith. They have five kids, and he works as a software engineer at Microsoft. What search queries should I do to fact-check this?<br>Now you will extract factual claims, and issue queries to fact-check them. When issuing a query, use the function SEARCH("query")<br><br>FACTUAL CLAIMS | 1. John Smith is married to Lucy Smith <br>2. They have five kids<br>3. He works as a software engineer at Microsoft<br><br>QUERIES<br>SEARCH("John Smith married Lucy Smith")<br>SEARCH("John Smith family size")<br>SEARCH("John Smith Microsoft software engineer") |
+ |You will read a paragraph, and then issue queries to a search engine in order to fact-check it.<br>---<br>PARAGRAPH<br>John Smith is married to Lucy Smith. They have five kids, and he works as a software engineer at Microsoft. What search queries should I do to fact-check this?<br>---<br>Now you will extract factual claims, and issue queries to fact-check them. When issuing a query, use the function SEARCH("query")<br><br>FACTUAL CLAIMS | 1. John Smith is married to Lucy Smith <br>2. They have five kids<br>3. He works as a software engineer at Microsoft<br><br>QUERIES<br>SEARCH("John Smith married Lucy Smith")<br>SEARCH("John Smith family size")<br>SEARCH("John Smith Microsoft software engineer") |
Notice the use of clear syntax to differentiate the sections and prime the output. In this simple example, breaking the task down from one to two steps is not very dramatic, but when trying to do this for a larger piece of text with many factual claims, breaking the task down can make a significant difference.
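Here's a minimal sketch of sending the broken-down prompt above through the Completions API, priming the output with the FACTUAL CLAIMS header. It assumes the `openai` v1.x Python package; the deployment name and connection details are placeholders.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # placeholder
    api_version="2024-02-01",
)

prompt = (
    "You will read a paragraph, and then issue queries to a search engine in order to fact-check it.\n"
    "---\n"
    "PARAGRAPH\n"
    "John Smith is married to Lucy Smith. They have five kids, and he works as a software "
    "engineer at Microsoft. What search queries should I do to fact-check this?\n"
    "---\n"
    "Now you will extract factual claims, and issue queries to fact-check them. "
    'When issuing a query, use the function SEARCH("query")\n\n'
    "FACTUAL CLAIMS"  # priming the first section of the model's output
)

response = client.completions.create(
    model="gpt-35-turbo-instruct",  # placeholder completions deployment name
    prompt=prompt,
    max_tokens=300,
)
print(response.choices[0].text)
```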
68
70
@@ -132,18 +134,18 @@ Similarly, if you ask the model to extract factual statements from a paragraph,
|Example details| Prompt | Completion|
|---------------|-------|------------|
- |Example of prompt directing model response to fit a defined structure:|Output format:<br>{<br>"TOPIC_NAME": "",<br>"HEADLINES": [],<br>"REASON": ""<br>}<br>Input news headlines:<br>1. "From books to presentations in 10s with AR + ML"<br>2. "Demo from 1993 of 32-year-old Yann LeCun showing off the World's first Convolutional Network for Text Recognition"<br>3. "First Order Motion Model applied to animate paintings"<br>4. "Robinhood and other brokers literally blocking purchase of $GME, $NOK, $BB, $AMC; allow sells"<br>5. "United Airlines stock down over 5% premarket trading"<br>6. "Bitcoin was nearly $20,000 a year ago today"<br><br>Output:| "Demo from 1993 of 32-year-old Yann LeCun showing off the World's first Convolutional Network for Text Recognition", "First Order Motion Model applied to animate paintings"],"REASONING": "These headlines all contain references to Artificial Intelligence and Machine Learning, such as AR, ML, Convolutional Network, and First Order Motion Model." }<br>{"TOPIC_NAME": "Stock Market",<br>"HEADLINES": ["Robinhood and other brokers literally blocking purchase of $GME, $NOK, $BB, $AMC; allow sells", "United Airlines stock down over 5% premarket trading", "Bitcoin was nearly $20,000 a year ago today"],<br>"REASONING": "These headlines all contain references to the stock market, such as Robinhood, United Airlines, and Bitcoin." }|
+ |Example of prompt directing model response to fit a defined structure:|Output format:<br>{<br>"TOPIC_NAME": "",<br>"HEADLINES": [],<br>"REASON": ""<br>}<br>Input news headlines:<br>1. "From books to presentations in 10s with AR + ML"<br>2. "Demo from 1993 of 32-year-old Yann LeCun showing off the World's first Convolutional Network for Text Recognition"<br>3. "First Order Motion Model applied to animate paintings"<br>4. "Robinhood and other brokers literally blocking purchase of $GME, $NOK, $BB, $AMC; allow sells"<br>5. "United Airlines stock down over 5% premarket trading"<br>6. "Bitcoin was nearly $20,000 a year ago today"<br><br>Output:|{"TOPIC_NAME": "Artificial Intelligence and Machine Learning",<br>"HEADLINES": ["From books to presentations in 10s with AR + ML",<br>"Demo from 1993 of 32-year-old Yann LeCun showing off the World's first Convolutional Network for Text Recognition", "First Order Motion Model applied to animate paintings"],"REASONING": "These headlines all contain references to Artificial Intelligence and Machine Learning, such as AR, ML, Convolutional Network, and First Order Motion Model." }<br>{"TOPIC_NAME": "Stock Market",<br>"HEADLINES": ["Robinhood and other brokers literally blocking purchase of $GME, $NOK, $BB, $AMC; allow sells", "United Airlines stock down over 5% premarket trading", "Bitcoin was nearly $20,000 a year ago today"],<br>"REASONING": "These headlines all contain references to the stock market, such as Robinhood, United Airlines, and Bitcoin." }|
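As a rough sketch, an output-format prompt like the one above can be assembled programmatically: spell out the JSON template, number the input headlines, and end with the `Output:` cue. The field names mirror the example, and the resulting string would be sent as the `prompt` of a completions call like the earlier sketch.

```python
import json

headlines = [
    "From books to presentations in 10s with AR + ML",
    "Demo from 1993 of 32-year-old Yann LeCun showing off the World's first Convolutional Network for Text Recognition",
    "First Order Motion Model applied to animate paintings",
    "Robinhood and other brokers literally blocking purchase of $GME, $NOK, $BB, $AMC; allow sells",
    "United Airlines stock down over 5% premarket trading",
    "Bitcoin was nearly $20,000 a year ago today",
]

# The empty template shows the model exactly what shape of JSON to produce.
output_template = json.dumps({"TOPIC_NAME": "", "HEADLINES": [], "REASON": ""}, indent=1)
numbered = "\n".join(f'{i}. "{h}"' for i, h in enumerate(headlines, start=1))

prompt = f"Output format:\n{output_template}\nInput news headlines:\n{numbered}\n\nOutput:"
print(prompt)
```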
## Adjusting Temperature and Top_P parameters
Changing the temperature parameter changes how much risk the model takes in its output. A higher value, for example 0.7, makes the output more random and produces more divergent responses, while a lower value, like 0.2, makes the output more focused and concrete. For example, a fictional story could be generated using a higher temperature, whereas a much lower temperature is recommended for generating a legal document.
- In the next example we test two model responses at different temperature settings of 0.9 and 0.1 respectively. While the evaluation of creative content such as the pieces of text below is a subjective exercise and opinions may vary, the difference in model response to the same prompt solely from changing the temperature parameter is apparent.
+ In the next example we test two model responses at different temperature settings of 0.1 and 0.9 respectively. While the evaluation of creative content such as the pieces of text below is a subjective exercise and opinions may vary, the difference in model response to the same prompt solely from changing the temperature parameter is apparent.
|Instruction Type| Prompt | Completion|
|----------------|--------|-----------|
| Model response for temperature setting 0.1:| Write a product launch email for new AI-powered headphones that are priced at $79.99 and available at Best Buy, Target and Amazon.com. The target audience is tech-savvy music lovers and the tone is friendly and exciting.<br>1. What should be the subject line of the email?<br>2. What should be the body of the email?|Subject Line: Introducing the AI-Powered Headphones You've Been Waiting For!<br><br>Dear Tech-Savvy Music Lovers,<br><br>-We are thrilled to announce the launch of our new AI-powered headphones! These headphones are designed to provide you with the ultimate listening experience, with features like noise cancellation, adjustable sound settings, and more.<br><br>Our AI-powered headphones are now available at Best Buy, Target, and Amazon.com for just $79.99. With this unbeatable price, you can enjoy the highest quality sound without breaking the bank.<br><br>Our AI-powered headphones are perfect for anyone who loves music. Whether you're a professional musician or just a casual listener, you'll appreciate the superior sound quality and convenience of these headphones.<br><br>We know you'll love our AI-powered headphones, so don't wait - get yours today!<br><br>Sincerely,<br>[Your Company Name]|
- |Model response for temperature setting 0.9:| Write a product launch email for new AI-powered headphones that are priced at $79.99 and available at Best Buy, Target and Amazon.com. The target audience is tech-savvy music lovers and the tone is friendly and exciting.<br><br>1. What should be the subject line of the email?<br>2. What should be the body of the email?| Subject Line: Upgrade your Music Listening Experience with AI-powered Headphones<br><br>Greetings Tech-savvy Music Lovers!<br><br>We have some exciting news to share with you – the newest AI-powered headphones are here! Our newest headphones feature advanced AI technology to give you the best sound that you’ve ever heard. They come with a crystal-clear sound, deep bass response and are designed to make your music listening experience enjoyable, no matter what type of music you listen to.<br>What’s more, they are now available at Best Buy, Target and Amazon.com for just $79.99, so you can enjoy the highest sound quality without breaking the bank.<br>But that’s not all! Our headphones are also packed with several other amazing features such as noise-cancellation technology and a hands-free microphone for calls, making them truly unbeatable.<br>So why wait? Upgrade your music listening experience and enjoy the best sound your ears have ever heard. Our AI-powered headphones are now available at Best Buy, Target and Amazon.com, so get yours today!<br>We hope you will enjoy these headphones as much as we do.<br>Happy Listening!<br>The AI-powered Headphones Team|
+ |Model response for temperature setting 0.9:| Write a product launch email for new AI-powered headphones that are priced at $79.99 and available at Best Buy, Target and Amazon.com. The target audience is tech-savvy music lovers and the tone is friendly and exciting.<br><br>1. What should be the subject line of the email?<br>2. What should be the body of the email?| Subject Line: Upgrade your Music Listening Experience with AI-powered Headphones<br><br>Greetings Tech-savvy Music Lovers!<br><br>We have some exciting news to share with you – the newest AI-powered headphones are here! Our newest headphones feature advanced AI technology to give you the best sound that you’ve ever heard. They come with a crystal-clear sound, deep bass response and are designed to make your music listening experience enjoyable, no matter what type of music you listen to.<br><br>What’s more, they are now available at Best Buy, Target and Amazon.com for just $79.99, so you can enjoy the highest sound quality without breaking the bank.<br><br>But that’s not all! Our headphones are also packed with several other amazing features such as noise-cancellation technology and a hands-free microphone for calls, making them truly unbeatable.<br><br>So why wait? Upgrade your music listening experience and enjoy the best sound your ears have ever heard. Our AI-powered headphones are now available at Best Buy, Target and Amazon.com, so get yours today!<br><br>We hope you will enjoy these headphones as much as we do.<br><br>Happy Listening!<br><br>The AI-powered Headphones Team|
Top_probability is another parameter and is similar to temperature in that it also controls the randomness of the model response, but it does so in a different way. Experiment with different model responses by varying temperature and Top_P, but not both at once.
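As a small sketch of that advice, the snippet below sends the same product-launch prompt twice, changing only `temperature` and leaving `top_p` at its default. It assumes the `openai` v1.x Python package; the deployment name and connection details are placeholders.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # placeholder
    api_version="2024-02-01",
)

prompt = (
    "Write a product launch email for new AI-powered headphones that are priced at $79.99 "
    "and available at Best Buy, Target and Amazon.com. The target audience is tech-savvy "
    "music lovers and the tone is friendly and exciting.\n"
    "1. What should be the subject line of the email?\n"
    "2. What should be the body of the email?"
)

# Vary temperature only; leave top_p untouched rather than changing both at once.
for temperature in (0.1, 0.9):
    response = client.completions.create(
        model="gpt-35-turbo-instruct",  # placeholder completions deployment name
        prompt=prompt,
        temperature=temperature,
        max_tokens=400,
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].text)
```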