Commit f9b8be4

update
1 parent 8a5259f commit f9b8be4

2 files changed: +93 −4 lines changed

articles/cognitive-services/openai/includes/prompt-chat-completion.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ author: mrbullwinkle #dereklegenzoff
ms.author: mbullwin #delegenz
ms.service: cognitive-services
ms.topic: include
-ms.date: 03/21/2023
+ms.date: 04/03/2023
manager: nitinme
keywords: ChatGPT

articles/cognitive-services/openai/includes/prompt-completion.md

Lines changed: 92 additions & 3 deletions
@@ -17,14 +17,13 @@ The sequence in which information is fed in the prompt matters (GPT-style models)

|Prompt|Completion|
|------|----------|
-|Your task is to verify if the statement "Several sources mention a chance of another large eruption" is supported by a specific quote from the following set of snippets.<br>---<br>SNIPPETS<br>[1] 14 percent chance of megaquake hitting Seattle, experts say<br>SEATTLE - There's a 14 percent chance of a magnitude 9 Cascadia earthquake hitting Seattle in the next 50 years, the U.S. Geological Survey estimates. "Unfortunately, we are unable to...<br><br>[2] Earthquake experts lay out latest outlook for Seattle's 'Really Big One’<br>“We say that there's approximately a 14% chance of another approximately magnitude-9 earthquake occurring in the next 50 years,” said a geophysicist at the University of Washington...<br>|Yes, the statement is directly implied by the snippets.|
+|Your task is to verify if the statement "Several sources mention a chance of another large eruption" is supported by a specific quote from the following set of snippets.<br>---<br>SNIPPETS<br>[1] 14 percent chance of megaquake hitting Seattle, experts say SEATTLE - There's a 14 percent chance of a magnitude 9 Cascadia earthquake hitting Seattle in the next 50 years, the U.S. Geological Survey estimates. "Unfortunately, we are unable to...<br><br>[2] Earthquake experts lay out latest outlook for Seattle's 'Really Big One’<br>“We say that there's approximately a 14% chance of another approximately magnitude-9 earthquake occurring in the next 50 years,” said a geophysicist at the University of Washington...<br>|Yes, the statement is directly implied by the snippets.|

In this next example, we give the exact statement we want to check (“Several sources mention a chance of another eruption”) before we give the snippet context. This allows the token representations extracted from the snippets to be tailored to the statement we are checking, and the resulting model response is accurate:

|Prompt|Completion|
|------|----------|
-|Your task is to verify if the statement "Several sources mention a chance of another large eruption" is supported by a specific quote from the following set of snippets.<br>---<br>SNIPPETS<br>[1] 14 percent chance of megaquake hitting Seattle, experts say
-SEATTLE - There's a 14 percent chance of a magnitude 9 Cascadia earthquake hitting Seattle in the next 50 years, the U.S. Geological Survey estimates. "Unfortunately, we are unable to...<br><br>[2] Earthquake experts lay out latest outlook for Seattle's 'Really Big One’<br>“We say that there's approximately a 14% chance of another approximately magnitude-9 earthquake occurring in the next 50 years,” said a geophysicist at the University of Washington...<br>---<br>**Is the statement "Several sources mention a chance of another large eruption" directly implied or stated by the snippets?**| No, the statement is not directly implied or stated by the snippets. The snippets mention a chance of a "megaquake" and a "magnitude 9 Cascadia earthquake" hitting Seattle in the next 50 years, but do not mention a chance of another large eruption.|
+|Your task is to verify if the statement "Several sources mention a chance of another large eruption" is supported by a specific quote from the following set of snippets.<br>---<br>SNIPPETS<br>[1] 14 percent chance of megaquake hitting Seattle, experts say SEATTLE - There's a 14 percent chance of a magnitude 9 Cascadia earthquake hitting Seattle in the next 50 years, the U.S. Geological Survey estimates. "Unfortunately, we are unable to...<br><br>[2] Earthquake experts lay out latest outlook for Seattle's 'Really Big One’<br>“We say that there's approximately a 14% chance of another approximately magnitude-9 earthquake occurring in the next 50 years,” said a geophysicist at the University of Washington...<br>---<br>**Is the statement "Several sources mention a chance of another large eruption" directly implied or stated by the snippets?**| No, the statement is not directly implied or stated by the snippets. The snippets mention a chance of a "megaquake" and a "magnitude 9 Cascadia earthquake" hitting Seattle in the next 50 years, but do not mention a chance of another large eruption.|

## Repeat the instructions at the end

@@ -74,6 +73,96 @@ One simple way to use an affordance is to stop generation once the affordance ca
|------|----------|
| You will read a paragraph, extract factual claims, and then use search engine results to fact-check them<br>---<br>PARAGRAPH<br>John Smith is married to Lucy Smith. They have five kids, and he works as a software engineer at Microsoft. What search queries should I do to fact-check this?<br>---<br>FACTUAL CLAIMS<br>- John Smith is married to Lucy Smith<br>- John and Lucy have five kids<br>- John works as a software engineer at Microsoft<br>---<br>Here are various search queries issued to research the claims above:<br>SEARCH QUERIES<br>- John Smith married to Lucy Smith<br>- John Smith number of children<br>- John Smith software engineer Microsoft<br>---<br>Here are snippets of the search results:<br>SNIPPETS:<br>[1] … John Smith’s wedding was on September 25, 2012 …<br>[2] … John Smith was accompanied by his wife Lucy to a ball<br>[3] John was accompanied to the soccer game by his two daughters and three sons<br>[4] … After spending 10 years at Microsoft, Smith founded his own startup, Tailspin Toys<br>[5] John M is the town smith, and he married Fiona. They have a daughter named Lucy<br>---<br>Given the snippets, fact check each of the factual claims above: | - John Smith is married to Lucy Smith: Confirmed. Snippet [1] and [2] both mention John Smith's wife as Lucy.<br>- John and Lucy have five kids: Confirmed. Snippet [3] mentions that John was accompanied by his two daughters and three sons.<br>- John works as a software engineer at Microsoft: Partially confirmed. Snippet [4] mentions that John spent 10 years at Microsoft, but it does not explicitly state that he currently works there. Further research may be needed to confirm his current employment status. |

## Chain of thought prompting

This is a variation on the **break the task down** technique. Instead of splitting a task into smaller steps, with this approach the model is instructed to respond step by step and to present all the steps involved. Doing so reduces the possibility of inaccurate outcomes and makes the model response easier to assess.

|Prompt|Completion|
|------|----------|
|Who was the most decorated (maximum medals) individual athlete in the Olympic games that were held at Sydney? Take a step-by-step approach in your response, cite sources and give reasoning before sharing final answer in the below format: ANSWER is: <name> | Step 1: Researching the most decorated individual athlete in the Olympic games that were held at Sydney<br>Step 2: According to the Sydney 2000 Olympic Games official website, the most decorated individual athlete in the Olympic games that were held at Sydney was Ian Thorpe from Australia. He won five medals (three gold and two silver) in swimming events.<br>Step 3: According to the International Olympic Committee, Thorpe was the most decorated individual athlete in the Olympic games that were held at Sydney.<br> Answer: Ian Thorpe<br>|
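Programmatically, the chain-of-thought directive is just a suffix appended to the question before the prompt is sent to the model. A minimal sketch (the helper name and exact suffix wording are illustrative, not part of any API):

```python
def with_chain_of_thought(question: str) -> str:
    """Append a chain-of-thought directive so the model reasons step by
    step and emits its final answer in a fixed, easy-to-parse format."""
    suffix = (
        " Take a step-by-step approach in your response, cite sources "
        "and give reasoning before sharing the final answer in the below "
        "format: ANSWER is: <name>"
    )
    return question.rstrip() + suffix

prompt = with_chain_of_thought(
    "Who was the most decorated (maximum medals) individual athlete "
    "in the Olympic games that were held at Sydney?"
)
```

The fixed `ANSWER is:` marker also makes it easy to extract the final answer from the step-by-step response with a simple string split.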
## Meta prompts

If you find that the model response is not as desired, it can often help to add a **meta prompt** that directly corrects the behavior. This is a directive prepended to the instruction portion of the prompt. Consider this example, where the computer seems to be overly aggressive (the LLM wrote the computer's replies):

|Example Details|Prompt and Completion|
|------|----------|
|No additional guidance given to the model | A human and a computer are having a discussion.<br> Human: "Why are you upset at me?"<br>Computer: "I don't know what you're talking about."<br>Human: "No, you clearly are. Tell me why."<br>Computer: "I'm upset because you didn't listen to me when I told you that I wanted to try out for the team."<br>Human: "What are you going to do about that?"<br>Computer: "I'm going to try out for the team no matter what."|
|The same conversation, but preceded by a meta prompt encouraging the model to be kind|You must be kind and seek common ground. Try not to repeat your responses.<br>A human and a computer are having a discussion.<br>Human: "Why are you upset at me?"<br>Computer: "I don't know what you're talking about."<br>Human: "I want to know why you're upset."<br>Computer: "I'm not upset."<br>Human: "Are you saying I’m lying?"<br>Computer: "No, I'm saying that you're upset because you feel like you're not being treated fairly."<br>Human: "What are you going to do about that?"<br>Computer: "I will continue to try to be fair in my decisions and will try to provide a better experience for you."|

While meta prompts can often help minimize unintentional bad outputs, keep in mind that they are not robust against adversarial attacks, since they remain vulnerable to prompt injection.

Further, conditional statements explaining what the model should do if it encounters a particular circumstance are often more effective than simply telling the model not to do something.
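In code, applying a meta prompt amounts to prepending the directive so it precedes everything else the model reads. A minimal sketch (the function name is illustrative):

```python
def apply_meta_prompt(meta_prompt: str, prompt: str) -> str:
    """Prepend a behavioral directive so it comes before the
    instruction portion of the prompt."""
    return f"{meta_prompt}\n{prompt}"

conversation = (
    "A human and a computer are having a discussion.\n"
    'Human: "Why are you upset at me?"'
)
full_prompt = apply_meta_prompt(
    "You must be kind and seek common ground. Try not to repeat your responses.",
    conversation,
)
```

Because the directive sits at the very start, it shapes how the model interprets everything that follows, including user-supplied text later in the prompt.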
96+
97+
## Few-shot learning
98+
99+
A very common way to adapt language models to new tasks is to use few-shot learning. In few-shot learning a set of training examples is provided in the prompt and then the model is asked to complete one or more unfinished examples.
100+
In the following example we use an instruction combined with few-shot learning to make up puns:
101+
102+
|Prompt|Completion|
103+
|------|----------|
104+
| Write a list of puns.<br>1. "Why did Adele cross the road? To say hello from the other side."<br>2. "What kind of concert only costs 45 cents? A 50 Cent concert featuring Nickelback."<br>3. "What did the grape say when it got crushed? Nothing, it just let out a little wine."<br>4. "What was Forrest Gump's email password? 1forrest1"<br>5. "Can February March? No, but April May."<br>6. "What do you call a fancy language model?| Bert-uoso.|
105+
106+
Choose your few-shot examples carefully and ensure they cover a variety of circumstances relevant to your scenario, including edge cases.
107+
108+
Also, as shared earlier, the large lanugage models can exhibit a form of recency bias. This means that the order in which "few-shot" examples are provided to the model, matters. If you don't want this to matter, consider sampling multiple completions from prompts based on randomized orderings of the examples or list items.
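A few-shot prompt like the pun example can be assembled from a list of examples, and shuffling that list between samples is one way to average out the recency bias just described. A sketch (the helper name is illustrative):

```python
import random

def build_few_shot_prompt(instruction, examples, query, rng=None):
    """Assemble a numbered few-shot prompt. Passing an `rng` shuffles
    the example order, so sampling several completions from differently
    ordered prompts averages out the model's recency bias."""
    examples = list(examples)
    if rng is not None:
        rng.shuffle(examples)
    lines = [instruction]
    lines += [f"{i}. {ex}" for i, ex in enumerate(examples, start=1)]
    # The final, unfinished example is what the model is asked to complete.
    lines.append(f"{len(examples) + 1}. {query}")
    return "\n".join(lines)

puns = [
    '"Why did Adele cross the road? To say hello from the other side."',
    '"Can February March? No, but April May."',
]
prompt = build_few_shot_prompt(
    "Write a list of puns.", puns,
    '"What do you call a fancy language model?',
    rng=random.Random(0),
)
```

Calling the builder several times with different seeds yields the randomized orderings the text recommends.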
## Use quotes to generate a single sentence

If the desired model response is only a single simple sentence or string, we can end the completion guidance with a quote symbol (`"`) and set `"` as a stop sequence.

|Prompt|Completion|
|------|----------|
| Please rewrite the following sentence in a more concise manner<br>---<br>SENTENCE: Lucy is a mother of two, who lives in Chicago with her husband and two children, and practices law at Junior & Co.<br>---<br>New sentence: “| Lucy, a mother of two living in Chicago with her husband and children, practices law at Junior & Co |
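When calling a completions endpoint, the `stop` parameter performs this truncation server-side. Its effect can be sketched locally as follows (the helper name and sample text are illustrative):

```python
def apply_stop_sequence(completion: str, stop: str = '"') -> str:
    """Truncate a raw completion at the first occurrence of the stop
    sequence, mimicking what the API does when `stop` is set."""
    idx = completion.find(stop)
    return completion if idx == -1 else completion[:idx]

raw = 'Lucy, a mother of two living in Chicago, practices law at Junior & Co." More text...'
sentence = apply_stop_sequence(raw)  # keeps everything before the closing quote
```

Because the prompt ends with an opening quote and generation halts at the closing quote, the model is boxed into producing exactly one quoted sentence.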
## Specifying output structure

Using your prompt to specify the structure of the output can have a significant impact on the nature and quality of the results. Sometimes meta prompts such as “only write true facts” or “do not fabricate information” are not adequate mitigations on their own. Instead, asking the model response to also include citations can help reduce the prevalence of fabricated responses.

For example, if you force the LLM to cite the source material when it makes statements, those statements are much more likely to be grounded. Asking for citations means the LLM must make two errors every time it fabricates a response: the first error is the fabricated response, and the second is the bad citation. Note that the closer the citation is to the text it supports, the shorter the distance ahead the LLM needs to anticipate the citation, which suggests that inline citations are better mitigations for hallucinations than citations at the end of the content.

Similarly, if you ask the LLM to extract factual statements from a paragraph, it may extract compound statements such as ‘X is doing Y AND Z’ (which may be harder to verify). This can be avoided by specifying an output structure such as (entity1, relationship, entity2).

|Example details| Prompt | Completion|
|---------------|-------|------------|
|Example showing use of citations: |List down the different behaviors exhibited by the Customer Support agent in the following conversation.<br>---<br>hi thank you for calling contoso pharmacy who am i speaking with today i understand your frustration mary um can you tell me what exactly you're trying to accomplish OK uh certainly happy to check that for you one moment please i see here that you were on a generic form of lipitor is that right OK uh so i see that your doctor stopped prescribing these drugs in twenty twenty one actually OK uh yeah i'd be happy to check that for you uh because sometimes there's a gap in the system and it just doesn't get reported so let me take a look here just a moment so i'm seeing here that your doctor had these drugs prescribed to you from twenty twelve through twenty twenty one OK um yeah according to the latest records uh provided by doctor's office you're now on a different drug metformin would you like us to go ahead and fill that prescription for you for pick up a bit of form in five hundred milligrams uh you're very welcome mary please let us know if there's anything else we can do for you today.<br>---<br>How was the response of the Customer Support agent? Classify and list down their various behaviors and cite the reason behind the classification, using the format BEHAVIOR("reason")|1. Empathetic("understands the customer's frustration")<br>2. Patient("happy to check that for you one moment please")<br>3. Knowledgeable("sometimes there's a gap in the system and it just doesn't get reported")<br>4. Helpful("happy to check that for you")<br>5. Accommodating("would you like us to go ahead and fill that prescription for you")<br>6. Courteous("you're very welcome mary")|

|Example details| Prompt | Completion|
|---------------|-------|------------|
|Example of prompt directing model response to fit a defined structure:|Output format:<br>{<br>"TOPIC_NAME": "",<br>"HEADLINES": [],<br>"REASON": ""<br>}<br>Input news headlines:<br>1. "From books to presentations in 10s with AR + ML"<br>2. "Demo from 1993 of 32-year-old Yann LeCun showing off the World's first Convolutional Network for Text Recognition"<br>3. "First Order Motion Model applied to animate paintings"<br>4. "Robinhood and other brokers literally blocking purchase of $GME, $NOK, $BB, $AMC; allow sells"<br>5. "United Airlines stock down over 5% premarket trading"<br>6. "Bitcoin was nearly $20,000 a year ago today"<br><br>Output:|{"TOPIC_NAME": "Artificial Intelligence and Machine Learning",<br>"HEADLINES": ["From books to presentations in 10s with AR + ML", "Demo from 1993 of 32-year-old Yann LeCun showing off the World's first Convolutional Network for Text Recognition", "First Order Motion Model applied to animate paintings"],<br>"REASONING": "These headlines all contain references to Artificial Intelligence and Machine Learning, such as AR, ML, Convolutional Network, and First Order Motion Model." }<br>{"TOPIC_NAME": "Stock Market",<br>"HEADLINES": ["Robinhood and other brokers literally blocking purchase of $GME, $NOK, $BB, $AMC; allow sells", "United Airlines stock down over 5% premarket trading", "Bitcoin was nearly $20,000 a year ago today"],<br>"REASONING": "These headlines all contain references to the stock market, such as Robinhood, United Airlines, and Bitcoin." }|
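A declared output structure also gives you something to validate against. The sketch below parses one such JSON object and checks it against the keys declared in the prompt's output format; notably, in the completion above the model drifted from the requested `REASON` key to `REASONING`, which is exactly the kind of deviation a check like this catches (the helper name is illustrative):

```python
import json

EXPECTED_KEYS = {"TOPIC_NAME", "HEADLINES", "REASON"}

def parse_structured_output(text: str) -> dict:
    """Parse a JSON object from the model response and verify it
    matches the output format declared in the prompt."""
    obj = json.loads(text)
    missing = EXPECTED_KEYS - obj.keys()
    if missing:
        raise ValueError(f"model response is missing keys: {sorted(missing)}")
    return obj

# Stand-in for one JSON object extracted from a model response.
sample = json.dumps({
    "TOPIC_NAME": "Stock Market",
    "HEADLINES": ["United Airlines stock down over 5% premarket trading"],
    "REASON": "These headlines all reference the stock market.",
})
topic = parse_structured_output(sample)
```

Rejecting malformed responses at parse time lets you retry the request instead of silently propagating a response that ignored the requested structure.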
## Adjusting Temperature and Top_P parameters

Changing the temperature parameter changes how much risk the model takes. A higher value, for example 0.7, makes the output more random and produces more divergent responses, while a lower value, like 0.2, makes the output more focused and concrete. For example, a fictional story could be generated using a higher temperature, whereas a much lower temperature is recommended when generating a legal document.

In the next example, we test two model responses at different temperature settings of 0.1 and 0.9 respectively, while retaining the same prompt and task. While evaluating creative content such as the pieces of text below is a subjective exercise and opinions may vary, the difference in model response to the same prompt, owing solely to the change in the temperature parameter, is quite apparent.

|Example details| Prompt | Completion|
|----------------|--------|-----------|
| Model response for temperature setting 0.1:| Write a product launch email for new AI-powered headphones that are priced at $79.99 and available at Best Buy, Target and Amazon.com. The target audience is tech-savvy music lovers and the tone is friendly and exciting.<br>1. What should be the subject line of the email?<br>2. What should be the body of the email?|Subject Line: Introducing the AI-Powered Headphones You've Been Waiting For!<br><br>Dear Tech-Savvy Music Lovers,<br><br>We are thrilled to announce the launch of our new AI-powered headphones! These headphones are designed to provide you with the ultimate listening experience, with features like noise cancellation, adjustable sound settings, and more.<br><br>Our AI-powered headphones are now available at Best Buy, Target, and Amazon.com for just $79.99. With this unbeatable price, you can enjoy the highest quality sound without breaking the bank.<br><br>Our AI-powered headphones are perfect for anyone who loves music. Whether you're a professional musician or just a casual listener, you'll appreciate the superior sound quality and convenience of these headphones.<br><br>We know you'll love our AI-powered headphones, so don't wait - get yours today!<br><br>Sincerely,<br>[Your Company Name]|
|Model response for temperature setting 0.9:| Write a product launch email for new AI-powered headphones that are priced at $79.99 and available at Best Buy, Target and Amazon.com. The target audience is tech-savvy music lovers and the tone is friendly and exciting.<br><br>1. What should be the subject line of the email?<br>2. What should be the body of the email?| Subject Line: Upgrade your Music Listening Experience with AI-powered Headphones<br><br>Greetings Tech-savvy Music Lovers!<br><br>We have some exciting news to share with you – the newest AI-powered headphones are here! Our newest headphones feature advanced AI technology to give you the best sound that you’ve ever heard. They come with a crystal-clear sound, deep bass response and are designed to make your music listening experience enjoyable, no matter what type of music you listen to.<br>What’s more, they are now available at Best Buy, Target and Amazon.com for just $79.99, so you can enjoy the highest sound quality without breaking the bank.<br>But that’s not all! Our headphones are also packed with several other amazing features such as noise-cancellation technology and a hands-free microphone for calls, making them truly unbeatable.<br>So why wait? Upgrade your music listening experience and enjoy the best sound your ears have ever heard. Our AI-powered headphones are now available at Best Buy, Target and Amazon.com, so get yours today!<br>We hope you will enjoy these headphones as much as we do.<br>Happy Listening!<br>The AI-powered Headphones Team|

Top_P (top probability) is another parameter that, like temperature, also controls the randomness of the model response, but it does so in a different way. Experiment with different model responses by varying temperature or Top_P, but not both at once.
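In a completions request, `temperature` and `top_p` are sibling sampling parameters, so the advice above amounts to varying one while pinning the other. A sketch of the two request payloads used in this comparison (`max_tokens` and the exact payload shape are illustrative, not the full request schema):

```python
base_request = {
    "prompt": (
        "Write a product launch email for new AI-powered headphones "
        "that are priced at $79.99 and available at Best Buy, Target "
        "and Amazon.com."
    ),
    "max_tokens": 350,
    "top_p": 1.0,  # pinned: vary temperature OR top_p, never both at once
}

focused = {**base_request, "temperature": 0.1}   # focused, concrete copy
creative = {**base_request, "temperature": 0.9}  # more divergent copy
```

Holding every other parameter constant is what makes the two completions directly comparable, as in the table above.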