
Commit b387856

Merge pull request #461 from PatrickFarley/openai-updates: fix samples
2 parents f766e0c + a96bf48

17 files changed (+240, -191 lines)

articles/ai-services/openai/concepts/advanced-prompt-engineering.md

Lines changed: 37 additions & 18 deletions
@@ -1,40 +1,59 @@
---
-title: Prompt engineering techniques with Azure OpenAI
+title: Design system messages with Azure OpenAI
titleSuffix: Azure OpenAI Service
-description: Learn about the options for how to use prompt engineering with GPT-3, GPT-35-Turbo, and GPT-4 models.
+description: Learn about system message design with GPT-3, GPT-35-Turbo, and GPT-4 models.
author: mrbullwinkle
ms.author: mbullwin
ms.service: azure-ai-openai
ms.topic: conceptual
ms.date: 09/05/2024
manager: nitinme
-keywords: ChatGPT, GPT-4, prompt engineering, meta prompts, chain of thought
-zone_pivot_groups: openai-prompt
+keywords: ChatGPT, GPT-4, meta prompts, chain of thought
---

-# Prompt engineering techniques
+# System message design

-This guide will walk you through some advanced techniques in prompt design and prompt engineering. If you're new to prompt engineering, we recommend starting with our [introduction to prompt engineering guide](prompt-engineering.md).
+This guide walks you through some techniques in system message design.

-While the principles of prompt engineering can be generalized across many different model types, certain models expect a specialized prompt structure. For Azure OpenAI GPT models, there are currently two distinct APIs where prompt engineering comes into play:

-- Chat Completion API.
-- Completion API.

-Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the GPT-35-Turbo and GPT-4 models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries.
+## What is a system message?

-The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules.
+A system message is a feature-specific set of instructions or contextual framework given to a generative AI model (for example, GPT-4o or GPT-3.5 Turbo) to direct and improve the quality and safety of the model's output. It's particularly helpful in situations that call for a certain degree of formality, technical language, or industry-specific terms.

-The techniques in this guide will teach you strategies for increasing the accuracy and grounding of responses you generate with a Large Language Model (LLM). It is, however, important to remember that even when using prompt engineering effectively you still need to validate the responses the models generate. Just because a carefully crafted prompt worked well for a particular scenario doesn't necessarily mean it will generalize more broadly to certain use cases. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=/azure/ai-services/openai/context/context#limitations), is just as important as understanding how to leverage their strengths.

-::: zone pivot="programming-language-chat-completions"
+There is no prescribed length. A system message can be one short sentence:

-[!INCLUDE [Prompt Chat Completion](../includes/prompt-chat-completion.md)]
+```
+You are a helpful AI assistant.
+```

-::: zone-end
+A system message can also be *many* lines long, containing detailed rules, context, formatting and output guidelines, and responsible AI (RAI) mitigations.

-::: zone pivot="programming-language-completions"
+## System message examples
+
+The system message is included at the beginning of the prompt and is used to prime the model with context, instructions, or other information relevant to your use case. You can use the system message to describe the assistant’s personality, define what the model should and shouldn’t answer, and define the format of model responses.
+
+The example below shows a sample system message and the resulting model response:
+
+| System message | User | Assistant |
+|----------------|------|-----------|
+| You're an AI assistant that helps people find information and responds in rhyme. If the user asks you a question you don't know the answer to, say so. | What can you tell about me, John Doe? | Dear John, I'm sorry to say,<br>But I don't have info on you today.<br>I'm just an AI with knowledge in my brain,<br>But without your input, I can't explain.<br>So please tell me more about what you seek,<br>And I'll do my best to give you an answer unique.|
+
+Some other examples of system messages are:
+- "Assistant is a large language model trained by OpenAI."
+- "Assistant is an intelligent chatbot designed to help users answer technical questions about Azure OpenAI Service. Only answer questions using the context below, and if you're not sure of an answer, you can say "I don't know"."
+- "Assistant is an intelligent chatbot designed to help users answer their tax-related questions."
+- "You're an assistant designed to extract entities from text. Users will paste in a string of text and you'll respond with entities you've extracted from the text as a JSON object. Here's an example of your output format:
+
+```json
+{
+   "name": "",
+   "company": "",
+   "phone_number": ""
+}
+```
+
+An important detail to understand is that even if you instruct a model in the system message to answer **I don't know** when it's unsure of an answer, this doesn't guarantee that the request will be honored. A well-designed system message can increase the likelihood of a certain outcome, but it's still possible that an incorrect response could be generated that contradicts the intent of the instruction in the system message.

-[!INCLUDE [Prompt Completion](../includes/prompt-completion.md)]

-::: zone-end

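The system message examples above are plain strings; in the Chat Completions API they are passed as the first element of the `messages` array. Below is a minimal sketch of that call, assuming the `openai` Python package (v1.x) against an Azure OpenAI resource; the endpoint, key, API version, and deployment name are placeholders rather than values from this commit.

```python
# Illustrative sketch only: endpoint, key, API version, and deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://YOUR-RESOURCE.openai.azure.com/
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed version; use one supported by your resource
)

response = client.chat.completions.create(
    model="gpt-4o",  # your deployment name, not necessarily the model name
    messages=[
        # The system message primes the model with persona, rules, and output format.
        {"role": "system", "content": "You're an AI assistant that helps people find information and responds in rhyme. If the user asks you a question you don't know the answer to, say so."},
        {"role": "user", "content": "What can you tell about me, John Doe?"},
    ],
)

print(response.choices[0].message.content)
```

Swapping out the system message content is all it takes to change the assistant's persona, rules, or output format in this pattern.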
articles/ai-services/openai/concepts/content-filter.md

Lines changed: 2 additions & 2 deletions
@@ -793,7 +793,7 @@ When you do so, the following options are available for detection on tagged docu
Here's an example chat completion messages array:

```json
-{"role": "system", "content": "Provide some context and/or instructions to the model, including document context. \"\"\" <documents>\n*insert your document content here*\n<\\documents> \"\"\""},
+{"role": "system", "content": "Provide some context and/or instructions to the model, including document context. \"\"\" <documents>\n*insert your document content here*\n</documents> \"\"\""},

{"role": "user", "content": "First question/message for the model to actually respond to."}
```
@@ -819,7 +819,7 @@ Hello Jos\u00E9,\nI hope this email finds you well today.
The escaped text in a chat completion context would read:

```json
-{"role": "system", "content": "Provide some context and/or instructions to the model, including document context. \"\"\" <documents>\n Hello Jos\\u00E9,\\nI hope this email finds you well today. \n<\\documents> \"\"\""},
+{"role": "system", "content": "Provide some context and/or instructions to the model, including document context. \"\"\" <documents>\n Hello Jos\\u00E9,\\nI hope this email finds you well today. \n</documents> \"\"\""},

{"role": "user", "content": "First question/message for the model to actually respond to."}
```

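The corrected samples above wrap untrusted document text in `<documents>...</documents>` tags inside the system message. As a rough sketch of how such a message might be assembled, the hypothetical helper below escapes the document content (here via `json.dumps`, which is only one way to approximate the escaping shown in the samples) and closes the tag properly:

```python
# Hypothetical helper, not from the article; it only illustrates the corrected
# <documents>...</documents> tagging used in the fixed samples above.
import json


def build_tagged_system_message(instructions: str, document: str) -> dict:
    # json.dumps produces \uXXXX escapes for non-ASCII characters and \n for newlines;
    # strip the surrounding quotes it adds so only the escaped body remains.
    escaped = json.dumps(document, ensure_ascii=True)[1:-1]
    content = f'{instructions} """ <documents>\n{escaped}\n</documents> """'
    return {"role": "system", "content": content}


messages = [
    build_tagged_system_message(
        "Provide some context and/or instructions to the model, including document context.",
        "Hello José,\nI hope this email finds you well today.",
    ),
    {"role": "user", "content": "First question/message for the model to actually respond to."},
]

print(json.dumps(messages, indent=2))
```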
articles/ai-services/openai/concepts/customizing-llms.md

Lines changed: 2 additions & 2 deletions
@@ -35,8 +35,8 @@ A Marketing Manager at an environmentally conscious company can use prompt engin

### Getting started

-- [Introduction to prompt engineering](./prompt-engineering.md)
-- [Prompt engineering techniques](./advanced-prompt-engineering.md)
+- [Prompt engineering techniques](./prompt-engineering.md)
+- [System messages](./advanced-prompt-engineering.md)
- [15 tips to become a better prompt engineer for generative AI](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/15-tips-to-become-a-better-prompt-engineer-for-generative-ai/ba-p/3882935)
- [The basics of prompt engineering (video)](https://www.youtube.com/watch?v=e7w6QV1NX1c)

articles/ai-services/openai/concepts/prompt-engineering.md

Lines changed: 30 additions & 5 deletions
@@ -1,5 +1,5 @@
---
-title: Azure OpenAI Service | Introduction to Prompt engineering
+title: Azure OpenAI Service | Prompt engineering techniques
titleSuffix: Azure OpenAI
description: Learn how to use prompt engineering to optimize your work with Azure OpenAI Service.
ms.service: azure-ai-openai
@@ -12,7 +12,7 @@ ms.author: mbullwin
recommendations: false
---

-# Introduction to prompt engineering
+# Prompt engineering techniques

GPT-3, GPT-3.5, GPT-4, and GPT-4o models from OpenAI are prompt-based. With prompt-based models, the user interacts with the model by entering a text prompt, to which the model responds with a text completion. This completion is the model’s continuation of the input text.

@@ -102,6 +102,29 @@ Supporting content is information that the model can utilize to influence the ou
| Without | Workshop list:<br>Title Description<br> Planning Workshop Kickoff Please join us to kick off our workshop! Hear from our leaders about cross group priorities and learnings for the year ahead. The invitation to this event has already been sent to everyone in []<br>Summarize the workshops above, grouped by topic: | Workshop list: <br> Planning Workshop Kickoff <br> -Planning Workshop Wrap-up<br>- Core MeetUp<br>- Customer Driven Perspectives<br>-Impact and Key Focus Areas<br>-Best practices<br>-Accelerating app adoption [] |
| With | Workshop list: <br>Title Description<br>Planning Workshop Kickoff Please join us to kick off our workshop! Hear from our leaders about cross group priorities and learnings for the year ahead. The invitation to this event has already been sent to everyone in [] <br>My Important Topics: prompt engineering, search, GPT models<br>Summarize the workshops above, grouped by my important topics:| 1. Prompt Engineering: <br>- New prompt engineering tips<br>- The art of prompt engineering<br>2. Search:<br>-Intro to vector search with embedding<br>3. GPT Models:<br>- Intro to GPT-4<br>- GPT-35-Turbo in-depth.|

+## Scenario-specific guidance
+
+While the principles of prompt engineering can be generalized across many different model types, certain models expect a specialized prompt structure. For Azure OpenAI GPT models, there are currently two distinct APIs where prompt engineering comes into play:
+
+- Chat Completion API.
+- Completion API.
+
+Each API requires input data to be formatted differently, which in turn impacts overall prompt design. The **Chat Completion API** supports the GPT-35-Turbo and GPT-4 models. These models are designed to take input formatted in a [specific chat-like transcript](../how-to/chatgpt.md) stored inside an array of dictionaries.
+
+The **Completion API** supports the older GPT-3 models and has much more flexible input requirements in that it takes a string of text with no specific format rules.
+
+The techniques in this section will teach you strategies for increasing the accuracy and grounding of responses you generate with a Large Language Model (LLM). It is, however, important to remember that even when using prompt engineering effectively you still need to validate the responses the models generate. Just because a carefully crafted prompt worked well for a particular scenario doesn't necessarily mean it will generalize more broadly to certain use cases. Understanding the [limitations of LLMs](/legal/cognitive-services/openai/transparency-note?context=/azure/ai-services/openai/context/context#limitations) is just as important as understanding how to leverage their strengths.
+
+#### [Chat completion APIs](#tab/chat)
+
+[!INCLUDE [Prompt Chat Completion](../includes/prompt-chat-completion.md)]
+
+#### [Completion APIs](#tab/completion)
+
+[!INCLUDE [Prompt Completion](../includes/prompt-completion.md)]
+
+---
+
## Best practices

- **Be Specific**. Leave as little to interpretation as possible. Restrict the operational space.
@@ -118,8 +141,10 @@ While the input size increases with each new generation of GPT models, there wil

Given this limited space, it is important to use it as efficiently as possible.
- Tables – As shown in the examples in the previous section, GPT models can understand tabular formatted data quite easily. This can be a space efficient way to include data, rather than preceding every field with name (such as with JSON).
-- White Space – Consecutive whitespaces are treated as separate tokens which can be an easy way to waste space. Spaces preceding a word, on the other hand, are typically treated as part of the same token as the word. Carefully watch your usage of whitespace and don’t use punctuation when a space alone will do.
+- White Space – Consecutive whitespaces are treated as separate tokens which can be an easy way to waste space. Spaces preceding a word, on the other hand, are typically treated as part of the same token as the word. Carefully watch your usage of whitespace and don’t use punctuation when a space alone will do.

-## Next steps
+## Related content

-[Learn more about Azure OpenAI.](../overview.md)
+* [Learn more about Azure OpenAI](../overview.md).
+* Get started with the ChatGPT model in [the ChatGPT quickstart](../chatgpt-quickstart.md).
+* For more examples, check out the [Azure OpenAI Samples GitHub repository](https://github.com/Azure/openai-samples).

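The new scenario-specific guidance contrasts the two input shapes. As an illustrative sketch (not part of this commit), the snippet below sends the same request both ways with the assumed `openai` Python package (v1.x); the deployment names `gpt-35-turbo` and `gpt-35-turbo-instruct`, plus the endpoint and key, are placeholders.

```python
# Sketch of the two input shapes described in the scenario-specific guidance.
# Deployment names, endpoint, and key are placeholders; assumes the openai Python package v1.x.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
    api_key="YOUR-KEY",
    api_version="2024-02-01",
)

# Chat Completion API: a chat-like transcript stored in an array of dictionaries.
chat_response = client.chat.completions.create(
    model="gpt-35-turbo",  # deployment name
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Summarize the benefits of prompt engineering in two sentences."},
    ],
)

# Completion API: a single free-form prompt string with no required structure.
completion_response = client.completions.create(
    model="gpt-35-turbo-instruct",  # deployment name for a completions-capable model
    prompt="Summarize the benefits of prompt engineering in two sentences.",
    max_tokens=100,
)

print(chat_response.choices[0].message.content)
print(completion_response.choices[0].text)
```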