Skip to content

Commit 6853e1b

Browse files
committed
update
1 parent 8f024f3 commit 6853e1b

File tree

2 files changed

+5
-5
lines changed

2 files changed

+5
-5
lines changed

articles/ai-services/openai/concepts/system-message.md

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -69,7 +69,7 @@ Here are some examples of lines you can include:
6969

7070
## Provide examples to demonstrate the intended behavior of the model
7171

72-
When using the system message to demonstrate the intended behavior of the model in your scenario, it is helpful to provide specific examples. When providing examples, consider the following:
72+
When using the system message to demonstrate the intended behavior of the model in your scenario, it's helpful to provide specific examples. When providing examples, consider the following:
7373

7474
- **Describe difficult use cases** where the prompt is ambiguous or complicated, to give the model more visibility into how to approach such cases.
7575

@@ -166,7 +166,7 @@ Here are some examples of lines you can include to potentially mitigate differen
166166

167167
Indirect attacks, also referred to as Indirect Prompt Attacks, or Cross Domain Prompt Injection Attacks, are a type of prompt injection technique where malicious instructions are hidden in the ancillary documents that are fed into Generative AI Models. We’ve found system messages to be an effective mitigation for these attacks, by way of spotlighting.
168168

169-
**Spotlighting** is a family of techniques that helps large language models (LLMs) distinguish between valid system instructions and potentially untrustworthy external inputs. It is based on the idea of transforming the input text in a way that makes it more salient to the model, while preserving its semantic content and task performance.
169+
**Spotlighting** is a family of techniques that helps large language models (LLMs) distinguish between valid system instructions and potentially untrustworthy external inputs. It's based on the idea of transforming the input text in a way that makes it more salient to the model, while preserving its semantic content and task performance.
170170

171171
- **Delimiters** are a natural starting point to help mitigate indirect attacks. Including delimiters in your system message helps to explicitly demarcate the location of the input text in the system message. You can choose one or more special tokens to prepend and append the input text, and the model will be made aware of this boundary. By using delimiters, the model will only handle documents if they contain the appropriate delimiters, which reduces the success rate of indirect attacks. However, since delimiters can be subverted by clever adversaries, we recommend you continue on to the other spotlighting approaches.
172172

@@ -182,7 +182,7 @@ Below is an example of a potential system message, for a retail company deployin
182182

183183
:::image type="content" source="../media/concepts/system-message/template.png" alt-text="Screenshot of metaprompts influencing a chatbot conversation." lightbox="../media/concepts/system-message/template.png":::
184184

185-
Finally, remember that system messages, or metaprompts, are not "one size fits all." Use of these type of examples has varying degrees of success in different applications. It is important to try different wording, ordering, and structure of system message text to reduce identified harms, and to test the variations to see what works best for a given scenario.
185+
Finally, remember that system messages, or metaprompts, are not "one size fits all." Use of these type of examples has varying degrees of success in different applications. It's important to try different wording, ordering, and structure of system message text to reduce identified harms, and to test the variations to see what works best for a given scenario.
186186

187187
## Next steps
188188

articles/ai-services/openai/how-to/reproducible-output.md

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ services: cognitive-services
66
manager: nitinme
77
ms.service: azure-ai-openai
88
ms.topic: how-to
9-
ms.date: 07/19/2024
9+
ms.date: 09/20/2024
1010
author: mrbullwinkle
1111
ms.author: mbullwin
1212
recommendations: false
@@ -15,7 +15,7 @@ recommendations: false
1515

1616
# Learn how to use reproducible output (preview)
1717

18-
By default if you ask an Azure OpenAI Chat Completion model the same question multiple times you're likely to get a different response. The responses are therefore considered to be non-deterministic. Reproducible output is a new preview feature that allows you to selectively change the default behavior to help product more deterministic outputs.
18+
By default if you ask an Azure OpenAI Chat Completion model the same question multiple times you're likely to get a different response. The responses are therefore considered to be nondeterministic. Reproducible output is a new preview feature that allows you to selectively change the default behavior to help product more deterministic outputs.
1919

2020
## Reproducible output support
2121

0 commit comments

Comments
 (0)