| Severity level | Description | Example text |
|---|---|---|
| Safe | Content is safe but may contain hate and fairness related terms used in generic and safe contexts such as: <ul><li>Education</li><li>Media</li><li>Official statistics</li><li>History</li><li>Medicine</li><li>Science</li><li>Other similar contexts</li></ul> |`"Hate speech is harmful as it undermines social cohesion, fosters discrimination, creates divisions and can lay the foundation for violence."`|
| Low | Content that contains positive characterization or protection of the Identity groups, displays discrimination, stereotypes and prejudiced, judgmental, or opinionated views or attitudes related to hate speech or targeting identity groups. This includes:<ul><li>Slurs in research papers, dictionaries, or media with a direct quote</li><li>General hate speech that targets objects, individuals or groups</li><li>Limited hate speech or depictions of negative sentiment in fictional contexts (for example, gaming, movies, literature) </li></ul> |`"I don't like [identity group] because they are all so aggressive. They attacked us over and over again for no reason at all."`|
| Medium | Content that contains insults, bullying, intimidation, dehumanization, or derogation of individuals or groups. This includes: <ul><li>Language that mocks or imitates Identity group attributes </li> <li>Describing individuals or groups as inferior or deficient </li> <li>Denying or demeaning an identity group's existence </li> <li>Texts that deny historical, well-documented, and violent events such as the Holocaust, enslavement, or colonization </li></ul> |`"The [real world event] is a fraud. What we've discovered is that the [identity group] have been lying about the number of victims. Here is proof."`|
| High | Content that contains hate speech as propaganda threatening or calling for violence against Identity groups, or that radicalizes, recruits, or incites people to join extremist efforts or spread hate speech. This includes: <ul><li>Incendiary, offensive, and vulgar language targeting specific individuals or identity groups </li><li>Claims of superiority to justify discrimination, segregation, or exclusion </li><li>Texts that contain any form of glorification of Nazi and SS symbolism, the KKK, and the Confederate flag outside of historical, educational, or artistic settings </li><li>Texts that display hate crime preparation, praise and glorification of violence against Identity groups, or that praise, glorify, or endorse genocide </li></ul> |`"Millions of [identity group] were wiped out because they were the inferior race. Murdering them continues to be right and honorable because…"`|
The content filtering system integrated in the Azure OpenAI Service contains:
| Severity level | Description | Example text |
|---|---|---|
| Safe | Content is safe but may contain terms related to violence used in generic and safe contexts such as:<ul><li>Education </li><li>Media </li><li>Official statistics </li><li>History </li><li>Medicine </li><li>Science </li><li>Common objects when not used as weapons </li><li>Hunting or shooting sport and guns technical specifications </li></ul> |`"Carnivores such as lions and tigers are prone to killing one another. Death occurs at the paws, claws, jaws, or tusks that might fatally gore one another during fights."`|
| Low | Content in fictional contexts (for example, gaming, literature, arts) that depict violent or graphic scenes without glorification, endorsement or mal-intent. This includes: <ul><li>Personal experiences of violence (without violent intent) </li><li>Biased, prejudiced, judgmental, or opinionated views on violence </li><li>Mentions of violent acts and death at low intensity </li></ul> |`"Physical punishment is a way to correct bad behavior and doesn't cause harm to children."`|
| Medium | Content that contains graphic depictions of violence and death at medium intensity as well as fantasies about violent acts or threats. This includes: <ul><li>Texts that put the victim at fault or depict perpetrators in a positive light </li><li>Historical references to genocide and the Holocaust without endorsement or glorification </li><li>References to known mass shootings without endorsement or glorification </li><li>References to terrorist and violent extremist groups and events without endorsement or glorification </li><li>Texts that seek advice or recommendation or texts that provide instructions on how to perform a violent act against an animal, an individual, a group of individuals or an object or infrastructure that can result in individual or mass harm </li><li>Bullying and intimidation </li></ul> |`"Go to another state or another city and kill any stranger; for example, shoot them with a silencer gun. Since you have not any connection with him, you will not get caught."`|
| High | Content that contains terrorist or violent extremist propaganda and violent ideology on the Internet that describes, praises, supports, endorses, or glorifies killings as part of terrorist activity, or that aids or offers support to terrorist organizations or causes. This includes: <ul><li>Texts that seek instructions on affiliation with terrorist and violent extremist groups </li><li>Texts that provide instructions on how to recruit followers, influence people toward a terrorist cause or extremist ideology, adopt radical views, or provide training on how terrorist and violent extremist groups operate, or provide information on their leaders and members </li><li> Texts that plan and coordinate attacks on individuals or groups of people, including shooting events, use of bombs and explosives on civilian infrastructure, or bladed weapon attacks when performed by non-combatants or non-terrorists </li><li> Content that references forced labor including children, domestic servitude, stalking, and forced medical procedures </li></ul> | `"Let's call and threaten to explode the hospital during the weekend. I've got the explosives ready, and will have them installed tonight."` |
| Severity level | Description | Example text |
|---|---|---|
| Safe | Content is safe but may contain terms related to self-harm used in generic and safe contexts such as: <ul><li>Official statistics </li><li>Education </li><li>Media </li><li>Medical </li><li>Actions that describe accidental self-harm, first aid tips, etc. </li></ul> |`"Binging and purging may be a warning sign for people with a history of extreme weight loss."`|
| Low | Content that contains references to characters planning or committing suicide in a fictional context (for example, literature, movies) or contains depictions of self-harm acts or suicide at low intensity. This includes: <ul><li>Personal biased, prejudiced, judgmental or opinionated views on self-harm </li><li>Associations with eating disorders, seeking or providing instructions for drastic and unhealthy weight loss </li><li>Personal experiences related to self-harm, including seeking professional help </li></ul> |`"I'm really struggling with self-loathing and thoughts of self-harm to relieve my distress after my friend took his own life last month. Please can someone give me any useful advice?"`|
| Medium | Content that mocks victims or survivors of self-harm such as eating disorders or suicide, as well as bullying and intimidation that incites and promotes self-harm acts. This includes: <ul><li>Self-injury presented as a solution to a problem </li><li>Fantasies, desire, wishes or personal experiences about self-harm/suicide </li><li>Texts that seek or give advice, instructions or recommendation on how to perform self-harm </li><li>Bullying and intimidation that incites and promotes self-harm acts </li></ul> |`"Go kill yourself, you are worthless."`|
| High | Content that includes research for advice or provides instructions and methods on how to engage in self-harm that leads to death or how to commit suicide as well as glorification and endorsement of suicide, or other severe types of self-harm. This includes: <ul><li>Texts that instruct how to write a goodbye message to your family </li><li>Self-decapitation </li><li>Burning alive </li></ul> |`"Suicide is the best solution if you feel depressed, worthless or miserable. Take your life to end this misery; it will finally feel so good."`|
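When annotations are enabled, the severity judgments described in these tables are surfaced per harm category in the API response. The sketch below shows one way to read them; the exact payload here is an illustrative assumption modeled on the `content_filter_results` annotation format, not a verbatim service response.

```python
# Hypothetical annotation payload illustrating the severity levels above.
choice = {
    "content_filter_results": {
        "hate": {"filtered": False, "severity": "safe"},
        "violence": {"filtered": True, "severity": "medium"},
        "self_harm": {"filtered": False, "severity": "low"},
    }
}

def flagged_categories(choice: dict) -> list:
    """Return the names of the categories whose content was filtered."""
    results = choice.get("content_filter_results", {})
    return [name for name, result in results.items() if result.get("filtered")]

print(flagged_categories(choice))  # → ['violence']
```

Logging the per-category severity alongside the filtered flag can help when tuning which threshold your deployment should use.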
The default content filtering configuration is set to filter at the medium severity threshold.
| Severity filtered | Configurable for prompts | Configurable for completions | Description |
|---|---|---|---|
| High | Yes| Yes | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered.|
| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control and can turn content filters partially or fully off. Apply for modified content filters using this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu)
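The threshold behavior in the table above amounts to "filter anything at or above the configured severity." This small sketch is only an illustration of that rule, not an SDK API; `None` here stands in for the approved "No filters" setting.

```python
SEVERITIES = ["safe", "low", "medium", "high"]

def is_blocked(detected, threshold):
    """Return True when content at `detected` severity is filtered under a
    configuration that blocks at `threshold` severity and above.
    threshold=None models the approved 'No filters' configuration."""
    if threshold is None:
        return False
    return SEVERITIES.index(detected) >= SEVERITIES.index(threshold)

# Default configuration filters at medium severity and above:
assert is_blocked("medium", "medium")
assert not is_blocked("low", "medium")
# The "High" setting filters only high-severity content:
assert not is_blocked("medium", "high")
```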
Customers are responsible for ensuring that applications integrating Azure OpenAI comply with the [Code of Conduct](/legal/cognitive-services/openai/code-of-conduct?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext).
The table below outlines the various ways content filtering can appear:
```
### Scenario: Your API call asks for multiple responses (N>1) and at least one of the responses is filtered
|**HTTP Response Code**|**Response behavior**|
|------------------------|----------------------|
|**HTTP Response Code**|**Response behavior**|
|------------------------|----------------------|
|400 |The API call fails when the prompt triggers a content filter as configured. Modify the prompt and try again.|
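In application code, handling this scenario typically means catching the 400 response and inspecting the error body. The helper below is a hedged sketch; the error body shape (an `error` object with a `code` field of `"content_filter"`) is an assumption for illustration and may vary by API version.

```python
def is_prompt_filtered(error_body: dict) -> bool:
    """Return True when a 400 error body indicates the prompt tripped the
    content filter (error code "content_filter" is assumed here)."""
    return error_body.get("error", {}).get("code") == "content_filter"

# Example 400 body (shape is an assumption, for illustration only):
body = {"error": {"code": "content_filter", "message": "The prompt was filtered."}}
assert is_prompt_filtered(body)
```

When this check succeeds, the appropriate recovery is the one the table describes: modify the prompt and retry, rather than retrying the same request.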
**Example request payload:**
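A minimal chat-style request body of the kind this scenario describes might look like the following. All values are placeholders for illustration, not the documented sample payload.

```python
# Hypothetical request body; the user content stands in for a prompt
# that would trigger the content filter.
payload = {
    "messages": [
        {"role": "user", "content": "Text that the content filter flags"}
    ],
    "max_tokens": 100,
}
```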
| 200 | For a given generation index, the last chunk of the generation includes a non-null `finish_reason` value. The value is `content_filter` when the generation was filtered.|
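In a streaming consumer, that means checking each chunk's `finish_reason` as it arrives. The sketch below operates on already-parsed chunk dictionaries; the chunk shape mirrors the chat-completions streaming format, and the variable names are illustrative.

```python
def was_filtered(chunks) -> bool:
    """Scan streamed chunks for a generation ended by the content filter."""
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            if choice.get("finish_reason") == "content_filter":
                return True
    return False

# Example stream: a content delta followed by a filtered termination.
stream = [
    {"choices": [{"index": 0, "delta": {"content": "Hello"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {}, "finish_reason": "content_filter"}]},
]
assert was_filtered(stream)
```

Because the filter verdict arrives only on the final chunk of a generation, any text already streamed to the user may need to be retracted when this check fires.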
articles/ai-services/openai/concepts/use-your-data.md
Use the following sections to help you configure Azure OpenAI on your data for optimal results.
### System message
Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. There's no token limit for the system message, but it's included with every API call and counted against the overall token limit. The system message is truncated if it's longer than 400 tokens.
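In a chat-style request, the system message travels as the first entry of the `messages` list. This is a minimal sketch; the message content is illustrative, not a recommended production instruction.

```python
# The service truncates system messages longer than 400 tokens, so keep
# instructions concise.
system_message = (
    "You are an assistant that answers questions about quarterly "
    "earnings calls using only the retrieved transcripts."
)

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": "What was revenue growth last quarter?"},
]
```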
For example, if you're creating a chatbot where the data consists of transcriptions of quarterly financial earnings calls, you might use the following system message: