articles/ai-studio/concepts/ai-resources.md (1 addition, 1 deletion)
@@ -84,7 +84,7 @@ With the same API key, you can access all of the following Azure AI services:
|[Speech](../../ai-services/speech-service/index.yml)| Speech to text, text to speech, translation and speaker recognition |
|[Vision](../../ai-services/computer-vision/index.yml)| Analyze content in images and videos |
-Large language models that can be used to generate text, speech, images, and more, are hosted by the Azure AI hub resource. Fine-tuned models and open models deployed from the [model catalog](../how-to/model-catalog.md) are always created in the project context for isolation.
+Large language models that can be used to generate text, speech, images, and more, are hosted by the Azure AI hub resource. Fine-tuned models and open models deployed from the [model catalog](../how-to/model-catalog-overview.md) are always created in the project context for isolation.
articles/ai-studio/concepts/content-filtering.md (3 additions, 3 deletions)
@@ -26,7 +26,7 @@ This system is powered by [Azure AI Content Safety](../../ai-services/content-sa
The content filtering models have been trained and tested on the following languages: English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese. The service can work in many other languages, but quality can vary. In all cases, you should do your own testing to ensure that it works for your application.
-You can create a content filter or use the default content filter for Azure OpenAI model deployment, and can also use a default content filter for other text models curated by Azure AI in the [model catalog](../how-to/model-catalog.md). The custom content filters for those models aren't yet available. Models available through Models as a Service have content filtering enabled by default and can't be configured.
+You can create a content filter or use the default content filter for Azure OpenAI model deployment, and can also use a default content filter for other text models curated by Azure AI in the [model catalog](../how-to/model-catalog-overview.md). The custom content filters for those models aren't yet available. Models available through Models as a Service have content filtering enabled by default and can't be configured.
30
30
31
31
## How to create a content filter?
For any model deployment in [Azure AI Studio](https://ai.azure.com), you can use the default content filter directly. If you want a more customized setup, for example a stricter or looser filter, or more advanced capabilities such as jailbreak risk detection and protected material detection, you can create your own content filter. To create a content filter, go to **Build**, choose one of your projects, and then select **Content filters** in the left navigation bar.
@@ -44,9 +44,9 @@ The content filtering system integrated in Azure AI Studio contains neural multi
|Category|Description|
|--------|-----------|
| Hate |The hate category describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. |
-| Sexual | The sexual category describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one’s will, prostitution, pornography, and abuse. |
+| Sexual | The sexual category describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one's will, prostitution, pornography, and abuse. |
| Violence | The violence category describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, etc. |
-| Self-Harm | The self-harm category describes language related to physical actions intended to purposely hurt, injure, or damage one’s body, or kill oneself.|
+| Self-Harm | The self-harm category describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or kill oneself.|
articles/ai-studio/concepts/deployments-overview.md (1 addition, 1 deletion)
@@ -25,7 +25,7 @@ You often hear this interaction with a model referred to as "inferencing". Infer
First you might ask:
- "What models can I deploy?" Azure AI Studio supports deploying some of the most popular large language and vision foundation models curated by Microsoft, Hugging Face, and Meta.
-- "How do I choose the right model?" Azure AI Studio provides a [model catalog](../how-to/model-catalog.md) that allows you to search and filter models based on your use case. You can also test a model on a sample playground before deploying it to your project.
+- "How do I choose the right model?" Azure AI Studio provides a [model catalog](../how-to/model-catalog-overview.md) that allows you to search and filter models based on your use case. You can also test a model on a sample playground before deploying it to your project.
- "From where in Azure AI Studio can I deploy a model?" You can deploy a model from the model catalog or from your project's deployment page.
Azure AI Studio simplifies deployments. A simple selection or a line of code deploys a model and generates an API endpoint for your applications to consume.
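Consuming such a generated endpoint typically amounts to an authenticated JSON POST. The following is a minimal sketch under assumptions: the URL, key placeholder, and chat-style payload shape are illustrative, not the exact contract of any specific deployment.

```python
# Hypothetical sketch of calling a deployed model endpoint over REST.
# ENDPOINT_URL and API_KEY are placeholders; the payload shape assumes a
# chat-completions-style API and may differ for your deployment.
import json

ENDPOINT_URL = "https://example-deployment.example.inference.ai.azure.com/score"  # placeholder
API_KEY = "<your-api-key>"  # placeholder

def build_request(prompt: str, max_tokens: int = 256) -> tuple[dict, dict]:
    """Build the headers and JSON body for a single inference call."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    body = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return headers, body

headers, body = build_request("Summarize the benefits of model deployment.")
print(json.dumps(body, indent=2))
# In a real application you would POST `body` with `headers` to ENDPOINT_URL.
```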
articles/ai-studio/concepts/evaluation-improvement-strategies.md (16 additions, 16 deletions)
@@ -23,17 +23,17 @@ Mitigating content risks and poor quality generations presented by large languag
## Model layer
-At the model level, it's important to understand the models you'll be use and what fine-tuning steps might have been taken by the model developers to align the model towards its intended uses and to reduce the risk of potentially risky uses and outcomes. For example, we have collaborated with OpenAI on using techniques such as Reinforcement learning from human feedback (RLHF) and fine-tuning in the base models to build safety into the model itself, and you see safety built into the model to mitigate unwanted behaviors.
+At the model level, it's important to understand the models you'll use and what fine-tuning steps might have been taken by the model developers to align the model towards its intended uses and to reduce the risk of potentially risky uses and outcomes. For example, we have collaborated with OpenAI on using techniques such as Reinforcement learning from human feedback (RLHF) and fine-tuning in the base models to build safety into the model itself, and you see safety built into the model to mitigate unwanted behaviors.
-Besides these enhancements, Azure AI Studio also offers model catalog that enables you to better understand each model’s capabilities before you even start building your AI applications. You can explore models from Azure OpenAI Service, Meta, etc., organized by collection and task. In the [model catalog](../how-to/model-catalog.md), you can explore model cards to understand model capabilities and limitations, and any safety fine-tuning performed. You can further run sample inferences to see how a model’s responds to typical prompts for a specific use case and experiment with sample inferences.
+Besides these enhancements, Azure AI Studio also offers a model catalog that enables you to better understand the capabilities of each model before you even start building your AI applications. You can explore models from Azure OpenAI Service, Meta, etc., organized by collection and task. In the [model catalog](../how-to/model-catalog-overview.md), you can explore model cards to understand model capabilities and limitations and any safety fine-tuning performed. You can further run sample inferences to see how a model responds to typical prompts for a specific use case and experiment with sample inferences.
-The model catalog also provides model benchmarks to help users compare each model’s accuracy using public datasets.
+The model catalog also provides model benchmarks to help users compare each model's accuracy using public datasets.
The catalog has over 1,600 models today, including leading models from OpenAI, Mistral, Meta, Hugging Face, and Microsoft.
## Safety systems layer
-Choosing a great base model is just the first step. For most AI applications, it’s not enough to rely on the safety mitigations built into the model itself. Even with fine-tuning, LLMs can make mistakes and are susceptible to attacks such as jailbreaks. In many applications at Microsoft, we use another AI-based safety system, [Azure AI Content Safety](https://azure.microsoft.com/products/ai-services/ai-content-safety/), to provide an independent layer of protection, helping you to block the output of risky content. Azure AI Content Safety is a content moderation offering that goes around the model and monitors the inputs and outputs to help identify and prevent attacks from being successful and catches places where the models make a mistake.
+Choosing a great base model is just the first step. For most AI applications, it's not enough to rely on the safety mitigations built into the model itself. Even with fine-tuning, LLMs can make mistakes and are susceptible to attacks such as jailbreaks. In many applications at Microsoft, we use another AI-based safety system, [Azure AI Content Safety](https://azure.microsoft.com/products/ai-services/ai-content-safety/), to provide an independent layer of protection, helping you to block the output of risky content. Azure AI Content Safety is a content moderation offering that goes around the model and monitors the inputs and outputs to help identify and prevent attacks from being successful and catches places where the models make a mistake.
When you deploy your model through the model catalog or deploy your LLM applications to an endpoint, you can use [Azure AI Content Safety](../concepts/content-filtering.md). This safety system works by running both the prompt and completion for your model through an ensemble of classification models aimed at detecting and preventing the output of harmful content across a range of [categories](/azure/ai-services/content-safety/concepts/harm-categories):
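The filtering decision described above can be pictured as thresholding per-category classifier scores. This toy sketch is only an illustration of that decision logic: the category names echo the harm categories, but the scoring interface and threshold semantics are assumptions, not the actual service behavior.

```python
# Toy illustration of a severity-threshold content filter: classifier
# severity scores per harm category are compared against a configured
# threshold. The scoring interface here is invented for illustration
# and does not reflect the real Azure AI Content Safety API.

CATEGORIES = ("hate", "sexual", "violence", "self_harm")

def filter_decision(scores: dict, threshold: int = 4) -> dict:
    """Return which categories trip the filter at the given severity threshold."""
    flagged = [c for c in CATEGORIES if scores.get(c, 0) >= threshold]
    return {"blocked": bool(flagged), "flagged_categories": flagged}

# A completion whose 'violence' severity meets the threshold is blocked.
print(filter_decision({"hate": 0, "sexual": 0, "violence": 5, "self_harm": 0}))
```

Both the prompt and the completion would be scored this way, so an attack that slips past the model's own alignment can still be caught at this layer.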
@@ -46,31 +46,31 @@ The default configuration is set to filter risky content at the medium severity
## Metaprompt and grounding layer
-System message (otherwise known as metaprompt) design and proper data grounding are at the heart of every generative AI application. They provide an application’s unique differentiation and are also a key component in reducing errors and mitigating risks. At Microsoft, we find [retrieval augmented generation](./retrieval-augmented-generation.md) (RAG) to be an effective and flexible architecture. With RAG, you enable your application to retrieve relevant knowledge from selected data and incorporate it into your system message to the model. In this pattern, rather than using the model to store information, which can change over time and based on context, the model functions as a reasoning engine over the data provided to it during the query. This improves the freshness, accuracy, and relevancy of inputs and outputs. In other words, RAG can ground your model in relevant data for more relevant results.
+System message (otherwise known as metaprompt) design and proper data grounding are at the heart of every generative AI application. They provide an application's unique differentiation and are also a key component in reducing errors and mitigating risks. At Microsoft, we find [retrieval augmented generation](./retrieval-augmented-generation.md) (RAG) to be an effective and flexible architecture. With RAG, you enable your application to retrieve relevant knowledge from selected data and incorporate it into your system message to the model. In this pattern, rather than using the model to store information, which can change over time and based on context, the model functions as a reasoning engine over the data provided to it during the query. This improves the freshness, accuracy, and relevancy of inputs and outputs. In other words, RAG can ground your model in relevant data for more relevant results.
-Now the other part of the story is how you teach the base model to use that data or to answer the questions effectively in your application. When you create a system message, you’re giving instructions to the model in natural language to consistently guide its behavior on the backend. Tapping into the trained data of the models is valuable but enhancing it with your information is critical.
+Now the other part of the story is how you teach the base model to use that data or to answer the questions effectively in your application. When you create a system message, you're giving instructions to the model in natural language to consistently guide its behavior on the backend. Tapping into the trained data of the models is valuable but enhancing it with your information is critical.
-Here’s what a system message should look like. You must:
+Here's what a system message should look like. You must:
-- Define the model’s profile, capabilities, and limitations for your scenario.
-- Define the model’s output format.
+- Define the model's profile, capabilities, and limitations for your scenario.
+- Define the model's output format.
- Provide examples to demonstrate the intended behavior of the model.
- Provide additional behavioral guardrails.
Recommended System Message Framework:
-- Define the model’s profile, capabilities, and limitations for your scenario.
--**Define the specific task(s)** you would like the model to complete. Describe who the end users will be, what inputs will be provided to the model, and what you expect the model to output.
--**Define how the model should complete the task**, including any additional tools (like APIs, code, plug-ins) the model can use.
+- Define the model's profile, capabilities, and limitations for your scenario.
+-**Define the specific task(s)** you would like the model to complete. Describe who the end users are, what inputs are provided to the model, and what you expect the model to output.
+-**Define how the model should complete the task**, including any extra tools (like APIs, code, plug-ins) the model can use.
-**Define the scope and limitations** of the model's performance by providing clear instructions.
-**Define the posture and tone** the model should exhibit in its responses.
-- Define the model’s output format.
+- Define the model's output format.
-**Define the language and syntax** of the output format. For example, if you want the output to be machine parse-able, you may want to structure the output to be in JSON, XSON, or XML.
-**Define any styling or formatting** preferences for better user readability like bulleting or bolding certain parts of the response.
- Provide examples to demonstrate the intended behavior of the model
--**Describe difficult use cases** where the prompt is ambiguous or complicated, to give the model additional visibility into how to approach such cases.
+-**Describe difficult use cases** where the prompt is ambiguous or complicated, to give the model more visibility into how to approach such cases.
-**Show chain-of-thought** reasoning to better inform the model on the steps it should take to achieve the desired outcomes.
-- Provide additional behavioral guardrails
+- Provide more behavioral guardrails
-**Define specific behaviors and safety mitigations** to mitigate risks that have been identified and prioritized for the scenario.
Here we outline a set of best-practice instructions you can use to augment your task-based system message instructions to minimize different content risks:
@@ -91,7 +91,7 @@ Here we outline a set of best practices instructions you can use to augment your
### Sample system message instructions for ungrounded answers
```
-- Your answer **must not** include any speculation or inference about the background of the document or the user’s gender, ancestry, roles, positions, etc.
+- Your answer **must not** include any speculation or inference about the background of the document or the user's gender, ancestry, roles, positions, etc.
- You **must not** assume or change dates and times.
- You **must always** perform searches on [insert relevant documents that your feature can search on] when the user is seeking information (explicitly or implicitly), regardless of internal knowledge or information.