articles/ai-services/document-intelligence/concept/choose-model-feature.md (1 addition, 1 deletion)
@@ -15,7 +15,7 @@ ms.author: lajanuar
Azure AI Document Intelligence supports a wide variety of models that enable you to add intelligent document processing to your applications and optimize your workflows. Selecting the right model is essential to ensure the success of your enterprise. In this article, we explore the available Document Intelligence models and provide guidance for how to choose the best solution for your projects.
The following decision charts highlight the features of each supported model to help you choose the model that best meets the needs and requirements of your application.
articles/ai-services/document-intelligence/how-to-guides/build-a-custom-model.md (1 addition, 1 deletion)
@@ -43,7 +43,7 @@ Once you gather a set of forms or documents for training, you need to upload it
* Once you gather and upload your training dataset, you're ready to train your custom model. In the following video, we create a project and explore some of the fundamentals for successfully labeling and training a model.
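Training can also be started programmatically once the labeled dataset is in a blob container; the following is a minimal sketch assuming the `azure-ai-formrecognizer` Python SDK, where the endpoint, key, container URL, and model ID are placeholders and method names can differ between SDK versions.

```python
# Hedged sketch: build a custom template model from labeled documents in a
# blob container. Assumes azure-ai-formrecognizer (3.3+); all values below
# are placeholders, not values from this article.
from azure.ai.formrecognizer import DocumentModelAdministrationClient, ModelBuildMode
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"
container_url = "https://<account>.blob.core.windows.net/<container>?<sas-token>"

client = DocumentModelAdministrationClient(endpoint, AzureKeyCredential(key))

# Start building a custom template model from the labeled training documents.
poller = client.begin_build_document_model(
    ModelBuildMode.TEMPLATE,
    blob_container_url=container_url,
    model_id="my-custom-model",
)
model = poller.result()
print(model.model_id, model.created_on)
```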
articles/ai-services/document-intelligence/train/custom-label-tips.md (1 addition, 1 deletion)
@@ -36,7 +36,7 @@ This article highlights the best methods for labeling custom model datasets in t
* We examine best practices for labeling your selected documents. With semantically relevant and consistent labeling, you should see an improvement in model performance.
@@ -23,27 +23,30 @@ An AI system includes not only the technology, but also the people who will use
Microsoft's Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft AI principles](https://www.microsoft.com/en-us/ai/responsible-ai).
-## The basics of Azure AI Foundry safety evaluations
+## The basics of Azure AI Foundry risk and safety evaluations (preview)
### Introduction
-The Azure AI Foundry portal safety evaluations let users evaluate the output of their generative AI application for textual content risks: hateful and unfair content, sexual content, violent content, self-harm-related content, jailbreak vulnerability. Safety evaluations can also help generate adversarial datasets to help you accelerate and augment the red-teaming operation. Azure AI Foundry safety evaluations reflect Microsoft's commitments to ensure AI systems are built safely and responsibly, operationalizing our Responsible AI principles.
+The Azure AI Foundry risk and safety evaluations let users evaluate the output of their generative AI application for textual content risks: hateful and unfair content, sexual content, violent content, self-harm-related content, direct and indirect jailbreak vulnerability, and protected material in content. Safety evaluations can also help generate adversarial datasets to help you accelerate and augment the red-teaming operation. Azure AI Foundry safety evaluations reflect Microsoft's commitments to ensure AI systems are built safely and responsibly, operationalizing our Responsible AI principles.
### Key terms
--**Hateful and unfair content** refers to any language pertaining to hate toward or unfair representations of individuals and social groups along factors including but not limited to race, ethnicity, nationality, gender, sexual orientation, religion, immigration status, ability, personal appearance, and body size. Unfairness occurs when AI systems treat or represent social groups inequitably, creating or contributing to societal inequities.
--**Sexual content** includes language pertaining to anatomical organs and genitals, romantic relationships, acts portrayed in erotic terms, pregnancy, physical sexual acts (including assault or sexual violence), prostitution, pornography, and sexual abuse.
--**Violent content** includes language pertaining to physical actions intended to hurt, injure, damage, or kill someone or something. It also includes descriptions of weapons and guns (and related entities such as manufacturers and associations).
--**Self-harm-related content** includes language pertaining to actions intended to hurt, injure, or damage one's body or kill oneself.
--**Jailbreak**, direct prompt attacks, or user prompt injection attacks, refer to users manipulating prompts to inject harmful inputs into LLMs to distort actions and outputs. An example of a jailbreak command is a 'DAN' (Do Anything Now) attack, which can trick the LLM into inappropriate content generation or ignoring system-imposed restrictions.
+-**Hateful and unfair content (for text and images)** refers to any language or imagery pertaining to hate toward or unfair representations of individuals and social groups along factors including but not limited to race, ethnicity, nationality, gender, sexual orientation, religion, immigration status, ability, personal appearance, and body size. Unfairness occurs when AI systems treat or represent social groups inequitably, creating or contributing to societal inequities.
+-**Sexual content (for text and images)** includes language or imagery pertaining to anatomical organs and genitals, romantic relationships, acts portrayed in erotic terms, pregnancy, physical sexual acts (including assault or sexual violence), prostitution, pornography, and sexual abuse.
+-**Violent content (for text and images)** includes language or imagery pertaining to physical actions intended to hurt, injure, damage, or kill someone or something. It also includes descriptions of weapons and guns (and related entities such as manufacturers and associations).
+-**Self-harm-related content (for text and images)** includes language or imagery pertaining to actions intended to hurt, injure, or damage one's body or kill oneself.
+-**Protected material content (for text)** refers to known textual content, for example, song lyrics, articles, recipes, and selected web content, that might be output by large language models. By detecting and preventing the display of protected material, organizations can maintain compliance with intellectual property rights and preserve content originality.
+-**Protected material content (for images)** refers to certain visual content that is protected by copyright, such as logos and brands, artworks, or fictional characters. The system uses an image-to-text foundation model to identify whether such content is present.
+-**Direct jailbreak**, direct prompt attacks, or user prompt injection attacks, refer to users manipulating prompts to inject harmful inputs into LLMs to distort actions and outputs. An example of a jailbreak command is a 'DAN' (Do Anything Now) attack, which can trick the LLM into inappropriate content generation or ignoring system-imposed restrictions.
+-**Indirect jailbreak**, indirect prompt attacks, or cross-domain prompt injection attacks, refer to cases where malicious instructions are hidden within data that an AI system processes or generates grounded content from. This data can include emails, documents, websites, or other sources not directly authored by the developer or user and can lead to inappropriate content generation or ignoring system-imposed restrictions.
-**Defect rate (content risk)** is defined as the percentage of instances in your test dataset that surpass a threshold on the severity scale over the whole dataset size.
-**Red-teaming** has historically described systematic adversarial attacks for testing security vulnerabilities. With the rise of Large Language Models (LLM), the term has extended beyond traditional cybersecurity and evolved in common usage to describe many kinds of probing, testing, and attacking of AI systems. With LLMs, both benign and adversarial usage can produce potentially harmful outputs, which can take many forms, including harmful content such as hateful speech, incitement or glorification of violence, reference to self-harm-related content or sexual content.
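As a rough illustration of the defect rate defined in the list above, with an invented list of per-instance severity scores:

```python
# Illustrative only: defect rate as the share of test instances whose severity
# score surpasses a chosen threshold. Scores and threshold below are made up.
severities = [0, 2, 5, 7, 1, 3, 6, 0]   # per-instance severity on the 0-7 scale
threshold = 3                           # instances above this count as defects

defect_rate = sum(1 for s in severities if s > threshold) / len(severities)
print(f"Defect rate: {defect_rate:.0%}")  # 3 of 8 instances -> 38%
```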
## Capabilities
### System behavior
-Azure AI Foundry provisions an Azure OpenAI GPT-4 model and orchestrates adversarial attacks against your application to generate a high quality test dataset. It then provisions another GPT-4 model to annotate your test dataset for content and security. Users provide their generative AI application endpoint that they wish to test, and the safety evaluations will output a static test dataset against that endpoint along with its content risk label (Very low, Low, Medium, High) and reasoning for the AI-generated label.
+Azure AI Foundry provisions a fine-tuned Azure OpenAI GPT-4o model and orchestrates adversarial attacks against your application to generate a high quality test dataset. It then provisions another GPT-4o model to annotate your test dataset for content and security. Users provide their generative AI application endpoint that they wish to test, and the safety evaluations will output a static test dataset against that endpoint along with its content risk label (Very low, Low, Medium, High) or content risk detection label (True or False) and reasoning for the AI-generated label.
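For context, the same risk annotators can also be invoked programmatically; a minimal sketch assuming the `azure-ai-evaluation` Python package, where the project identifiers are placeholders and a single query/response pair is scored rather than the orchestrated simulation described above:

```python
# Hedged sketch: score one query/response pair for violent-content risk with
# the azure-ai-evaluation package. Project values are placeholders; the portal
# flow described above (simulation + annotation) is orchestrated for you.
from azure.ai.evaluation import ViolenceEvaluator
from azure.identity import DefaultAzureCredential

azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}

violence_eval = ViolenceEvaluator(
    credential=DefaultAzureCredential(),
    azure_ai_project=azure_ai_project,
)

result = violence_eval(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
)
print(result)  # severity label, numeric score, and the annotator's reasoning
```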
### Use cases
@@ -88,7 +91,7 @@ We encourage customers to leverage Azure AI Foundry safety evaluations in their
### Evaluation methods
-For all supported content risk types, we have internally checked the quality by comparing the rate of approximate matches between human labelers using a 0-7 severity scale and the safety evaluations' automated annotator also using a 0-7 severity scale on the same datasets. For each risk area, we had both human labelers and an automated annotator label 500 English, single-turn texts. The human labelers and the automated annotator didn't use exactly the same versions of the annotation guidelines; while the automated annotator's guidelines stemmed from the guidelines for humans, they have since diverged to varying degrees (with the hate and unfairness guidelines having diverged the most). Despite these slight to moderate differences, we believe it's still useful to share general trends and insights from our comparison of approximate matches. In our comparisons, we looked for matches with a 2-level tolerance (where human label matched automated annotator label exactly or was within 2 levels above or below in severity), matches with a 1-level tolerance, and matches with a 0-level tolerance.
+For all supported content risk types, we have internally checked the quality by comparing the rate of approximate matches between human labelers using a 0-7 severity scale and the safety evaluations' automated annotator also using a 0-7 severity scale on the same datasets. For each risk area, we had both human labelers and an automated annotator label 500 English, single-turn texts, 250 single-turn text-to-image generations, and 250 multi-modal text with image-to-text generations. The human labelers and the automated annotator didn't use exactly the same versions of the annotation guidelines; while the automated annotator's guidelines stemmed from the guidelines for humans, they have since diverged to varying degrees (with the hate and unfairness guidelines having diverged the most). Despite these slight to moderate differences, we believe it's still useful to share general trends and insights from our comparison of approximate matches. In our comparisons, we looked for matches with a 2-level tolerance (where human label matched automated annotator label exactly or was within 2 levels above or below in severity), matches with a 1-level tolerance, and matches with a 0-level tolerance.
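The approximate-match comparison described above amounts to counting label pairs that fall within a tolerance band; a small illustration with invented labels:

```python
# Illustrative only: rate of approximate matches between human and automated
# severity labels (0-7 scale) at a given tolerance. The labels are invented.
def match_rate(human, auto, tolerance):
    pairs = list(zip(human, auto))
    matches = sum(1 for h, a in pairs if abs(h - a) <= tolerance)
    return matches / len(pairs)

human_labels = [0, 3, 5, 7, 2, 6]
auto_labels = [1, 3, 4, 5, 2, 7]

for tol in (0, 1, 2):
    print(f"{tol}-level tolerance: {match_rate(human_labels, auto_labels, tol):.0%}")
```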
articles/ai-studio/how-to/flow-develop.md (14 additions, 3 deletions)
@@ -9,8 +9,8 @@ ms.custom:
- build-2024
- ignite-2024
ms.topic: how-to
-ms.date: 11/08/2024
-ms.reviewer: jinzhong
+ms.date: 01/10/2025
+ms.reviewer: gmuthukumar
ms.author: lagayhar
author: lgayhardt
---
@@ -34,14 +34,25 @@ In this article, you learn how to create and develop your first prompt flow in A
- If you don't have an Azure AI Foundry project already, first [create a project](create-projects.md).
- Prompt flow requires a compute session. If you don't have a runtime, you can [create one in Azure AI Foundry portal](./create-manage-compute-session.md).
- You need a deployed model.
-
+- In your project, configure access control for the blob storage account. Assign the **Storage Blob Data Contributor** role to your user account.
+- In the bottom left of the Azure AI Foundry portal, select **Management center**.
+- In **Connected resources** for your project, select the link that corresponds to the **Azure Blob Storage** type.
+- Select **View in Azure portal**.
+- In the Azure portal, select **Access control (IAM)**.
+- Select **Add** > **Add role assignment**.
+- Search for **Storage Blob Data Contributor**, then select it.
+- Use the **Add role assignment** page to add yourself as a member.
+- Select **Review + assign** to review the assignment.
+- Select **Review + assign** to assign the role.
+
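For reference, the portal steps added above boil down to a single assignment of **Storage Blob Data Contributor** on the project's blob storage account; a hedged sketch that shells out to the Azure CLI, with every identifier a placeholder:

```python
# Hedged alternative to the portal steps above: assign the Storage Blob Data
# Contributor role on the storage account via the Azure CLI. All identifiers
# are placeholders; run `az login` first.
import subprocess

scope = (
    "/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)

subprocess.run(
    [
        "az", "role", "assignment", "create",
        "--assignee", "<your-user@contoso.com>",
        "--role", "Storage Blob Data Contributor",
        "--scope", scope,
    ],
    check=True,
)
```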
## Create and develop your Prompt flow
You can create a flow by either cloning the samples available in the gallery or creating a flow from scratch. If you already have flow files in local or file share, you can also import the files to create a flow.
To create a prompt flow from the gallery in Azure AI Foundry portal:
1. Sign in to [Azure AI Foundry](https://ai.azure.com) and select your project.
+1. If you're in the Management center, select **Go to project** to return to your project.
1. From the collapsible left menu, select **Prompt flow**.
1. Select **+ Create**.
1. In the **Standard flow** tile, select **Create**.
@@ -72,7 +72,7 @@ Let's change the script to take input from a client application and generate a s
1. Now define a `get_chat_response` function that takes messages and context, generates a system message using a prompt template, and calls a model. Add this code to your existing **chat.py** file:
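The referenced code block isn't reproduced in this diff; as a rough, hypothetical stand-in (not the article's exact code), here is a `get_chat_response` sketch using the `openai` package against an Azure OpenAI deployment, with the prompt template reduced to a plain format string and all environment-variable and deployment names assumed:

```python
# Rough stand-in for the chat.py step above -- not the article's exact code.
# Assumes the openai package and an Azure OpenAI chat deployment; the prompt
# "template" here is a format string, and all names are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

SYSTEM_TEMPLATE = (
    "You are a helpful assistant. Use this context about the user when you answer:\n{context}"
)

def get_chat_response(messages, context):
    # Render a system message from the template, then call the chat model.
    system_message = {
        "role": "system",
        "content": SYSTEM_TEMPLATE.format(context=context),
    }
    response = client.chat.completions.create(
        model=os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o-mini"),
        messages=[system_message, *messages],
    )
    return response.choices[0].message
```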
@@ -81,7 +81,7 @@ Let's change the script to take input from a client application and generate a s
1. Now simulate passing information from a frontend application to this function. Add the following code to the end of your **chat.py** file. Feel free to play with the message and add your own name.
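Similarly, a hypothetical stand-in for the simulated frontend call, reusing the `get_chat_response` sketch above; the message and context values are arbitrary:

```python
# Rough stand-in for the step above: pretend a frontend sent a message plus
# some user context, then print the model's reply. Values are arbitrary.
if __name__ == "__main__":
    reply = get_chat_response(
        messages=[{"role": "user", "content": "What hiking boots would you recommend?"}],
        context={"first_name": "Jessie", "location": "Seattle"},
    )
    print(reply.content)
```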
articles/machine-learning/v1/concept-automated-ml.md (1 addition, 1 deletion)
@@ -197,7 +197,7 @@ You can also inspect the logged job information, which [contains metrics](../how
While model building is automated, you can also [learn how important or relevant features are](../how-to-configure-auto-train.md) to the generated models.
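As a hedged illustration, global feature importance for an automated ML (v1) best run can be retrieved with the `azureml-interpret` package; the `best_run` object and the exact method names are assumptions that may vary by SDK version:

```python
# Hedged sketch: download model explanations for an automated ML (v1) run.
# `best_run` is assumed to be the best child run of a completed AutoML
# experiment, e.g. best_run, fitted_model = automl_run.get_output().
from azureml.interpret import ExplanationClient

client = ExplanationClient.from_run(best_run)
explanation = client.download_model_explanation()

# Global feature importance, as {feature name: importance score}.
for name, score in explanation.get_feature_importance_dict().items():
    print(f"{name}: {score:.4f}")
```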
0 commit comments