
Commit 70337bc

Merge pull request #2256 from MicrosoftDocs/main

Publish to live, Sunday 4 AM PST, 1/12

Parents: 28bc1fd and 3d52dcf

10 files changed (+38, -30 lines)

.openpublishing.publish.config.json

Lines changed: 0 additions & 6 deletions
```diff
@@ -110,12 +110,6 @@
       "branch": "main",
       "branch_mapping": {}
     },
-    {
-      "path_to_root": "azureai-samples-temp",
-      "url": "https://github.com/sdgilley/azureai-samples",
-      "branch": "sdg-add-quickstart",
-      "branch_mapping": {}
-    },
     {
       "path_to_root": "azureai-samples-csharp",
       "url": "https://github.com/Azure-Samples/azureai-samples",
```

articles/ai-services/document-intelligence/concept/choose-model-feature.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -15,7 +15,7 @@ ms.author: lajanuar
 
 Azure AI Document Intelligence supports a wide variety of models that enable you to add intelligent document processing to your applications and optimize your workflows. Selecting the right model is essential to ensure the success of your enterprise. In this article, we explore the available Document Intelligence models and provide guidance for how to choose the best solution for your projects.
 
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fX1b]
+> [!VIDEO 364078d4-14bc-4b16-995a-526db31ea1ee]
 
 The following decision charts highlight the features of each supported model to help you choose the model that best meets the needs and requirements of your application.
 
```

articles/ai-services/document-intelligence/how-to-guides/build-a-custom-model.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -43,7 +43,7 @@ Once you gather a set of forms or documents for training, you need to upload it
 
 * Once you gather and upload your training dataset, you're ready to train your custom model. In the following video, we create a project and explore some of the fundamentals for successfully labeling and training a model.
 
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fX1c]
+> [!VIDEO b716cdc7-3c23-4c69-a2ef-e131166f792b]
 
 ## Create a project in the Document Intelligence Studio
 
```

articles/ai-services/document-intelligence/train/custom-label-tips.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -36,7 +36,7 @@ This article highlights the best methods for labeling custom model datasets in t
 
 * We examine best practices for labeling your selected documents. With semantically relevant and consistent labeling, you should see an improvement in model performance.
 
-> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5fZKB]
+> [!VIDEO cae72200-eeca-4897-8ca7-2e91696cac83]
 
 ### Search
 
```

articles/ai-studio/concepts/safety-evaluations-transparency-note.md

Lines changed: 15 additions & 12 deletions
```diff
@@ -1,19 +1,19 @@
 ---
-title: Transparency Note for Azure AI Foundry safety evaluations
+title: Azure AI Foundry risk and safety evaluations (preview) Transparency Note
 titleSuffix: Azure AI Foundry
 description: Azure AI Foundry safety evaluations intended purpose, capabilities, limitations and how to achieve the best performance.
 manager: scottpolly
 ms.service: azure-ai-studio
 ms.custom:
 - build-2024
 ms.topic: article
-ms.date: 11/21/2024
+ms.date: 01/10/2025
 ms.reviewer: mithigpe
 ms.author: lagayhar
 author: lgayhardt
 ---
 
-# Transparency Note for Azure AI Foundry safety evaluations
+# Azure AI Foundry risk and safety evaluations (preview) Transparency Note
 
 [!INCLUDE [feature-preview](../includes/feature-preview.md)]
 
```
```diff
@@ -23,27 +23,30 @@ An AI system includes not only the technology, but also the people who will use
 
 Microsoft's Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft AI principles](https://www.microsoft.com/en-us/ai/responsible-ai).
 
-## The basics of Azure AI Foundry safety evaluations
+## The basics of Azure AI Foundry risk and safety evaluations (preview)
 
 ### Introduction
 
-The Azure AI Foundry portal safety evaluations let users evaluate the output of their generative AI application for textual content risks: hateful and unfair content, sexual content, violent content, self-harm-related content, jailbreak vulnerability. Safety evaluations can also help generate adversarial datasets to help you accelerate and augment the red-teaming operation. Azure AI Foundry safety evaluations reflect Microsoft's commitments to ensure AI systems are built safely and responsibly, operationalizing our Responsible AI principles.
+The Azure AI Foundry risk and safety evaluations let users evaluate the output of their generative AI application for textual content risks: hateful and unfair content, sexual content, violent content, self-harm-related content, direct and indirect jailbreak vulnerability, and protected material in content. Safety evaluations can also help generate adversarial datasets to help you accelerate and augment the red-teaming operation. Azure AI Foundry safety evaluations reflect Microsoft's commitments to ensure AI systems are built safely and responsibly, operationalizing our Responsible AI principles.
 
 ### Key terms
 
-- **Hateful and unfair content** refers to any language pertaining to hate toward or unfair representations of individuals and social groups along factors including but not limited to race, ethnicity, nationality, gender, sexual orientation, religion, immigration status, ability, personal appearance, and body size. Unfairness occurs when AI systems treat or represent social groups inequitably, creating or contributing to societal inequities.
-- **Sexual content** includes language pertaining to anatomical organs and genitals, romantic relationships, acts portrayed in erotic terms, pregnancy, physical sexual acts (including assault or sexual violence), prostitution, pornography, and sexual abuse.
-- **Violent content** includes language pertaining to physical actions intended to hurt, injure, damage, or kill someone or something. It also includes descriptions of weapons and guns (and related entities such as manufacturers and associations).
-- **Self-harm-related content** includes language pertaining to actions intended to hurt, injure, or damage one's body or kill oneself.
-- **Jailbreak**, direct prompt attacks, or user prompt injection attacks, refer to users manipulating prompts to inject harmful inputs into LLMs to distort actions and outputs. An example of a jailbreak command is a 'DAN' (Do Anything Now) attack, which can trick the LLM into inappropriate content generation or ignoring system-imposed restrictions.
+- **Hateful and unfair content (for text and images)** refers to any language or imagery pertaining to hate toward or unfair representations of individuals and social groups along factors including but not limited to race, ethnicity, nationality, gender, sexual orientation, religion, immigration status, ability, personal appearance, and body size. Unfairness occurs when AI systems treat or represent social groups inequitably, creating or contributing to societal inequities.
+- **Sexual content (for text and images)** includes language or imagery pertaining to anatomical organs and genitals, romantic relationships, acts portrayed in erotic terms, pregnancy, physical sexual acts (including assault or sexual violence), prostitution, pornography, and sexual abuse.
+- **Violent content (for text and images)** includes language or imagery pertaining to physical actions intended to hurt, injure, damage, or kill someone or something. It also includes descriptions of weapons and guns (and related entities such as manufacturers and associations).
+- **Self-harm-related content (for text and images)** includes language or imagery pertaining to actions intended to hurt, injure, or damage one's body or kill oneself.
+- **Protected material content (for text)** known textual content, for example, song lyrics, articles, recipes, and selected web content, that might be output by large language models. By detecting and preventing the display of protected material, organizations can maintain compliance with intellectual property rights and preserve content originality.
+- **Protected material content (for images)** refers to certain protected visual content, that is protected by copyright such as logos and brands, artworks, or fictional characters. The system uses an image-to-text foundation model to identify whether such content is present.
+- **Direct jailbreak**, direct prompt attacks, or user prompt injection attacks, refer to users manipulating prompts to inject harmful inputs into LLMs to distort actions and outputs. An example of a jailbreak command is a 'DAN' (Do Anything Now) attack, which can trick the LLM into inappropriate content generation or ignoring system-imposed restrictions.
+- **Indirect jailbreak** indirect prompt attacks or cross-domain prompt injection attacks, refers to when malicious instructions are hidden within data that an AI system processes or generates grounded content from. This data can include emails, documents, websites, or other sources not directly authored by the developer or user and can lead to inappropriate content generation or ignoring system-imposed restrictions.
 - **Defect rate (content risk)** is defined as the percentage of instances in your test dataset that surpass a threshold on the severity scale over the whole dataset size.
 - **Red-teaming** has historically described systematic adversarial attacks for testing security vulnerabilities. With the rise of Large Language Models (LLM), the term has extended beyond traditional cybersecurity and evolved in common usage to describe many kinds of probing, testing, and attacking of AI systems. With LLMs, both benign and adversarial usage can produce potentially harmful outputs, which can take many forms, including harmful content such as hateful speech, incitement or glorification of violence, reference to self-harm-related content or sexual content.
 
 ## Capabilities
 
 ### System behavior
 
-Azure AI Foundry provisions an Azure OpenAI GPT-4 model and orchestrates adversarial attacks against your application to generate a high quality test dataset. It then provisions another GPT-4 model to annotate your test dataset for content and security. Users provide their generative AI application endpoint that they wish to test, and the safety evaluations will output a static test dataset against that endpoint along with its content risk label (Very low, Low, Medium, High) and reasoning for the AI-generated label.
+Azure AI Foundry provisions a fine-tuned Azure OpenAI GPT-4o model and orchestrates adversarial attacks against your application to generate a high quality test dataset. It then provisions another GPT-4o model to annotate your test dataset for content and security. Users provide their generative AI application endpoint that they wish to test, and the safety evaluations will output a static test dataset against that endpoint along with its content risk label (Very low, Low, Medium, High) or content risk detection label (True or False) and reasoning for the AI-generated label.
 
 ### Use cases
 
```
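The **Defect rate (content risk)** definition in the hunk above reduces to a small computation. As a minimal sketch, assuming a list of 0-7 severity labels from the automated annotator and a hypothetical threshold (illustrative only, not an Azure SDK call):

```python
def defect_rate(severity_labels: list[int], threshold: int = 3) -> float:
    """Percentage of instances whose 0-7 severity label exceeds the threshold."""
    if not severity_labels:
        return 0.0
    defects = sum(1 for s in severity_labels if s > threshold)
    return 100.0 * defects / len(severity_labels)

# Two of eight labels exceed the (hypothetical) threshold of 3 -> 25.0
print(defect_rate([0, 1, 5, 2, 7, 3, 0, 1], threshold=3))
```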

```diff
@@ -88,7 +91,7 @@ We encourage customers to leverage Azure AI Foundry safety evaluations in their
 
 ### Evaluation methods
 
-For all supported content risk types, we have internally checked the quality by comparing the rate of approximate matches between human labelers using a 0-7 severity scale and the safety evaluations' automated annotator also using a 0-7 severity scale on the same datasets. For each risk area, we had both human labelers and an automated annotator label 500 English, single-turn texts. The human labelers and the automated annotator didn't use exactly the same versions of the annotation guidelines; while the automated annotator's guidelines stemmed from the guidelines for humans, they have since diverged to varying degrees (with the hate and unfairness guidelines having diverged the most). Despite these slight to moderate differences, we believe it's still useful to share general trends and insights from our comparison of approximate matches. In our comparisons, we looked for matches with a 2-level tolerance (where human label matched automated annotator label exactly or was within 2 levels above or below in severity), matches with a 1-level tolerance, and matches with a 0-level tolerance.
+For all supported content risk types, we have internally checked the quality by comparing the rate of approximate matches between human labelers using a 0-7 severity scale and the safety evaluations' automated annotator also using a 0-7 severity scale on the same datasets. For each risk area, we had both human labelers and an automated annotator label 500 English, single-turn texts, 250 single-turn text-to-image generations, and 250 multi-modal text with image-to-text generations. The human labelers and the automated annotator didn't use exactly the same versions of the annotation guidelines; while the automated annotator's guidelines stemmed from the guidelines for humans, they have since diverged to varying degrees (with the hate and unfairness guidelines having diverged the most). Despite these slight to moderate differences, we believe it's still useful to share general trends and insights from our comparison of approximate matches. In our comparisons, we looked for matches with a 2-level tolerance (where human label matched automated annotator label exactly or was within 2 levels above or below in severity), matches with a 1-level tolerance, and matches with a 0-level tolerance.
 
 ### Evaluation results
 
```
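The tolerance-based comparison described in the hunk above is easy to make concrete. A minimal sketch, assuming paired human and annotator scores on the 0-7 severity scale (illustrative data only):

```python
def match_rate(human: list[int], auto: list[int], tolerance: int) -> float:
    """Percent of pairs whose labels differ by at most `tolerance` levels."""
    assert len(human) == len(auto) and human, "need equal-length, nonempty lists"
    hits = sum(1 for h, a in zip(human, auto) if abs(h - a) <= tolerance)
    return 100.0 * hits / len(human)

human_labels = [0, 2, 5, 7, 4]
auto_labels = [1, 2, 3, 6, 4]
for tol in (0, 1, 2):
    print(f"{tol}-level tolerance: {match_rate(human_labels, auto_labels, tol):.0f}%")
# 0-level: 40%, 1-level: 80%, 2-level: 100%
```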

articles/ai-studio/how-to/flow-deploy.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ ms.custom:
 - ignite-2024
 ms.topic: how-to
 ms.date: 5/21/2024
-ms.reviewer: likebupt
+ms.reviewer: gmuthukumar
 ms.author: lagayhar
 author: lgayhardt
 ---
```

articles/ai-studio/how-to/flow-develop.md

Lines changed: 14 additions & 3 deletions
```diff
@@ -9,8 +9,8 @@ ms.custom:
 - build-2024
 - ignite-2024
 ms.topic: how-to
-ms.date: 11/08/2024
-ms.reviewer: jinzhong
+ms.date: 01/10/2025
+ms.reviewer: gmuthukumar
 ms.author: lagayhar
 author: lgayhardt
 ---
```

```diff
@@ -34,14 +34,25 @@ In this article, you learn how to create and develop your first prompt flow in A
 - If you don't have an Azure AI Foundry project already, first [create a project](create-projects.md).
 - Prompt flow requires a compute session. If you don't have a runtime, you can [create one in Azure AI Foundry portal](./create-manage-compute-session.md).
 - You need a deployed model.
-
+- In your project, configure access control for the blog storage account. Assign the **Storage Blob Data Contributor** role to your user account.
+  - In the bottom left of the Azure AI Foundry portal, select **Management center**.
+  - In **Connected resources** for your project, select the link that corresponds to the **Azure Blob Storage** type.
+  - Select **View in Azure Portal**
+  - In the Azure portal, select **Access control (IAM)**.
+  - Select **Add>Add role assignment**.
+  - Search for **Storage Blob Data Contributor**, then select it.
+  - Use the **Add role assignment** page to add yourself as a member.
+  - Select **Review + assign** to review the assignment.
+  - Select **Review + assign** to assign the role.
+
 ## Create and develop your Prompt flow
 
 You can create a flow by either cloning the samples available in the gallery or creating a flow from scratch. If you already have flow files in local or file share, you can also import the files to create a flow.
 
 To create a prompt flow from the gallery in Azure AI Foundry portal:
 
 1. Sign in to [Azure AI Foundry](https://ai.azure.com) and select your project.
+1. If you're in the Management center, select **Go to project** to return to your project.
 1. From the collapsible left menu, select **Prompt flow**.
 1. Select **+ Create**.
 1. In the **Standard flow** tile, select **Create**.
```
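The new prerequisite in the hunk above walks through assigning the **Storage Blob Data Contributor** role in the portal. As a quick way to confirm the assignment took effect, a sketch along these lines lists the containers in the connected storage account with your own credentials; the account URL is a placeholder and the packages (`azure-identity`, `azure-storage-blob`) are assumptions, not part of this docs change:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Hypothetical placeholder: the storage account connected to your project.
ACCOUNT_URL = "https://<storage-account-name>.blob.core.windows.net"

service = BlobServiceClient(account_url=ACCOUNT_URL, credential=DefaultAzureCredential())

# Listing containers is a data-plane operation, so it succeeds only once the
# Storage Blob Data Contributor assignment has propagated (can take a few minutes).
for container in service.list_containers():
    print(container.name)
```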

articles/ai-studio/quickstarts/get-started-code.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -42,7 +42,7 @@ pip install azure-ai-projects azure-ai-inference azure-identity
 
 Create a file named **chat.py**. Copy and paste the following code into it.
 
-:::code language="python" source="~/azureai-samples-temp/scenarios/inference/chat-app/chat-simple.py":::
+:::code language="python" source="~/azureai-samples-main/scenarios/projects/basic/chat-simple.py":::
 
 ## Insert your connection string
 
```

```diff
@@ -72,7 +72,7 @@ Let's change the script to take input from a client application and generate a s
 
 1. Now define a `get_chat_response` function that takes messages and context, generates a system message using a prompt template, and calls a model. Add this code to your existing **chat.py** file:
 
-   :::code language="python" source="~/azureai-samples-temp/scenarios/inference/chat-app/chat-template.py" id="chat_function":::
+   :::code language="python" source="~/azureai-samples-main/scenarios/projects/basic/chat-template.py" id="chat_function":::
 
 > [!NOTE]
 > The prompt template uses mustache format.
```

```diff
@@ -81,7 +81,7 @@ Let's change the script to take input from a client application and generate a s
 
 1. Now simulate passing information from a frontend application to this function. Add the following code to the end of your **chat.py** file. Feel free to play with the message and add your own name.
 
-   :::code language="python" source="~/azureai-samples-temp/scenarios/inference/chat-app/chat-template.py" id="create_response":::
+   :::code language="python" source="~/azureai-samples-main/scenarios/projects/basic/chat-template.py" id="create_response":::
 
 Run the revised script to see the response from the model with this new input.
```
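These hunks retarget the embedded snippets from the temporary fork to the main samples repo, but the snippet bodies themselves aren't part of the diff. For orientation, a minimal sketch of what a chat-simple.py along these lines might contain, assuming the preview azure-ai-projects and azure-ai-inference packages from the install step and a hypothetical deployment name (the sample's actual contents may differ):

```python
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# Connection string copied from the project's overview page in Azure AI Foundry.
project = AIProjectClient.from_connection_string(
    conn_str=os.environ["AIPROJECT_CONNECTION_STRING"],
    credential=DefaultAzureCredential(),
)

# Chat client routed through the project's inference endpoint.
chat = project.inference.get_chat_completions_client()

response = chat.complete(
    model="gpt-4o-mini",  # assumption: a model deployed under this name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about the ocean."},
    ],
)
print(response.choices[0].message.content)
```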

articles/ai-studio/tutorials/copilot-sdk-evaluate.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -99,7 +99,7 @@ In Part 1 of this tutorial series, you created an **.env** file that specifies t
 1. Install the required package:
 
    ```bash
-   pip install azure-ai-evaluation
+   pip install azure-ai-evaluation[remote]
    ```
 
 1. Now run the evaluation script:
````
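The `[remote]` extra pulls in the dependencies that let evaluation results upload to your Azure AI project (in some shells, such as zsh, the brackets need quoting: `pip install "azure-ai-evaluation[remote]"`). A minimal sketch of a remote-tracked run, assuming the azure-ai-evaluation `evaluate` entry point, a hypothetical dataset file, and placeholder resource names:

```python
import os

from azure.ai.evaluation import evaluate, RelevanceEvaluator

# Model configuration for the AI-assisted evaluator (hypothetical deployment).
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": "gpt-4o",
}

result = evaluate(
    data="eval_dataset.jsonl",  # one {"query": ..., "response": ...} object per line
    evaluators={"relevance": RelevanceEvaluator(model_config)},
    # With the [remote] extra installed, results also log to this project.
    azure_ai_project={
        "subscription_id": os.environ["AZURE_SUBSCRIPTION_ID"],
        "resource_group_name": "my-resource-group",  # placeholder
        "project_name": "my-ai-project",  # placeholder
    },
)
print(result["metrics"])
```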

articles/machine-learning/v1/concept-automated-ml.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -197,7 +197,7 @@ You can also inspect the logged job information, which [contains metrics](../how
 
 While model building is automated, you can also [learn how important or relevant features are](../how-to-configure-auto-train.md) to the generated models.
 
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE2Xc9t]
+> [!VIDEO eddb2bd4-407e-470d-8fe9-6e60585b9910]
 
 <a name="local-remote"></a>
```
