Commit 6c7866d

Merge branch 'main' into release-preview-eval-redteaming
2 parents: 29c1d08 + e9b4143

13 files changed: +96 −39 lines

.github/policies/disallow-edits.yml

Lines changed: 60 additions & 4 deletions

@@ -19,14 +19,18 @@ configuration:
           @${issueAuthor} - You tried to add an index file to this repository; this is not permitted so your pull request will be closed automatically.
     - closePullRequest

-  - description: Close PRs to the "ai-services/personalizer" folder where the author isn't a member of the MicrosoftDocs org (i.e. PRs in public repo).
+  - description: Close PRs to the "ai-services/personalizer" and "ai-services/responsible-ai" folders where the author isn't a member of the MicrosoftDocs org (i.e. PRs in public repo).
     if:
     - payloadType: Pull_Request
     - isAction:
         action: Opened
-    - filesMatchPattern:
-        matchAny: true
-        pattern: articles/ai-services/personalizer/*
+    - or:
+      - filesMatchPattern:
+          matchAny: true
+          pattern: articles/ai-services/personalizer/*
+      - filesMatchPattern:
+          matchAny: true
+          pattern: articles/ai-services/responsible-ai/*
     - not:
         activitySenderHasAssociation:
           association: Member
@@ -65,3 +69,55 @@ configuration:
         - mrbullwinkle
         replyTemplate: ${mentionees} - Please review this PR and sign off when you're ready to merge it.
         assignMentionees: True # This part probably won't work since the bot doesn't have write perms.
+    - addLabel:
+        label: needs-human-review
+
+  - description: \@mention specific people when a PR is opened in the "ai-services/responsible-ai" folder.
+    if:
+    - payloadType: Pull_Request
+    - isAction:
+        action: Opened
+    - filesMatchPattern:
+        matchAny: true
+        pattern: articles/ai-services/responsible-ai/*
+    - activitySenderHasAssociation:
+        association: Member
+    - not:
+        or:
+        - isActivitySender:
+            user: eric-urban
+        - isActivitySender:
+            user: nitinme
+        - isActivitySender:
+            user: mrbullwinkle
+        - isActivitySender:
+            user: aahill
+        - isActivitySender:
+            user: laujan
+        - isActivitySender:
+            user: patrickfarley
+        - isActivitySender:
+            user: jboback
+        - isActivitySender:
+            user: heidisteen
+        - isActivitySender:
+            user: haileytap
+    then:
+    - addReply:
+        reply: >-
+          @${issueAuthor} - Please don't sign off on this PR. The area owners will sign off once they've reviewed your contribution.
+    - mentionUsers:
+        mentionees:
+        - eric-urban
+        - nitinme
+        - mrbullwinkle
+        - aahill
+        - laujan
+        - patrickfarley
+        - jboback
+        - heidisteen
+        - haileytap
+        replyTemplate: ${mentionees} - Please review this PR and sign off when you're ready to merge it.
+        assignMentionees: True # This part probably won't work since the bot doesn't have write perms.
+    - addLabel:
+        label: needs-human-review
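
For reference, here's a minimal sketch of the updated close-PR condition as it reads after this change: the single `filesMatchPattern` check becomes an `or:` over two pattern checks. Only the `if:` block shown in the hunk is sketched; the rule's `then:` actions aren't part of this diff, and the indentation and surrounding task structure are illustrative rather than copied from the full policy file.

```yaml
# Sketch (illustrative indentation): after this commit, PRs from authors who
# aren't MicrosoftDocs members and that touch either the personalizer or the
# responsible-ai folder match this rule and are closed automatically.
- description: Close PRs to the "ai-services/personalizer" and "ai-services/responsible-ai" folders where the author isn't a member of the MicrosoftDocs org (i.e. PRs in public repo).
  if:
    - payloadType: Pull_Request
    - isAction:
        action: Opened
    - or:
        - filesMatchPattern:
            matchAny: true
            pattern: articles/ai-services/personalizer/*
        - filesMatchPattern:
            matchAny: true
            pattern: articles/ai-services/responsible-ai/*
    - not:
        activitySenderHasAssociation:
          association: Member
```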

articles/ai-foundry/concepts/trace.md

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@ ms.service: azure-ai-foundry
 ms.custom:
   - ignite-2024
 ms.topic: conceptual
-ms.date: 03/12/2025
+ms.date: 03/31/2025
 ms.reviewer: truptiparkar
 ms.author: lagayhar
 author: lgayhardt

articles/ai-foundry/how-to/develop/trace-local-sdk.md

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@ ms.custom:
   - build-2024
   - ignite-2024
 ms.topic: how-to
-ms.date: 03/12/2025
+ms.date: 03/31/2025
 ms.reviewer: truptiparkar
 ms.author: lagayhar
 author: lgayhardt

articles/ai-foundry/how-to/evaluate-results.md

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ ms.custom:
   - build-2024
   - ignite-2024
 ms.topic: how-to
-ms.date: 12/18/2024
+ms.date: 03/31/2025
 ms.reviewer: wenxwei
 ms.author: lagayhar
 author: lgayhardt

articles/ai-foundry/how-to/flow-develop-evaluation.md

Lines changed: 3 additions & 3 deletions

@@ -8,7 +8,7 @@ ms.custom:
   - ignite-2023
   - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 3/31/2025
 ms.reviewer: mithigpe
 ms.author: lagayhar
 author: lgayhardt
@@ -31,12 +31,12 @@ In prompt flow, you can customize or create your own evaluation flow tailored to
 There are two ways to develop your own evaluation methods:

 - **Customize a Built-in Evaluation Flow:** Modify a built-in evaluation flow.
-    1. Under *Tools* select **Prompt flow**.
+    1. Under *Build and customize* select **Prompt flow**.
     2. Select **Create** to open the flow creation wizard.
     3. In the flow gallery under *Explore gallery* select **Evaluation flow** to filter by that type. Pick a sample and select **Clone** to do customization.

 - **Create a New Evaluation Flow from Scratch:** Develop a brand-new evaluation method from the ground up.
-    1. Under *Tools* select **Prompt flow**.
+    1. Under *Build and customize* select **Prompt flow**.
     2. Select **Create** to open the flow creation wizard.
     3. In the flow gallery under *Create by type* in the "Evaluation flow" box select **Create** then you can see a template of evaluation flow.

articles/ai-foundry/how-to/flow-tune-prompts-using-variants.md

Lines changed: 4 additions & 4 deletions

@@ -8,7 +8,7 @@ ms.custom:
   - ignite-2023
   - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 3/31/2025
 ms.reviewer: none
 ms.author: lagayhar
 author: lgayhardt
@@ -53,7 +53,7 @@ Benefits of using variants include:
 In this article, we use **Web Classification** sample flow as example.

 1. Open the sample flow and remove the **prepare_examples** node as a start.
-    1. Under *Tools* select **Prompt flow**.
+    1. Under *Build and customize* select **Prompt flow**.
     2. Select **Create** to open the flow creation wizard.
     3. In the flow gallery under *Explore gallery* in the "Web Classification" box select **Clone**.
     4. In the flow tab, delete the **prepare_examples** node.
@@ -67,7 +67,7 @@ The classification will be based on the url, the webpage text content summary, o
 For a given URL : {{url}}, and text content: {{text_content}}.
 Classify above url to complete the category and indicate evidence.

-The output shoule be in this format: {"category": "App", "evidence": "Both"}
+The output should be in this format: {"category": "App", "evidence": "Both"}
 OUTPUT:
 ```
@@ -147,7 +147,7 @@ When you run the variants with a few single pieces of data and check the results
 You can submit a batch run, which allows you test the variants with a large amount of data and evaluate them with metrics, to help you find the best fit.

 1. First you need to prepare a dataset, which is representative enough of the real-world problem you want to solve with Prompt flow. In this example, it's a list of URLs and their classification ground truth. We use accuracy to evaluate the performance of variants.
-2. Select **Evaluate** on the top right of the page.
+2. Select **Evaluate** on the top right of the page then select **Custom Evaluation**.
 3. A wizard for **Batch run & Evaluate** occurs. The first step is to select a node to run all its variants.

 To test how different variants work for each node in a flow, you need to run a batch run for each node with variants one by one. This helps you avoid the influence of other nodes' variants and focus on the results of this node's variants. This follows the rule of the controlled experiment, which means that you only change one thing at a time and keep everything else the same.

articles/ai-foundry/model-inference/quotas-limits.md

Lines changed: 4 additions & 4 deletions

@@ -32,11 +32,11 @@ Azure uses quotas and limits to prevent budget overruns due to fraud, and to hon
 | -------------------- | ------------------- | ----------- |
 | Tokens per minute | Azure OpenAI models | Varies per model and SKU. See [limits for Azure OpenAI](../../ai-services/openai/quotas-limits.md). |
 | Requests per minute | Azure OpenAI models | Varies per model and SKU. See [limits for Azure OpenAI](../../ai-services/openai/quotas-limits.md). |
-| Tokens per minute | DeepSeek-R1 | 5.000.000 |
-| Requests per minute | DeepSeek-R1 | 5.000 |
+| Tokens per minute | DeepSeek-R1 | 5,000,000 |
+| Requests per minute | DeepSeek-R1 | 5,000 |
 | Concurrent requests | DeepSeek-R1 | 300 |
-| Tokens per minute | Rest of models | 200.000 |
-| Requests per minute | Rest of models | 1.000 |
+| Tokens per minute | Rest of models | 400,000 |
+| Requests per minute | Rest of models | 1,000 |
 | Concurrent requests | Rest of models | 300 |

 You can [request increases to the default limits](#request-increases-to-the-default-limits). Due to high demand, limit increase requests can be submitted and evaluated per request.

articles/ai-services/openai/how-to/stored-completions.md

Lines changed: 2 additions & 2 deletions

@@ -115,7 +115,7 @@ curl $AZURE_OPENAI_ENDPOINT/openai/deployments/gpt-4o/chat/completions?api-versi
   -H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN" \
   -d '{
     "model": "gpt-4o",
-    "store": True,
+    "store": true,
     "messages": [
       {
         "role": "system",
@@ -137,7 +137,7 @@ curl $AZURE_OPENAI_ENDPOINT/openai/deployments/gpt-4o/chat/completions?api-versi
   -H "api-key: $AZURE_OPENAI_API_KEY" \
   -d '{
     "model": "gpt-4o",
-    "store": True,
+    "store": true,
     "messages": [
       {
         "role": "system",

articles/machine-learning/concept-responsible-ai-scorecard.md

Lines changed: 8 additions & 9 deletions

@@ -1,35 +1,34 @@
 ---
 title: Share Responsible AI insights and make data-driven decisions with Azure Machine Learning Responsible AI scorecard
 titleSuffix: Azure Machine Learning
-description: Learn about how to use the Responsible AI scorecard to share responsible AI insights from your machine learning models and make data-driven decisions with non-technical and technical stakeholders.
+description: Learn about how to use the Responsible AI scorecard to share responsible AI insights from your machine learning models and make data-driven decisions with nontechnical and technical stakeholders.
 services: machine-learning
 ms.service: azure-machine-learning
 ms.subservice: responsible-ai
 ms.topic: conceptual
 ms.author: lagayhar
 author: lgayhardt
 ms.reviewer: mesameki
-ms.date: 02/27/2024
+ms.date: 03/31/2025
 ms.custom: responsible-ml, build-2023, build-2023-dataai
 ---

 # Share Responsible AI insights using the Responsible AI scorecard (preview)

 Our Responsible AI dashboard is designed for machine learning professionals and data scientists to explore and evaluate model insights and inform their data-driven decisions. While it can help you implement Responsible AI practically in your machine learning lifecycle, there are some needs left unaddressed:

-- There often exists a gap between the technical Responsible AI tools (designed for machine-learning professionals) and the ethical, regulatory, and business requirements that define the production environment.
-- While an end-to-end machine learning life cycle includes both technical and non-technical stakeholders in the loop, there's little support to enable an effective multi-stakeholder alignment, helping technical experts get timely feedback and direction from the non-technical stakeholders.
-- AI regulations make it essential to be able to share model and data insights with auditors and risk officers for auditability purposes.
+- The gap between the technical Responsible AI tools (designed for machine learning professionals) and the ethical, regulatory, and business requirements that define the production environment.
+- The need for effective multi-stakeholder alignment in an end-to-end machine learning lifecycle, ensuring technical experts receive timely feedback and direction from nontechnical stakeholders.
+- The ability to share model and data insights with auditors and risk officers for auditability purposes, as required by AI regulations.

-One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the archival of model and data insights in the Azure Machine Learning Run History (for quick reference in future). As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard to empower ML professionals to generate and share their data and model health records easily.
+One of the biggest benefits of using the Azure Machine Learning ecosystem is the ability to archive model and data insights in the Azure Machine Learning Run History for quick reference in the future. As part of this infrastructure, and to complement machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard. This scorecard empowers machine learning professionals to easily generate and share their data and model health records.

 [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]

 ## Who should use a Responsible AI scorecard?

-- If you're a data scientist or a machine learning professional, after training your model and generating its corresponding Responsible AI dashboard(s) for assessment and decision-making purposes, you can extract those learnings via our PDF scorecard and share the report easily with your technical and non-technical stakeholders to build trust and gain their approval for deployment.
-
-- If you're a product manager, business leader, or an accountable stakeholder on an AI product, you can pass your desired model performance and fairness target values such as your target accuracy, target error rate, etc., to your data science team, asking them to generate this scorecard with respect to your identified target values and whether your model meets them. That can provide guidance into whether the model should be deployed or further improved.
+- **Data scientists and machine learning professionals**: After training your model and generating its corresponding Responsible AI dashboard for assessment and decision-making purposes, you can extract those learnings via our PDF scorecard. This allows you to easily share the report with your technical and nontechnical stakeholders, building trust and gaining their approval for deployment.
+- **Product managers, business leaders, and accountable stakeholders on an AI product**: You can provide your desired model performance and fairness target values, such as target accuracy and target error rate, to your data science team. They can then generate the scorecard based on these target values to determine whether the model meets them. This helps guide decisions on whether the model should be deployed or further improved.

 ## Next steps

articles/machine-learning/concept-secure-code-best-practice.md

Lines changed: 2 additions & 2 deletions

@@ -9,15 +9,15 @@ ms.topic: conceptual
 ms.author: larryfr
 author: Blackmist
 ms.reviewer: deeikele
-ms.date: 04/02/2024
+ms.date: 04/01/2025
 ---

 # Best practices for secure code

 In Azure Machine Learning, you can upload files and content from any source into Azure. Content within Jupyter notebooks or scripts that you load can potentially read data from your sessions, access sensitive data within your organization in Azure, or run malicious processes on your behalf.

 > [!IMPORTANT]
-> Only run notebooks or scripts from trusted sources. For example, where you or your security team have reviewed the notebook or script.
+> Only run notebooks or scripts from trusted sources. For example, where you or your security team reviewed the notebook or script.

 ## Potential threats
