
Commit 062475d

Merge pull request #3852 from MicrosoftDocs/main
4/1/2025 AM Publish
2 parents 5346679 + e9b4143 commit 062475d

File tree

7 files changed: +26 -27 lines changed

articles/ai-foundry/how-to/flow-tune-prompts-using-variants.md

Lines changed: 4 additions & 4 deletions
@@ -8,7 +8,7 @@ ms.custom:
 - ignite-2023
 - build-2024
 ms.topic: how-to
-ms.date: 5/21/2024
+ms.date: 3/31/2025
 ms.reviewer: none
 ms.author: lagayhar
 author: lgayhardt
@@ -53,7 +53,7 @@ Benefits of using variants include:
 In this article, we use **Web Classification** sample flow as example.

 1. Open the sample flow and remove the **prepare_examples** node as a start.
-1. Under *Tools* select **Prompt flow**.
+1. Under *Build and customize* select **Prompt flow**.
 2. Select **Create** to open the flow creation wizard.
 3. In the flow gallery under *Explore gallery* in the "Web Classification" box select **Clone**.
 4. In the flow tab, delete the **prepare_examples** node.
@@ -67,7 +67,7 @@ The classification will be based on the url, the webpage text content summary, o
 For a given URL : {{url}}, and text content: {{text_content}}.
 Classify above url to complete the category and indicate evidence.

-The output shoule be in this format: {"category": "App", "evidence": "Both"}
+The output should be in this format: {"category": "App", "evidence": "Both"}
 OUTPUT:
 ```

@@ -147,7 +147,7 @@ When you run the variants with a few single pieces of data and check the results
 You can submit a batch run, which allows you test the variants with a large amount of data and evaluate them with metrics, to help you find the best fit.

 1. First you need to prepare a dataset, which is representative enough of the real-world problem you want to solve with Prompt flow. In this example, it's a list of URLs and their classification ground truth. We use accuracy to evaluate the performance of variants.
-2. Select **Evaluate** on the top right of the page.
+2. Select **Evaluate** on the top right of the page then select **Custom Evaluation**.
 3. A wizard for **Batch run & Evaluate** occurs. The first step is to select a node to run all its variants.

 To test how different variants work for each node in a flow, you need to run a batch run for each node with variants one by one. This helps you avoid the influence of other nodes' variants and focus on the results of this node's variants. This follows the rule of the controlled experiment, which means that you only change one thing at a time and keep everything else the same.
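
The batch run above scores each variant's predictions against classification ground truth using accuracy. Stripped of the Prompt flow tooling, that metric is simply the fraction of URLs whose predicted category matches the label. A minimal sketch, with invented example data for illustration:

```python
def accuracy(predictions: list[str], ground_truth: list[str]) -> float:
    """Fraction of predictions that match the ground-truth category."""
    if len(predictions) != len(ground_truth):
        raise ValueError("prediction and ground-truth lists must align")
    matches = sum(p == g for p, g in zip(predictions, ground_truth))
    return matches / len(ground_truth)

# Hypothetical variant outputs vs. labels for four URLs.
labels    = ["App", "News", "Academic", "App"]
variant_0 = ["App", "News", "App", "App"]       # 3 of 4 correct
variant_1 = ["App", "News", "Academic", "App"]  # 4 of 4 correct

print(accuracy(variant_0, labels))  # 0.75
print(accuracy(variant_1, labels))  # 1.0
```

Computing the metric per node variant, while holding all other nodes fixed, is what makes the comparison a controlled experiment.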

articles/ai-foundry/model-inference/quotas-limits.md

Lines changed: 4 additions & 4 deletions
@@ -32,11 +32,11 @@ Azure uses quotas and limits to prevent budget overruns due to fraud, and to hon
 | -------------------- | ------------------- | ----------- |
 | Tokens per minute | Azure OpenAI models | Varies per model and SKU. See [limits for Azure OpenAI](../../ai-services/openai/quotas-limits.md). |
 | Requests per minute | Azure OpenAI models | Varies per model and SKU. See [limits for Azure OpenAI](../../ai-services/openai/quotas-limits.md). |
-| Tokens per minute | DeepSeek-R1 | 5.000.000 |
-| Requests per minute | DeepSeek-R1 | 5.000 |
+| Tokens per minute | DeepSeek-R1 | 5,000,000 |
+| Requests per minute | DeepSeek-R1 | 5,000 |
 | Concurrent requests | DeepSeek-R1 | 300 |
-| Tokens per minute | Rest of models | 200.000 |
-| Requests per minute | Rest of models | 1.000 |
+| Tokens per minute | Rest of models | 400,000 |
+| Requests per minute | Rest of models | 1,000 |
 | Concurrent requests | Rest of models | 300 |

 You can [request increases to the default limits](#request-increases-to-the-default-limits). Due to high demand, limit increase requests can be submitted and evaluated per request.
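
A client that wants to stay under both the tokens-per-minute and requests-per-minute caps from the corrected table can gate submissions with a simple pre-flight check. A sketch using the corrected DeepSeek-R1 values; the `RateBudget` class is illustrative only, not part of any Azure SDK:

```python
from dataclasses import dataclass

@dataclass
class RateBudget:
    """Tracks per-minute usage against fixed caps (values from the table)."""
    tokens_per_minute: int = 5_000_000   # DeepSeek-R1 tokens per minute
    requests_per_minute: int = 5_000     # DeepSeek-R1 requests per minute
    tokens_used: int = 0
    requests_used: int = 0

    def can_submit(self, tokens: int) -> bool:
        # A request fits only if both remaining budgets allow it.
        return (self.tokens_used + tokens <= self.tokens_per_minute
                and self.requests_used + 1 <= self.requests_per_minute)

    def record(self, tokens: int) -> None:
        self.tokens_used += tokens
        self.requests_used += 1

budget = RateBudget()
budget.record(4_999_000)
print(budget.can_submit(500))    # True: 4,999,500 <= 5,000,000
print(budget.can_submit(2_000))  # False: would exceed the token cap
```

A real client would also reset the counters each minute and handle 429 responses from the service, which remain authoritative.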

articles/ai-services/openai/how-to/stored-completions.md

Lines changed: 2 additions & 2 deletions
@@ -115,7 +115,7 @@ curl $AZURE_OPENAI_ENDPOINT/openai/deployments/gpt-4o/chat/completions?api-versi
 -H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN" \
 -d '{
 "model": "gpt-4o",
-"store": True,
+"store": true,
 "messages": [
 {
 "role": "system",
@@ -137,7 +137,7 @@ curl $AZURE_OPENAI_ENDPOINT/openai/deployments/gpt-4o/chat/completions?api-versi
 -H "api-key: $AZURE_OPENAI_API_KEY" \
 -d '{
 "model": "gpt-4o",
-"store": True,
+"store": true,
 "messages": [
 {
 "role": "system",
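
Both fixes above replace Python's capitalized `True` with JSON's lowercase `true` in the hand-written curl payload. One way to avoid this class of bug is to build the body as a native dict and let a serializer emit the JSON. A minimal sketch, assuming a message body modeled loosely on the example above (the system-message content is a placeholder):

```python
import json

# Build the request body as a Python dict; json.dumps converts
# Python's True to JSON's lowercase true automatically.
body = {
    "model": "gpt-4o",
    "store": True,
    "messages": [{"role": "system", "content": "You are a helpful assistant."}],
}

payload = json.dumps(body)
print(payload)  # the serialized JSON contains "store": true

# Pasting Python literals into a raw JSON string is exactly what the
# diff fixes: a strict JSON parser rejects capitalized True.
try:
    json.loads('{"store": True}')
except json.JSONDecodeError:
    print("capitalized True is not valid JSON")
```

The same applies to `False`/`false` and `None`/`null` when hand-writing request bodies.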

articles/machine-learning/concept-responsible-ai-scorecard.md

Lines changed: 8 additions & 9 deletions
@@ -1,35 +1,34 @@
 ---
 title: Share Responsible AI insights and make data-driven decisions with Azure Machine Learning Responsible AI scorecard
 titleSuffix: Azure Machine Learning
-description: Learn about how to use the Responsible AI scorecard to share responsible AI insights from your machine learning models and make data-driven decisions with non-technical and technical stakeholders.
+description: Learn about how to use the Responsible AI scorecard to share responsible AI insights from your machine learning models and make data-driven decisions with nontechnical and technical stakeholders.
 services: machine-learning
 ms.service: azure-machine-learning
 ms.subservice: responsible-ai
 ms.topic: conceptual
 ms.author: lagayhar
 author: lgayhardt
 ms.reviewer: mesameki
-ms.date: 02/27/2024
+ms.date: 03/31/2025
 ms.custom: responsible-ml, build-2023, build-2023-dataai
 ---

 # Share Responsible AI insights using the Responsible AI scorecard (preview)

 Our Responsible AI dashboard is designed for machine learning professionals and data scientists to explore and evaluate model insights and inform their data-driven decisions. While it can help you implement Responsible AI practically in your machine learning lifecycle, there are some needs left unaddressed:

-- There often exists a gap between the technical Responsible AI tools (designed for machine-learning professionals) and the ethical, regulatory, and business requirements that define the production environment.
-- While an end-to-end machine learning life cycle includes both technical and non-technical stakeholders in the loop, there's little support to enable an effective multi-stakeholder alignment, helping technical experts get timely feedback and direction from the non-technical stakeholders.
-- AI regulations make it essential to be able to share model and data insights with auditors and risk officers for auditability purposes.
+- The gap between the technical Responsible AI tools (designed for machine learning professionals) and the ethical, regulatory, and business requirements that define the production environment.
+- The need for effective multi-stakeholder alignment in an end-to-end machine learning lifecycle, ensuring technical experts receive timely feedback and direction from nontechnical stakeholders.
+- The ability to share model and data insights with auditors and risk officers for auditability purposes, as required by AI regulations.

-One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the archival of model and data insights in the Azure Machine Learning Run History (for quick reference in future). As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard to empower ML professionals to generate and share their data and model health records easily.
+One of the biggest benefits of using the Azure Machine Learning ecosystem is the ability to archive model and data insights in the Azure Machine Learning Run History for quick reference in the future. As part of this infrastructure, and to complement machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard. This scorecard empowers machine learning professionals to easily generate and share their data and model health records.

 [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]

 ## Who should use a Responsible AI scorecard?

-- If you're a data scientist or a machine learning professional, after training your model and generating its corresponding Responsible AI dashboard(s) for assessment and decision-making purposes, you can extract those learnings via our PDF scorecard and share the report easily with your technical and non-technical stakeholders to build trust and gain their approval for deployment.
-
-- If you're a product manager, business leader, or an accountable stakeholder on an AI product, you can pass your desired model performance and fairness target values such as your target accuracy, target error rate, etc., to your data science team, asking them to generate this scorecard with respect to your identified target values and whether your model meets them. That can provide guidance into whether the model should be deployed or further improved.
+- **Data scientists and machine learning professionals**: After training your model and generating its corresponding Responsible AI dashboard for assessment and decision-making purposes, you can extract those learnings via our PDF scorecard. This allows you to easily share the report with your technical and nontechnical stakeholders, building trust and gaining their approval for deployment.
+- **Product managers, business leaders, and accountable stakeholders on an AI product**: You can provide your desired model performance and fairness target values, such as target accuracy and target error rate, to your data science team. They can then generate the scorecard based on these target values to determine whether the model meets them. This helps guide decisions on whether the model should be deployed or further improved.

 ## Next steps

articles/machine-learning/concept-secure-code-best-practice.md

Lines changed: 2 additions & 2 deletions
@@ -9,15 +9,15 @@ ms.topic: conceptual
 ms.author: larryfr
 author: Blackmist
 ms.reviewer: deeikele
-ms.date: 04/02/2024
+ms.date: 04/01/2025
 ---

 # Best practices for secure code

 In Azure Machine Learning, you can upload files and content from any source into Azure. Content within Jupyter notebooks or scripts that you load can potentially read data from your sessions, access sensitive data within your organization in Azure, or run malicious processes on your behalf.

 > [!IMPORTANT]
-> Only run notebooks or scripts from trusted sources. For example, where you or your security team have reviewed the notebook or script.
+> Only run notebooks or scripts from trusted sources. For example, where you or your security team reviewed the notebook or script.

 ## Potential threats

articles/machine-learning/how-to-troubleshoot-managed-network.md

Lines changed: 3 additions & 3 deletions
@@ -9,7 +9,7 @@ ms.reviewer: None
 ms.author: larryfr
 author: Blackmist
 ms.topic: troubleshooting
-ms.date: 04/30/2024
+ms.date: 04/01/2025
 ms.custom: build-2023
 ---

@@ -29,11 +29,11 @@ To use an Azure Virtual Network when creating a workspace through the Azure port
 1. In the __Workspace Outbound access__ section, select __Use my own virtual network__.
 1. Continue to create the workspace as normal.

-## Does not have authorization to perform action 'Microsoft.MachineLearningServices<br/> /workspaces/privateEndpointConnections/read'
+## Doesn't have authorization to perform action 'Microsoft.MachineLearningServices<br/> /workspaces/privateEndpointConnections/read'

 When you create a managed virtual network, the operation can fail with an error similar to the following text:

-"The client '\<GUID\>' with object id '\<GUID\>' doesn't have authorization to perform action 'Microsoft.MachineLearningServices/workspaces/privateEndpointConnections/read' over scope '/subscriptions/\<GUID\>/resourceGroups/\<resource-group-name\>/providers/Microsoft.MachineLearningServices/workspaces/\<workspace-name\>' or the scope is invalid."
+"The client '\<GUID\>' with object ID '\<GUID\>' doesn't have authorization to perform action 'Microsoft.MachineLearningServices/workspaces/privateEndpointConnections/read' over scope '/subscriptions/\<GUID\>/resourceGroups/\<resource-group-name\>/providers/Microsoft.MachineLearningServices/workspaces/\<workspace-name\>' or the scope is invalid."

 This error occurs when the Azure identity used to create the managed virtual network doesn't have the following Azure role-based access control permissions:

articles/machine-learning/v1/how-to-debug-pipelines.md

Lines changed: 3 additions & 3 deletions
@@ -5,9 +5,9 @@ description: How to troubleshoot when you get errors running a machine learning
 services: machine-learning
 ms.service: azure-machine-learning
 ms.subservice: mlops
-ms.author: zhanxia
-author: xiaoharper
-ms.date: 11/04/2022
+ms.author: lagayhar
+author: lgayhardt
+ms.date: 03/31/2025
 ms.topic: troubleshooting
 ms.custom: UpdateFrequency5, troubleshooting, sdkv1
 #Customer intent: As a data scientist, I want to figure out why my pipeline doesn't run so that I can fix it.
