
Commit 1ba1672

Additional edits.
1 parent 53c7969 commit 1ba1672

File tree

3 files changed, +15 -13 lines changed


articles/ai-foundry/how-to/develop/cloud-evaluation.md

Lines changed: 1 addition & 1 deletion
@@ -90,7 +90,7 @@ To learn more about input data formats for evaluating generative AI applications

 - [Single-turn data](./evaluate-sdk.md#single-turn-support-for-text)
 - [Conversation data](./evaluate-sdk.md#conversation-support-for-text)
-- [Conversation data for images and multi-modalities](./evaluate-sdk.md#conversation-support-for-images-and-multi-modal-text-and-image).
+- [Conversation data for images and multi-modalities](./evaluate-sdk.md#conversation-support-for-images-and-multi-modal-text-and-image)

 To learn more about input data formats for evaluating agents, see [Evaluate Azure AI agents](./agent-evaluate-sdk.md#evaluate-azure-ai-agents) and [Evaluate other agents](./agent-evaluate-sdk.md#evaluating-other-agents).
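For a concrete sense of the single-turn and conversation text formats linked above, here is a minimal Python sketch. The field names (`query`, `response`, `context`, and the `conversation`/`messages` shape) are illustrative assumptions; the linked evaluate-sdk sections define the exact schema each evaluator expects.

```python
# Illustrative sketch only -- confirm exact field names against the evaluate-sdk articles.
import json

# Single-turn data: one query/response pair per JSONL line, optionally with context.
single_turn_rows = [
    {
        "query": "What is the capital of France?",
        "response": "Paris is the capital of France.",
        "context": "France is a country in Western Europe. Its capital is Paris.",
    }
]

with open("single_turn_data.jsonl", "w") as f:
    for row in single_turn_rows:
        f.write(json.dumps(row) + "\n")

# Conversation data: a full message history under a single "conversation" key.
conversation_row = {
    "conversation": {
        "messages": [
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "Paris is the capital of France."},
        ]
    }
}
```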

articles/ai-foundry/how-to/flow-deploy.md

Lines changed: 6 additions & 6 deletions
@@ -10,7 +10,7 @@ ms.custom:
 - ignite-2024
 - hub-only
 ms.topic: how-to
-ms.date: 10/15/2025
+ms.date: 10/18/2025
 ms.reviewer: none
 ms.author: lagayhar
 author: lgayhardt
@@ -22,7 +22,7 @@ ms.update-cycle: 180-days

 [!INCLUDE [feature-preview](../includes/feature-preview.md)]

-After you build a prompt flow and test it properly, you can deploy it as an online endpoint. Deployments are hosted in an endpoint. They can receive data from clients and send responses in real time.
+After you build a prompt flow and test it, you can deploy it as an online endpoint. Deployments are hosted in an endpoint. They can receive data from clients and send responses in real time.

 You can invoke the endpoint for real-time inference for chat, a copilot, or another generative AI application. Prompt flows support endpoint deployment from a flow or a bulk test run.

@@ -40,9 +40,9 @@ In this article, you learn how to deploy a flow as a managed online endpoint for

 To deploy a prompt flow as an online endpoint, you need:

-- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/purchase-options/azure-account?cid=msft_learn) before you begin.
+- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/purchase-options/azure-account?cid=msft_learn).
 - An Azure AI Foundry project.
-- A `Microsoft.PolicyInsights` resource provider registered in your subscription. For more information on how to register a resource provider, see [Register a resource provider](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider-1).
+- A `Microsoft.PolicyInsights` resource provider registered in your subscription. For more information, see [Register a resource provider](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider-1).

 ## Create an online deployment
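For reference, the `Microsoft.PolicyInsights` resource provider can also be registered from Python. The following is a minimal sketch, assuming the `azure-mgmt-resource` and `azure-identity` packages and a placeholder subscription ID; the linked article covers the portal and CLI routes.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"  # placeholder

# Connect to Azure Resource Manager with your default credentials.
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Start registration of the provider and report its current state.
provider = client.providers.register("Microsoft.PolicyInsights")
print(provider.registration_state)  # for example, "Registering" or "Registered"
```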

@@ -85,7 +85,7 @@ For information about how to deploy a base model, see [Deploy models with Azure

 ### Requirements text file

-Optionally, you can specify extra packages that you need in `requirements.txt`. You can find `requirements.txt` in the root folder of your flow folder. When you deploy a prompt flow to a managed online endpoint in the UI, by default, the deployment uses the environment that was created based on the base image specified in `flow.dag.yaml` and the dependencies specified in `requirements.txt` of the flow.
+Optionally, you can specify extra packages that you need in `requirements.txt`. You can find `requirements.txt` in the root folder of your flow folder. When you deploy a prompt flow to a managed online endpoint in the UI, by default, the deployment uses the environment that was created based on the base image specified in `flow.dag.yaml` and the dependencies specified in `requirements.txt`.

 The base image specified in `flow.dag.yaml` is created based on the prompt flow base image `mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable:<newest_version>`. To see the latest version, see [this list](https://mcr.microsoft.com/v2/azureml/promptflow/promptflow-runtime-stable/tags/list). If you don't specify the base image in `flow.dag.yaml`, the deployment uses the default base image `mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable:latest`.

@@ -123,7 +123,7 @@ This setting identifies the authentication method for the endpoint. Key-based au

 The endpoint needs to access Azure resources for inferencing, such as Azure Container Registry or your Azure AI Foundry hub connections. You can allow the endpoint permission to access Azure resources by giving permission to its managed identity.

-System-assigned identity is automatically created after your endpoint is created. The user creates the user-assigned identity. For more information, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview).
+System-assigned identity is created after your endpoint is created. The user creates the user-assigned identity. For more information, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview).

 ##### System assigned
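To make the authentication and identity settings concrete, here is a rough Python SDK sketch of creating an endpoint with key-based authentication and a system-assigned identity. It assumes the `azure-ai-ml` and `azure-identity` packages and placeholder project details; the portal steps in the article achieve the same result.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, IdentityConfiguration

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<your-subscription-id>",     # placeholder
    resource_group_name="<your-resource-group>",  # placeholder
    workspace_name="<your-project-name>",         # placeholder
)

endpoint = ManagedOnlineEndpoint(
    name="my-flow-endpoint",                      # placeholder endpoint name
    auth_mode="key",                              # key-based authentication
    identity=IdentityConfiguration(type="system_assigned"),
)

# Create the endpoint; the system-assigned identity is created along with it.
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```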

articles/ai-foundry/how-to/monitor-quality-safety.md

Lines changed: 8 additions & 6 deletions
@@ -9,7 +9,7 @@ ms.custom:
 - ignite-2024
 - hub-only
 ms.topic: how-to
-ms.date: 10/13/2025
+ms.date: 10/18/2025
 ms.reviewer: alehughes
 reviewer: ahughes-msft
 ms.author: lagayhar
@@ -42,7 +42,7 @@ Integrations for monitoring a prompt flow deployment allow you to:

 [!INCLUDE [hub-only-prereq](../includes/hub-only-prereq.md)]
 - A prompt flow ready for deployment. If you don't have one, see [Develop a prompt flow](flow-develop.md).
-- Azure role-based access controls are used to grant access to operations in the Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the Azure AI Developer role on the resource group. For more information on permissions, see [Role-based access control for Azure AI Foundry](../concepts/rbac-azure-ai-foundry.md).
+- Azure role-based access controls are used to grant access to operations in the Azure AI Foundry portal. For this article, your user account must be assigned the Azure AI Developer role on the resource group. For more information, see [Role-based access control for Azure AI Foundry](../concepts/rbac-azure-ai-foundry.md).

 # [Python SDK](#tab/python)

@@ -56,7 +56,9 @@ pip install -U azure-ai-ml

 ## Requirements for monitoring metrics

-Generative pretrained transformer (GPT) language models generate monitoring metrics that are configured with specific evaluation instructions, or *prompt templates*. These models act as evaluator models for sequence-to-sequence tasks. Use of this technique to generate monitoring metrics shows strong empirical results and high correlation with human judgment when compared to standard generative AI evaluation metrics. For more information about prompt flow evaluation, see [Submit a batch test and evaluate a flow](./flow-bulk-test-evaluation.md) and [Observability in generative AI](../concepts/observability.md).
+Generative pretrained transformer (GPT) language models generate monitoring metrics that are configured with specific evaluation instructions, or *prompt templates*. These models act as evaluator models for sequence-to-sequence tasks.
+
+Using this technique to generate monitoring metrics shows strong empirical results and high correlation with human judgment when compared to standard generative AI evaluation metrics. For more information about prompt flow evaluation, see [Submit a batch test and evaluate a flow](./flow-bulk-test-evaluation.md) and [Observability in generative AI](../concepts/observability.md).

 The following GPT models generate monitoring metrics. These GPT models are supported with monitoring and configured as your Azure OpenAI resource:

@@ -96,11 +98,11 @@ The parameters that are configured in your data asset dictate what metrics you c
 | Groundedness | Required | Required | Required|
 | Relevance | Required | Required | Required|

-For more information on the specific data mapping requirements for each metric, see [Query and response metric requirements](evaluate-generative-ai-app.md#query-and-response-metric-requirements).
+For information on the specific data mapping requirements for each metric, see [Query and response metric requirements](evaluate-generative-ai-app.md#query-and-response-metric-requirements).

 ## Set up monitoring for a prompt flow

-To set up monitoring for your prompt flow application, first deploy your prompt flow application with inferencing data collection. Then you can configure monitoring for the deployed application.
+To set up monitoring for your prompt flow application, deploy your prompt flow application with inferencing data collection. Then configure monitoring for the deployed application.

 ### Deploy your prompt flow application with inferencing data collection
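As a quick illustration of how these requirements combine with the minimum-parameter note later in this diff, the per-metric column needs could be summarized as follows. This is a plain Python summary of the table, not an SDK call, and it assumes the three required columns in the table fragment above are question, answer, and context.

```python
# Illustrative summary only: which flow columns each monitoring metric needs.
required_columns = {
    "coherence":    ["question", "answer"],             # minimum parameters
    "fluency":      ["question", "answer"],             # minimum parameters
    "groundedness": ["question", "answer", "context"],  # per the table above
    "relevance":    ["question", "answer", "context"],  # per the table above
}
```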

@@ -119,7 +121,7 @@ In this section, you learn how to deploy your prompt flow with inferencing data

 1. Confirm that your flow runs successfully and that the required inputs and outputs are configured for the [metrics that you want to assess](#supported-metrics-for-monitoring).

-The minimum required parameters are question/inputs and answer/outputs. Supplying the minimum parameters provides only two metrics: _coherence_ and _fluency_. You must configure your flow as described in [Requirements for monitoring metrics](#requirements-for-monitoring-metrics). This example uses `question` (Question) and `chat_history` (Context) as the flow inputs, and `answer` (Answer) as the flow output.
+The minimum required parameters are question/inputs and answer/outputs. Supplying the minimum parameters provides only two metrics: _coherence_ and _fluency_. Configure your flow as described in [Requirements for monitoring metrics](#requirements-for-monitoring-metrics). This example uses `question` (Question) and `chat_history` (Context) as the flow inputs, and `answer` (Answer) as the flow output.

 1. Select **Deploy** to begin deploying your flow.
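After the flow is deployed, a quick smoke test of the endpoint might look like the following rough sketch. It assumes the `azure-ai-ml` and `azure-identity` packages, placeholder project and endpoint names, and the `question`/`chat_history` inputs used in this example; adjust the request body to match your own flow inputs.

```python
import json
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<your-subscription-id>",     # placeholder
    resource_group_name="<your-resource-group>",  # placeholder
    workspace_name="<your-project-name>",         # placeholder
)

# The flow in this example takes question and chat_history as inputs.
with open("sample_request.json", "w") as f:
    json.dump({"question": "How do I enable monitoring?", "chat_history": []}, f)

response = ml_client.online_endpoints.invoke(
    endpoint_name="my-flow-endpoint",             # placeholder endpoint name
    request_file="sample_request.json",
)
print(response)  # raw response payload; the flow's answer output is inside it
```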
