
Commit 53c7969

Freshness.
1 parent 36b0738 commit 53c7969


articles/ai-foundry/how-to/develop/cloud-evaluation.md

Lines changed: 21 additions & 15 deletions
@@ -1,13 +1,13 @@
 ---
 title: Cloud Evaluation with the Azure AI Foundry SDK
 titleSuffix: Azure AI Foundry
-description: This article provides instructions on how to evaluate a generative AI application in the cloud.
+description: The Azure AI Evaluation SDK supports running evaluations locally or in the cloud. Learn how to evaluate a generative AI application.
 ms.service: azure-ai-foundry
 ms.custom:
 - references_regions
 - ignite-2024
 ms.topic: how-to
-ms.date: 05/19/2025
+ms.date: 10/18/2025
 ms.reviewer: changliu2
 ms.author: lagayhar
 author: lgayhardt
@@ -17,20 +17,22 @@ author: lgayhardt

 [!INCLUDE [feature-preview](../../includes/feature-preview.md)]

-The Azure AI Evaluation SDK supports running evaluations locally on your own machine and in the cloud. For example, after you run local evaluations on small test data to help assess your generative AI application prototypes, you can move into pre-deployment testing and run evaluations on a large dataset. Evaluating your applications in the cloud frees you from managing your local compute infrastructure. It also enables you to integrate evaluations as tests into your continuous integration and continuous delivery (CI/CD) pipelines. After deployment, you can choose to [continuously monitor](../monitor-applications.md) your applications for post-deployment monitoring.
+In this article, you learn how to run evaluations in the cloud (preview) in pre-deployment testing on a test dataset. The Azure AI Evaluation SDK supports running evaluations locally on your machine and in the cloud. For example, you can run local evaluations on small test data to assess your generative AI application prototypes. Then move into pre-deployment testing and run evaluations on a large dataset.

-In this article, you learn how to run evaluations in the cloud (preview) in pre-deployment testing on a test dataset. When you use the Azure AI Projects SDK, evaluation results are automatically logged into your Azure AI project for better observability. This feature supports all Microsoft-curated [built-in evaluators](../../concepts/observability.md#what-are-evaluators) and your own [custom evaluators](../../concepts/evaluation-evaluators/custom-evaluators.md). Your evaluators can be located in the [evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) and have the same project-scope role-based access control (RBAC).
+Evaluating your applications in the cloud frees you from managing your local compute infrastructure. You can also integrate evaluations as tests into your continuous integration and continuous delivery pipelines. After deployment, you can [continuously monitor](../monitor-applications.md) your applications for post-deployment monitoring.
+
+When you use the Azure AI Projects SDK, it logs evaluation results in your Azure AI project for better observability. This feature supports all Microsoft-curated [built-in evaluators](../../concepts/observability.md#what-are-evaluators) and your own [custom evaluators](../../concepts/evaluation-evaluators/custom-evaluators.md). Your evaluators can be located in the [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) and have the same project-scope role-based access control.

 ## Prerequisites

-- Azure AI Foundry project in the same supported [regions](../../concepts/evaluation-evaluators/risk-safety-evaluators.md#azure-ai-foundry-project-configuration-and-region-support) as risk and safety evaluators (preview). If you don't have an existing project, create one. See [Create a project for Azure AI Foundry](../create-projects.md?tabs=ai-studio).
+- Azure AI Foundry project in the same supported [regions](../../concepts/evaluation-evaluators/risk-safety-evaluators.md#azure-ai-foundry-project-configuration-and-region-support) as risk and safety evaluators (preview). If you don't have a project, create one. See [Create a project for Azure AI Foundry](../create-projects.md?tabs=ai-studio).
 - Azure OpenAI Deployment with GPT model supporting `chat completion`. For example, `gpt-4`.
-- Make sure you're first logged into your Azure subscription by running `az login`.
+- Make sure you're logged into your Azure subscription by running `az login`.

 [!INCLUDE [evaluation-foundry-project-storage](../../includes/evaluation-foundry-project-storage.md)]

 > [!NOTE]
-> Virtual Network configurations are currently not supported for cloud-based evaluations. Ensure that public network access is enabled for your Azure OpenAI resource.
+> Virtual network configurations are currently not supported for cloud-based evaluations. Enable public network access for your Azure OpenAI resource.

 ## Get started

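Before moving on, you can confirm that the `az login` prerequisite took effect. A minimal sketch, assuming the `azure-identity` package is installed:

```python
# Minimal sketch: verify that DefaultAzureCredential picks up your
# `az login` session. Requesting a token for Azure Resource Manager is
# enough to confirm authentication works end to end.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default")
print("Authenticated; token expires at:", token.expires_on)
```

If this fails, run `az login` again before continuing.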
@@ -41,9 +43,9 @@ In this article, you learn how to run evaluations in the cloud (preview) in pre-
 ```

 > [!NOTE]
-> For more detailed information, see [REST API Reference Documentation](/rest/api/aifoundry/aiprojects/evaluations).
+> For more information, see [REST API Reference Documentation](/rest/api/aifoundry/aiprojects/evaluations).

-2. Set your environment variables for your Azure AI Foundry resources:
+1. Set your environment variables for your Azure AI Foundry resources:

 ```python
 import os
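The step above reads configuration from the environment. A hedged sketch of that pattern; apart from `DATASET_VERSION`, which appears in the article's own listing, the variable names here are illustrative assumptions:

```python
# Sketch of the environment-variable pattern for this step. All names
# except DATASET_VERSION are assumptions for illustration.
import os

project_endpoint = os.environ["PROJECT_ENDPOINT"]           # assumed name
dataset_name = os.environ.get("DATASET_NAME", "eval-data")  # assumed name
dataset_version = os.environ.get("DATASET_VERSION", "1.0")
```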
@@ -59,7 +61,7 @@ In this article, you learn how to run evaluations in the cloud (preview) in pre-
 dataset_version = os.environ.get("DATASET_VERSION", "1.0")
 ```

-3. Now, you can define a client that runs your evaluations in the cloud:
+1. Define a client that runs your evaluations in the cloud:

 ```python
 import os
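For the client that this step introduces, a minimal sketch using the `azure-ai-projects` package; the `PROJECT_ENDPOINT` variable name is an assumption:

```python
# Sketch: create an Azure AI Foundry project client for submitting
# cloud evaluations. PROJECT_ENDPOINT is an assumed variable name.
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)
```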
@@ -84,7 +86,11 @@ data_id = project_client.datasets.upload_file(
 ).id
 ```

-To learn more about input data formats for evaluating generative AI applications, see [Single-turn data](./evaluate-sdk.md#single-turn-support-for-text), [Conversation data](./evaluate-sdk.md#conversation-support-for-text), and [Conversation data for images and multi-modalities](./evaluate-sdk.md#conversation-support-for-images-and-multi-modal-text-and-image).
+To learn more about input data formats for evaluating generative AI applications:
+
+- [Single-turn data](./evaluate-sdk.md#single-turn-support-for-text)
+- [Conversation data](./evaluate-sdk.md#conversation-support-for-text)
+- [Conversation data for images and multi-modalities](./evaluate-sdk.md#conversation-support-for-images-and-multi-modal-text-and-image).

 To learn more about input data formats for evaluating agents, see [Evaluate Azure AI agents](./agent-evaluate-sdk.md#evaluate-azure-ai-agents) and [Evaluate other agents](./agent-evaluate-sdk.md#evaluating-other-agents).

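For the single-turn format linked above, a hedged sketch of preparing an uploadable JSONL test file; the field names follow the linked single-turn documentation, and the rows are made-up examples:

```python
# Sketch: write a small single-turn test dataset as JSONL. Field names
# (query, response, ground_truth) follow the single-turn format linked
# above; the rows themselves are illustrative.
import json

rows = [
    {
        "query": "What is the capital of France?",
        "response": "Paris is the capital of France.",
        "ground_truth": "Paris",
    },
    {
        "query": "Who wrote Hamlet?",
        "response": "Hamlet was written by William Shakespeare.",
        "ground_truth": "William Shakespeare",
    },
]
with open("evaluation_data.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```

A file like this is what `project_client.datasets.upload_file` ingests in the snippet above.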
@@ -193,11 +199,11 @@ versioned_evaluator = ml_client.evaluators.get(evaluator_name, version=1)
 print("Versioned evaluator id:", registered_evaluator.id)
 ```

-After you register your custom evaluator to your Azure AI project, you can view it in your [evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under the **Evaluation** tab in your Azure AI project.
+After you register your custom evaluator, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library). In your Azure AI Foundry project, select **Evaluation**, then select **Evaluator library**.

 ### Prompt-based custom evaluators

-Follow the example to register a custom `FriendlinessEvaluator` built as described in [Prompt-based evaluators](../../concepts/evaluation-evaluators/custom-evaluators.md#prompt-based-evaluators):
+Follow this example to register a custom `FriendlinessEvaluator` built as described in [Prompt-based evaluators](../../concepts/evaluation-evaluators/custom-evaluators.md#prompt-based-evaluators):

 ```python
 # Import your prompt-based custom evaluator.
@@ -241,11 +247,11 @@ versioned_evaluator = ml_client.evaluators.get(evaluator_name, version=1)
 print("Versioned evaluator id:", registered_evaluator.id)
 ```

-After you log your custom evaluator to your Azure AI project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under the **Evaluation** tab of your Azure AI project.
+After you register your custom evaluator, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library). In your Azure AI Foundry project, select **Evaluation**, then select **Evaluator library**.

 ### Troubleshooting: Job Stuck in Running State

-Your evaluation job might remain in the **Running** state for an extended period when using Azure AI Foundry Project or Hub. This problem might be due to the Azure OpenAI model you selected doesn't have enough capacity.
+Your evaluation job might remain in the **Running** state for an extended period when using Azure AI Foundry Project or Hub. The Azure OpenAI model you selected might not have enough capacity.

 **Resolution**

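While a job sits in **Running**, you can poll its status so a capacity-starved run is noticed early. A hedged sketch; the `evaluations.get` call, its `name` parameter, and the status strings are assumptions modeled on the evaluations REST reference linked earlier:

```python
# Sketch: poll an evaluation until it leaves the in-progress states.
# `project_client` is the client defined earlier; `evaluations.get`,
# `name`, and the status strings are assumptions based on the REST
# reference for evaluations.
import time

evaluation_name = "my-evaluation"  # assumed identifier from your create call

while True:
    evaluation = project_client.evaluations.get(name=evaluation_name)
    print("Status:", evaluation.status)
    if evaluation.status not in ("Queued", "Running"):
        break
    time.sleep(30)
```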