
Commit 12d87c4

break local and cloud eval into two docs
1 parent 1dade04 commit 12d87c4

File tree

2 files changed: +4 −4 lines changed


articles/ai-studio/how-to/develop/cloud-evaluation.md

Lines changed: 3 additions & 3 deletions
@@ -14,18 +14,18 @@ ms.reviewer: changliu2
 ms.author: lagayhar
 author: lgayhardt
 ---
-# Cloud Evaluation: Evaluate your Generative AI application remotely on the cloud
+# Cloud evaluation (Preview): evaluate your Generative AI application remotely on the cloud
 
 [!INCLUDE [feature-preview](../../includes/feature-preview.md)]
 
-While Azure AI Evaluation client SDK supports running evaluations locally on your own machine, you may want to delegate the job remotely to the cloud. For example, after you ran local evaluations on small test data to help assess your generative AI application prototypes, now you move into pre-deployment testing and need run evaluations on a large dataset. Cloud evaluation frees you from managing your local compute infrastructure, and enables you integrate evaluations as tests into your CI/CD pipelines. After deployment, you may want to [continuously evaluate](https://aka.ms/GenAIMonitoringDoc) your applications for post-deployment monitoring.
+While Azure AI Evaluation SDK client supports running evaluations locally on your own machine, you may want to delegate the job remotely to the cloud. For example, after you ran local evaluations on small test data to help assess your generative AI application prototypes, now you move into pre-deployment testing and need run evaluations on a large dataset. Cloud evaluation frees you from managing your local compute infrastructure, and enables you integrate evaluations as tests into your CI/CD pipelines. After deployment, you may want to [continuously evaluate](https://aka.ms/GenAIMonitoringDoc) your applications for post-deployment monitoring.
 
 In this article, you learn how to run cloud evaluation in pre-deployment testing on a test dataset. Using the Azure AI Projects SDK, you will have evaluation results automatically logged into your Azure AI project for better observability. This feature support all Microsft-curated [built-in evaluators](./evaluate-sdk.md#built-in-evaluators) and your own [custom evaluators](./evaluate-sdk.md#custom-evaluators) which can be located in the [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) of your project.
 
 
 ### Prerequisites
 
-- Azure AI project in the same [regions](#region-support) as risk and safety evaluators (preview). If you don't have an existing project, follow the guide [How to create Azure AI project](../create-projects.md?tabs=ai-studio) to create one.
+- Azure AI project in the same [regions](./evaluate-sdk.md#region-support) as risk and safety evaluators (preview). If you don't have an existing project, follow the guide [How to create Azure AI project](../create-projects.md?tabs=ai-studio) to create one.
 
 - Azure OpenAI Deployment with GPT model supporting `chat completion`, for example `gpt-4`.
 - `Connection String` for Azure AI project to easily create `AIProjectClient` object. You can get the **Project connection string** under **Project details** from the project's **Overview** page.
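The `Connection String` prerequisite in the diff above is the project connection string copied from the project's **Overview** page. As a rough illustration only: the four-part `<host>;<subscription-id>;<resource-group>;<project-name>` layout is an assumption based on the docs, and `parse_project_conn_str` is a hypothetical helper, not part of the SDK (the real entry point is `AIProjectClient.from_connection_string`):

```python
# Hypothetical helper, NOT part of azure-ai-projects: illustrates the assumed
# "<host>;<subscription-id>;<resource-group>;<project-name>" layout of the
# Project connection string that AIProjectClient.from_connection_string consumes.
def parse_project_conn_str(conn_str: str) -> dict:
    host, subscription_id, resource_group, project_name = conn_str.split(";")
    return {
        "host": host,
        "subscription_id": subscription_id,
        "resource_group": resource_group,
        "project_name": project_name,
    }

# Placeholder values for illustration only.
parts = parse_project_conn_str(
    "eastus2.api.azureml.ms;00000000-0000-0000-0000-000000000000;my-rg;my-project"
)
```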

articles/ai-studio/how-to/develop/evaluate-sdk.md

Lines changed: 1 addition & 1 deletion
@@ -328,7 +328,7 @@ For conversation outputs, per-turn results are stored in a list and the overall
 > [!NOTE]
 > We strongly recommend users to migrate their code to use the key without prefixes (for example, `groundedness.groundedness`) to allow your code to support more evaluator models.
 
-### Risk and safety evaluators (preview)
+### Risk and safety evaluators (Preview)
 
 When you use AI-assisted risk and safety metrics, a GPT model isn't required. Instead of `model_config`, provide your `azure_ai_project` information. This accesses the Azure AI project safety evaluations back-end service, which provisions a GPT model specific to harms evaluation that can generate content risk severity scores and reasoning to enable the safety evaluators.
 
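For the `azure_ai_project` scope mentioned in the hunk above, a minimal sketch of the shape described in the preview docs (the key names are an assumption from the azure-ai-evaluation preview documentation and may change; values are placeholders):

```python
# Assumed shape of the azure_ai_project scope that risk and safety evaluators
# take in place of model_config (key names per the preview docs; values here
# are placeholders, not a real project).
azure_ai_project = {
    "subscription_id": "00000000-0000-0000-0000-000000000000",
    "resource_group_name": "my-rg",
    "project_name": "my-project",
}

# A safety evaluator would then be constructed roughly as:
#   ViolenceEvaluator(credential=DefaultAzureCredential(),
#                     azure_ai_project=azure_ai_project)
# (left commented out: it requires the azure-ai-evaluation preview package
# and an Azure AI project in a supported region)
required_keys = {"subscription_id", "resource_group_name", "project_name"}
```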
0 commit comments