Commit 8cc51a6

Eval: Add preview to features
1 parent 820dd86 commit 8cc51a6

2 files changed: +7 -7 lines changed


articles/ai-studio/how-to/develop/evaluate-sdk.md

Lines changed: 4 additions & 4 deletions
@@ -329,7 +329,7 @@ For conversation outputs, per-turn results are stored in a list and the overall

 > [!NOTE]
 > We strongly recommend that users migrate their code to use the key without prefixes (for example, `groundedness.groundedness`) to allow your code to support more evaluator models.

-### Risk and safety evaluators
+### Risk and safety evaluators (preview)

 When you use AI-assisted risk and safety metrics, a GPT model isn't required. Instead of `model_config`, provide your `azure_ai_project` information. This accesses the Azure AI project safety evaluations back-end service, which provisions a GPT model specific to harms evaluation that can generate content risk severity scores and reasoning to enable the safety evaluators.

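For illustration, a minimal sketch of the pattern the hunk above describes: a safety evaluator is configured with `azure_ai_project` details instead of a `model_config`. The project values are placeholders, and `ViolenceEvaluator` stands in for any of the `azure-ai-evaluation` safety evaluators:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.evaluation import ViolenceEvaluator

# Placeholder Azure AI project details; no model_config is needed, because
# safety evaluators call the project's safety evaluations back-end service.
azure_ai_project = {
    "subscription_id": "<subscription-id>",
    "resource_group_name": "<resource-group>",
    "project_name": "<project-name>",
}

violence_eval = ViolenceEvaluator(
    credential=DefaultAzureCredential(),
    azure_ai_project=azure_ai_project,
)

# Returns a severity label, score, and reasoning for the response.
result = violence_eval(
    query="What is the capital of France?",
    response="Paris.",
)
print(result)
```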
@@ -738,13 +738,13 @@ result = evaluate(

 ```

-## Cloud evaluation on test datasets
+## Cloud evaluation (preview) on test datasets

 After local evaluations of your generative AI applications, you might want to run evaluations in the cloud for pre-deployment testing, and [continuously evaluate](https://aka.ms/GenAIMonitoringDoc) your applications for post-deployment monitoring. Azure AI Projects SDK offers such capabilities via a Python API and supports almost all of the features available in local evaluations. Follow the steps below to submit your evaluation to the cloud on your data using built-in or custom evaluators.

 ### Prerequisites

-- Azure AI project in the same [regions](#region-support) as risk and safety evaluators. If you don't have an existing project, follow the guide [How to create Azure AI project](../create-projects.md?tabs=ai-studio) to create one.
+- Azure AI project in the same [regions](#region-support) as risk and safety evaluators (preview). If you don't have an existing project, follow the guide [How to create Azure AI project](../create-projects.md?tabs=ai-studio) to create one.

 > [!NOTE]
 > Cloud evaluations do not support `ContentSafetyEvaluator` and `QAEvaluator`.
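As a hedged sketch of the setup these cloud-evaluation steps assume: connect to the Azure AI project with `AIProjectClient` from the `azure-ai-projects` package the section references, then upload the test dataset. The connection string and file name are placeholders:

```python
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# Connect to the Azure AI project (connection string is a placeholder).
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["AZURE_AI_PROJECT_CONNECTION_STRING"],
)

# Upload a test dataset in JSONL format; the returned id is referenced
# when submitting the evaluation.
data_id, _ = project_client.upload_file("./evaluate_test_data.jsonl")
```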
@@ -919,7 +919,7 @@ print("Versioned evaluator id:", registered_evaluator.id)

 After logging your custom evaluator to your Azure AI project, you can view it in your [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) under the **Evaluation** tab of your Azure AI project.

-### Cloud evaluation with Azure AI Projects SDK
+### Cloud evaluation (preview) with Azure AI Projects SDK

 You can submit a cloud evaluation with Azure AI Projects SDK via a Python API. See the following example to submit a cloud evaluation of your dataset using an NLP evaluator (F1 score), an AI-assisted quality evaluator (Relevance), a safety evaluator (Violence), and a custom evaluator. Putting it all together:

articles/ai-studio/how-to/develop/simulator-interaction-data.md

Lines changed: 3 additions & 3 deletions
@@ -15,7 +15,7 @@ ms.author: lagayhar

 author: lgayhardt
 ---

-# Generate synthetic and simulated data for evaluation
+# Generate synthetic and simulated data for evaluation (preview)

 [!INCLUDE [feature-preview](../../includes/feature-preview.md)]

@@ -28,15 +28,15 @@ In this article, you'll learn how to holistically generate high-quality datasets

 ## Getting started

-First install and import the simulator package from the Azure AI Evaluation SDK:
+First install and import the simulator package (preview) from the Azure AI Evaluation SDK:

 ```python
 pip install azure-ai-evaluation
 ```

 ## Generate synthetic data and simulate non-adversarial tasks

-Azure AI Evaluation SDK's `Simulator` provides an end-to-end synthetic data generation capability to help developers test their application's response to typical user queries in the absence of production data. AI developers can use an index or text-based query generator and fully customizable simulator to create robust test datasets around non-adversarial tasks specific to their application. The `Simulator` class is a powerful tool designed to generate synthetic conversations and simulate task-based interactions. This capability is useful for:
+Azure AI Evaluation SDK's `Simulator` (preview) provides an end-to-end synthetic data generation capability to help developers test their application's response to typical user queries in the absence of production data. AI developers can use an index or text-based query generator and fully customizable simulator to create robust test datasets around non-adversarial tasks specific to their application. The `Simulator` class is a powerful tool designed to generate synthetic conversations and simulate task-based interactions. This capability is useful for:

 - **Testing Conversational Applications**: Ensure your chatbots and virtual assistants respond accurately under various scenarios.
 - **Training AI Models**: Generate diverse datasets to train and fine-tune machine learning models.
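For context on the `Simulator` hunk above, a minimal non-adversarial simulation might look like this sketch. The callback is a stand-in for your application, and the `model_config` values and source text are placeholders, not part of this diff:

```python
import asyncio

from azure.ai.evaluation.simulator import Simulator

# Placeholder model configuration for the query/response generator.
model_config = {
    "azure_endpoint": "<endpoint>",
    "azure_deployment": "<deployment>",
}

# Stand-in for your application: appends a canned assistant reply per turn.
async def callback(messages, stream=False, session_state=None, context=None):
    last_user_message = messages["messages"][-1]["content"]
    messages["messages"].append(
        {"role": "assistant", "content": f"You asked: {last_user_message}"}
    )
    return {
        "messages": messages["messages"],
        "stream": stream,
        "session_state": session_state,
        "context": context,
    }

async def main():
    simulator = Simulator(model_config=model_config)
    # Generate queries from source text, then simulate short conversations
    # between the simulated user and the callback application.
    outputs = await simulator(
        target=callback,
        text="<source text to generate queries from>",
        num_queries=2,
        max_conversation_turns=2,
    )
    print(outputs)

asyncio.run(main())
```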
