
Commit 695989f

Update cloud-evaluation.md
1 parent 166df20 commit 695989f

1 file changed: +7 −7 lines

articles/ai-foundry/how-to/develop/cloud-evaluation.md

Lines changed: 7 additions & 7 deletions
@@ -22,7 +22,7 @@ The Azure AI Evaluation SDK supports running evaluations locally on your own machine
 
 In this article, you learn how to run evaluations in the cloud (preview) in pre-deployment testing on a test dataset. When you use the Azure AI Projects SDK, evaluation results are automatically logged into your Azure AI project for better observability. This feature supports all Microsoft-curated [built-in evaluators](../../concepts/observability.md#what-are-evaluators) and your own [custom evaluators](../../concepts/evaluation-evaluators/custom-evaluators.md). Your evaluators can be located in the [Evaluator library](../evaluate-generative-ai-app.md#view-and-manage-the-evaluators-in-the-evaluator-library) and have the same project-scope role-based access control (RBAC).
 
-## Prerequisites
+#Prerequisites
 
 - Azure AI Foundry project in the same supported [regions](../../concepts/evaluation-evaluators/risk-safety-evaluators.md#azure-ai-foundry-project-configuration-and-region-support) as risk and safety evaluators (preview). If you don't have an existing project, create one by following the guide [How to create Azure AI Foundry project](../create-projects.md?tabs=ai-studio).
 - Azure OpenAI Deployment with GPT model supporting `chat completion`. For example, `gpt-4`.
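For context on the paragraph above: when the Azure AI Projects SDK is used, the client is constructed from the project endpoint, and evaluation runs created through it are logged into that project. A minimal sketch, assuming the preview `azure-ai-projects` package and `DefaultAzureCredential`; `PROJECT_ENDPOINT` matches the environment variable defined later in this file.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# Connect to the Azure AI Foundry project; evaluation runs created through
# this client are logged into the project for observability.
project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],  # https://<account>.services.ai.azure.com/api/projects/<project>
    credential=DefaultAzureCredential(),
)
```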
@@ -50,12 +50,12 @@ If this is your first time running evaluations and logging it to your Azure AI Foundry project
 ```python
 import os
 
-# Required environment variables:
-endpoint = os.environ["PROJECT_ENDPOINT"] # https://<account>.services.ai.azure.com/api/projects/<project>
-model_endpoint = os.environ["MODEL_ENDPOINT"] # https://<account>.services.ai.azure.com
+Required environment variables:
+endpoint = os.environ["PROJECT_ENDPOINT"] https://<account>.services.ai.azure.com/api/projects/<project>
+model_endpoint = os.environ["MODEL_ENDPOINT"] https://<account>.services.ai.azure.com
 model_api_key = os.environ["MODEL_API_KEY"]
 
-# Optional: Reuse an existing dataset.
+Optional: Reuse an existing dataset.
 dataset_name = os.environ.get("DATASET_NAME", "dataset-test")
 dataset_version = os.environ.get("DATASET_VERSION", "1.0")
 ```
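As a hedged illustration of how `dataset_name` and `dataset_version` from this hunk are used: a test dataset can be uploaded to, or reused from, the project before the evaluation run. A sketch assuming the preview `project_client.datasets.upload_file` operation and a hypothetical local `./evaluate_test_data.jsonl` file; names may differ by SDK version.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

project_client = AIProjectClient(
    endpoint=os.environ["PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

dataset_name = os.environ.get("DATASET_NAME", "dataset-test")
dataset_version = os.environ.get("DATASET_VERSION", "1.0")

# Upload (or re-register) the local JSONL test data as a versioned dataset;
# the returned asset id is what the cloud evaluation run consumes.
data_id = project_client.datasets.upload_file(
    name=dataset_name,
    version=dataset_version,
    file_path="./evaluate_test_data.jsonl",  # hypothetical local file
).id
```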
@@ -162,7 +162,7 @@ from azure.ai.ml import MLClient
 from azure.ai.ml.entities import Model
 from promptflow.client import PFClient
 
-# Define ml_client to register the custom evaluator.
+# Define `ml_client` to register the custom evaluator.
 ml_client = MLClient(
     subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
     resource_group_name=os.environ["AZURE_RESOURCE_GROUP"],
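To complete the picture around this hunk: `ml_client` also needs a credential and the project (workspace) name beyond the subscription and resource group shown, and the custom evaluator is then registered as an asset. A minimal sketch, assuming `DefaultAzureCredential`, a hypothetical `AZURE_PROJECT_NAME` variable, and the `ml_client.evaluators` operations from recent `azure-ai-ml` releases; these names are assumptions and may differ by SDK version.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
    resource_group_name=os.environ["AZURE_RESOURCE_GROUP"],
    workspace_name=os.environ["AZURE_PROJECT_NAME"],  # assumed variable name
)

# Register a saved prompt-flow evaluator folder as an evaluator asset so it
# shows up in the project's Evaluator library.
custom_evaluator = Model(
    path="./friendliness_evaluator",  # hypothetical folder produced by pf_client.flows.save
    name="FriendlinessEvaluator",
    description="Sample custom evaluator.",
)
registered = ml_client.evaluators.create_or_update(custom_evaluator)
```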
@@ -213,7 +213,7 @@ model_config = dict(
     type="azure_openai"
 )
 
-# Define ml_client to register the custom evaluator.
+# Define `ml_client` to register the custom evaluator.
 ml_client = MLClient(
     subscription_id=os.environ["AZURE_SUBSCRIPTION_ID"],
     resource_group_name=os.environ["AZURE_RESOURCE_GROUP"],
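For reference, the `model_config = dict(..., type="azure_openai")` in this hunk is the judge-model configuration consumed by AI-assisted evaluators. A sketch of a complete configuration, assuming the environment variables defined earlier in this file plus a hypothetical `MODEL_DEPLOYMENT_NAME`:

```python
import os

# Configuration for the Azure OpenAI judge model used by AI-assisted evaluators.
model_config = dict(
    azure_endpoint=os.environ["MODEL_ENDPOINT"],  # https://<account>.services.ai.azure.com
    api_key=os.environ["MODEL_API_KEY"],
    azure_deployment=os.environ["MODEL_DEPLOYMENT_NAME"],  # hypothetical; e.g. "gpt-4"
    type="azure_openai",
)
```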
