
Commit 3f3c11f

update namespaces in evaluate-sdk.md
1 parent 036f62d · commit 3f3c11f


1 file changed: +9 -10 lines changed

articles/ai-studio/how-to/develop/evaluate-sdk.md

Lines changed: 9 additions & 10 deletions
````diff
@@ -87,17 +87,16 @@ When using AI-assisted performance and quality metrics, you must specify a GPT m
 You can run the built-in evaluators by importing the desired evaluator class. Ensure that you set your environment variables.
 
 ```python
 import os
-from promptflow.core import AzureOpenAIModelConfiguration
 
 # Initialize Azure OpenAI Connection with your environment variables
-model_config = AzureOpenAIModelConfiguration(
-    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
-    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
-    azure_deployment=os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
-    api_version=os.environ.get("AZURE_OPENAI_API_VERSION"),
-)
+model_config = {
+    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
+    "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
+    "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
+    "api_version": os.environ.get("AZURE_OPENAI_API_VERSION"),
+}
 
-from azure.ai.evaluation.evaluators import RelevanceEvaluator
+from azure.ai.evaluation import RelevanceEvaluator
 
 # Initializing Relevance Evaluator
 relevance_eval = RelevanceEvaluator(model_config)
````
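
Taken together, the updated snippet from this hunk runs along these lines; a minimal sketch, assuming the `AZURE_OPENAI_*` environment variables are set and that the evaluator is invoked with `query`/`response` keyword arguments (the call site is not shown in this diff):

```python
import os

from azure.ai.evaluation import RelevanceEvaluator  # new package-root import

# Model configuration is now a plain dict instead of a
# promptflow AzureOpenAIModelConfiguration object.
model_config = {
    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
    "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
    "api_version": os.environ.get("AZURE_OPENAI_API_VERSION"),
}

relevance_eval = RelevanceEvaluator(model_config)

# Hypothetical single-row spot check; the keyword names are an assumption.
score = relevance_eval(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
)
print(score)
```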
````diff
@@ -131,7 +130,7 @@ azure_ai_project = {
     "project_name": "<project_name>",
 }
 
-from azure.ai.evaluation.evaluators import ViolenceEvaluator
+from azure.ai.evaluation import ViolenceEvaluator
 
 # Initializing Violence Evaluator with project information
 violence_eval = ViolenceEvaluator(azure_ai_project)
````
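
The safety evaluators get the same namespace treatment; a sketch of the surrounding context, assuming the `azure_ai_project` dict also carries `subscription_id` and `resource_group_name` keys (only `project_name` is visible in this hunk):

```python
from azure.ai.evaluation import ViolenceEvaluator  # new package-root import

# Assumed full shape of the project dict; the hunk shows only its tail.
azure_ai_project = {
    "subscription_id": "<subscription_id>",
    "resource_group_name": "<resource_group_name>",
    "project_name": "<project_name>",
}

violence_eval = ViolenceEvaluator(azure_ai_project)
```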
````diff
@@ -329,7 +328,7 @@ After logging your custom evaluator to your AI Studio project, you can view it i
 After you spot-check your built-in or custom evaluators on a single row of data, you can combine multiple evaluators with the `evaluate()` API on an entire test dataset. In order to ensure the `evaluate()` can correctly parse the data, you must specify column mapping to map the column from the dataset to key words that are accepted by the evaluators. In this case, we specify the data mapping for `ground_truth`.
 
 ```python
-from azure.ai.evaluation.evaluate import evaluate
+from azure.ai.evaluation import evaluate
 
 result = evaluate(
     data="data.jsonl", # provide your data here
````
