
Commit 581ad56

Update evaluate-sdk.md

1 parent 50a8ae2 commit 581ad56
File tree

1 file changed: +6 -6 lines changed

articles/ai-foundry/how-to/develop/evaluate-sdk.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -72,7 +72,7 @@ from azure.ai.evaluation import RelevanceEvaluator
 query = "What is the capital of life?"
 response = "Paris."

-# Initializing an evaluator
+# Initialize an evaluator:
 relevance_eval = RelevanceEvaluator(model_config)
 relevance_eval(query=query, response=response)
 ```
````
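The call pattern shown above treats an evaluator as a callable invoked with keyword arguments that returns a dictionary of scores. The custom `answer_length` evaluator referenced later in this diff can be sketched as a plain function of that shape; the implementation below is an illustrative assumption, not code taken from the doc:

```python
# Hypothetical stand-in illustrating the callable evaluator interface.
# Built-in evaluators such as RelevanceEvaluator are invoked the same
# way: as callables taking keyword arguments and returning score dicts.
def answer_length(*, response: str, **kwargs) -> dict:
    """Custom evaluator: scores a response by its character length."""
    return {"answer_length": len(response)}

result = answer_length(response="Paris.")
print(result)  # {'answer_length': 6}
```

Because any callable with this shape works, a function like this can be passed alongside built-in evaluators in the `evaluators` dictionary of `evaluate()`.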
````diff
@@ -162,7 +162,7 @@ Our evaluators understand that the first turn of the conversation provides valid
 For conversation mode, here's an example for `GroundednessEvaluator`:

 ```python
-# Conversation mode
+# Conversation mode:
 import json
 import os
 from azure.ai.evaluation import GroundednessEvaluator, AzureOpenAIModelConfiguration
````
````diff
@@ -174,7 +174,7 @@ model_config = AzureOpenAIModelConfiguration(
     api_version=os.environ.get("AZURE_API_VERSION"),
 )

-# Initializing the Groundedness and Groundedness Pro evaluators:
+# Initialize the Groundedness and Groundedness Pro evaluators:
 groundedness_eval = GroundednessEvaluator(model_config)

 conversation = {
````
````diff
@@ -350,12 +350,12 @@ To ensure the `evaluate()` API can correctly parse the data, you must specify co
 from azure.ai.evaluation import evaluate

 result = evaluate(
-    data="data.jsonl", # Provide your data here
+    data="data.jsonl", # Provide your data here:
     evaluators={
         "groundedness": groundedness_eval,
         "answer_length": answer_length
     },
-    # Column mapping
+    # Column mapping:
     evaluator_config={
         "groundedness": {
             "column_mapping": {
````
````diff
@@ -367,7 +367,7 @@ result = evaluate(
     },
     # Optionally, provide your Azure AI Foundry project information to track your evaluation results in your project portal.
     azure_ai_project = azure_ai_project,
-    # Optionally, provide an output path to dump a JSON file of metric summary, row level data, and metric and Azure AI project URL.
+    # Optionally, provide an output path to dump a JSON file of metric summary, row-level data, and the metric and Azure AI project URL.
     output_path="./myevalresults.json"
 )
 ```
````
