
Commit 8d20d9d

Acro fixes
1 parent 8447e1c commit 8d20d9d

File tree

1 file changed: +4 -4 lines changed


articles/ai-studio/how-to/develop/evaluate-sdk.md

Lines changed: 4 additions & 4 deletions
@@ -96,7 +96,7 @@ For evaluators that support conversations as input, you can just pass in the con
 relevance_score = relevance_eval(conversation=conversation)
 ```

-A conversation is a python dictionary of a list of messages (which include content, role, and optionally context). The following is an example of a two-turn conversation.
+A conversation is a Python dictionary of a list of messages (which include content, role, and optionally context). The following is an example of a two-turn conversation.

 ```json
 {"conversation":
@@ -172,7 +172,7 @@ Here's an example of the result:

 ### Risk and safety evaluators

-When you use AI-assisted risk and safety metrics, a GPT model isn't required. Instead of `model_config`, provide your `azure_ai_project` information. This accesses the Azure AI Studio safety evaluations back-end service, which provisions an GPT model specific to harms evaluation that can generate content risk severity scores and reasoning to enable the safety evaluators.
+When you use AI-assisted risk and safety metrics, a GPT model isn't required. Instead of `model_config`, provide your `azure_ai_project` information. This accesses the Azure AI Studio safety evaluations back-end service, which provisions a GPT model specific to harms evaluation that can generate content risk severity scores and reasoning to enable the safety evaluators.

 #### Region support
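
To make the corrected paragraph concrete, here is a hedged sketch of swapping `model_config` for `azure_ai_project` when constructing a safety evaluator. The `ViolenceEvaluator` import path follows the promptflow-evals package this article covers, but treat the exact constructor and call signatures as assumptions and check the SDK reference; the project identifiers are placeholders.

```python
# Hedged sketch: safety evaluators take azure_ai_project instead of
# model_config, per the paragraph above. Import path and signatures are
# assumptions based on the promptflow-evals SDK; identifiers are placeholders.
from promptflow.evals.evaluators import ViolenceEvaluator

azure_ai_project = {
    "subscription_id": "<your-subscription-id>",
    "resource_group_name": "<your-resource-group>",
    "project_name": "<your-ai-studio-project>",
}

violence_eval = ViolenceEvaluator(azure_ai_project)
result = violence_eval(
    question="What is the capital of France?",  # field names assumed
    answer="Paris is the capital of France.",
)
print(result)  # expected to include a severity score plus reasoning
```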

@@ -366,7 +366,7 @@ assistant: {{response}}
 output:
 ```

-You can create your own prompty-based evaluator and run it on a row of data:
+You can create your own Prompty-based evaluator and run it on a row of data:

 ```python
 with open("apology.prompty") as fin:
@@ -548,7 +548,7 @@ result = evaluate(

 If you have a list of queries that you'd like to run then evaluate, the `evaluate()` also supports a `target` parameter, which can send queries to an application to collect answers then run your evaluators on the resulting query and response.

-A target can be any callable class in your directory. In this case we have a python script `askwiki.py` with a callable class `askwiki()` that we can set as our target. Given a dataset of queries we can send into our simple `askwiki` app, we can evaluate the relevance of the outputs. Ensure you specify the proper column mapping for your data in `"column_mapping"`. You can use `"default"` to specify column mapping for all evaluators.
+A target can be any callable class in your directory. In this case we have a Python script `askwiki.py` with a callable class `askwiki()` that we can set as our target. Given a dataset of queries we can send into our simple `askwiki` app, we can evaluate the relevance of the outputs. Ensure you specify the proper column mapping for your data in `"column_mapping"`. You can use `"default"` to specify column mapping for all evaluators.

 ```python
 from askwiki import askwiki
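
Finally, a hedged sketch of the target flow the corrected paragraph describes: `evaluate()` sends each dataset query to the `askwiki` callable, then scores the resulting query/response pairs. The `evaluate` import path, the data file name, and the column-mapping keys are assumptions for illustration; `relevance_eval` is the evaluator built earlier in the article.

```python
# Sketch of the target pattern described above. evaluate() calls askwiki for
# every row in the dataset, then runs the evaluators on the query/response
# pairs. Import path, file name, and mapping keys are illustrative assumptions.
from askwiki import askwiki
from promptflow.evals.evaluate import evaluate

result = evaluate(
    data="queries.jsonl",  # one query per line; placeholder name
    target=askwiki,
    evaluators={"relevance": relevance_eval},
    evaluator_config={
        "default": {  # "default" applies this mapping to all evaluators
            "column_mapping": {
                "query": "${data.queries}",       # dataset column, assumed
                "response": "${target.response}", # askwiki output field, assumed
            }
        }
    },
)
```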
