
Commit 50d5024

fix parameters
1 parent 07ed1a9 commit 50d5024

File tree

1 file changed: +18 −18 lines changed


articles/ai-studio/how-to/develop/evaluate-sdk.md

Lines changed: 18 additions & 18 deletions
````diff
@@ -404,24 +404,24 @@ The `evaluate()` API only accepts data in the JSONLines format. For all built-in
 
 When passing in your built-in evaluators, it's important to specify the right keyword mapping in the `evaluators` parameter list. The following is the keyword mapping required for the results from your built-in evaluators to show up in the UI when logged to Azure AI Studio.
 
-| Evaluator                    | keyword param         |
-|------------------------------|-----------------------|
-| `RelevanceEvaluator`         | "relevance"           |
-| `CoherenceEvaluator`         | "coherence"           |
-| `GroundednessEvaluator`      | "groundedness"        |
-| `FluencyEvaluator`           | "fluency"             |
-| `SimilarityEvaluator`        | "similarity"          |
-| `F1ScoreEvaluator`           | "f1_score"            |
-| `RougeScoreEvaluator`        | "rouge_score"         |
-| `GleuScoreEvaluator`         | "gleu_score"          |
-| `BleuScoreEvaluator`         | "bleu_score"          |
-| `MeteorScoreEvaluator`       | "meteor_score"        |
-| `ViolenceEvaluator`          | "violence"            |
-| `SexualEvaluator`            | "sexual"              |
-| `SelfHarmEvaluator`          | "self_harm"           |
-| `HateUnfairnessEvaluator`    | "hate_unfairness"     |
-| `QAEvaluator`                | "qa"                  |
-| `ContentSafetyEvaluator`     | "content_safety"      |
+| Evaluator                 | keyword param     |
+|---------------------------|-------------------|
+| `RelevanceEvaluator`      | "relevance"       |
+| `CoherenceEvaluator`      | "coherence"       |
+| `GroundednessEvaluator`   | "groundedness"    |
+| `FluencyEvaluator`        | "fluency"         |
+| `SimilarityEvaluator`     | "similarity"      |
+| `F1ScoreEvaluator`        | "f1_score"        |
+| `RougeScoreEvaluator`     | "rouge"           |
+| `GleuScoreEvaluator`      | "gleu"            |
+| `BleuScoreEvaluator`      | "bleu"            |
+| `MeteorScoreEvaluator`    | "meteor"          |
+| `ViolenceEvaluator`       | "violence"        |
+| `SexualEvaluator`         | "sexual"          |
+| `SelfHarmEvaluator`       | "self_harm"       |
+| `HateUnfairnessEvaluator` | "hate_unfairness" |
+| `QAEvaluator`             | "qa"              |
+| `ContentSafetyEvaluator`  | "content_safety"  |
 
 Here's an example of setting the `evaluators` parameters:
 ```python
````
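The keyword in the `evaluators` dict matters because it becomes the prefix on each evaluator's result columns, which is why this commit shortens keys such as `"rouge_score"` to `"rouge"`. Below is a minimal sketch of that mapping behavior, not the real Azure SDK: `run_evaluators`, `rouge_evaluator`, and the placeholder score are hypothetical stand-ins used only to illustrate how the chosen keyword shapes the output column names.

```python
# Hypothetical stand-in for a built-in evaluator: each evaluator is a callable
# that takes row fields and returns a dict of metric-name -> value.
def rouge_evaluator(response, ground_truth):
    # The real RougeScoreEvaluator computes ROUGE; this stub returns a fixed value.
    return {"rouge_score": 0.5}

def run_evaluators(row, evaluators):
    """Mimic how an evaluate()-style API prefixes each metric with its keyword."""
    results = {}
    for keyword, evaluator in evaluators.items():
        for metric, value in evaluator(**row).items():
            # The dict key chosen by the caller becomes the column prefix.
            results[f"{keyword}.{metric}"] = value
    return results

row = {"response": "Paris is the capital.", "ground_truth": "Paris"}
# With the corrected keyword "rouge" (not "rouge_score"), the column reads
# "rouge.rouge_score" instead of the redundant "rouge_score.rouge_score".
print(run_evaluators(row, {"rouge": rouge_evaluator}))
# → {'rouge.rouge_score': 0.5}
```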