Commit 1515fd3

Merge pull request #2133 from MicrosoftDocs/main
12/30/2024 AM Publish
2 parents c18cb3f + 132bff7 commit 1515fd3

File tree

2 files changed: +4 −6 lines

articles/ai-studio/concepts/evaluation-approach-gen-ai.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -9,7 +9,7 @@ ms.custom:
   - build-2024
   - ignite-2024
 ms.topic: conceptual
-ms.date: 5/21/2024
+ms.date: 12/23/2024
 ms.reviewer: mithigpe
 ms.author: lagayhar
 author: lgayhardt
```

articles/ai-studio/how-to/develop/evaluate-sdk.md

Lines changed: 3 additions & 5 deletions

````diff
@@ -444,20 +444,18 @@ Sometimes a large language model isn't needed for certain evaluation metrics. Th
 class AnswerLengthEvaluator:
     def __init__(self):
         pass
-
+    # A class is made a callable my implementing the special method __call__
     def __call__(self, *, answer: str, **kwargs):
         return {"answer_length": len(answer)}
 ```
 
 Then run the evaluator on a row of data by importing a callable class:
 
 ```python
-with open("answer_len/answer_length.py") as fin:
-    print(fin.read())
-
 from answer_len.answer_length import AnswerLengthEvaluator
 
-answer_length = AnswerLengthEvaluator()(answer="What is the speed of light?")
+answer_length_evaluator = AnswerLengthEvaluator()
+answer_length = answer_length_evaluator(answer="What is the speed of light?")
 print(answer_length)
 ```
````
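For readers following along, the post-change version of the snippet can be assembled into one self-contained sketch. This inlines the evaluator class instead of importing it from `answer_len/answer_length.py`, so it runs without that file on disk; it also fixes the comment typo ("my" → "by") present in the diffed source:

```python
# Sketch of the custom-evaluator pattern shown in the diff above: a plain
# callable class that computes a metric without calling a large language model.

class AnswerLengthEvaluator:
    def __init__(self):
        pass

    # A class is made callable by implementing the special method __call__.
    def __call__(self, *, answer: str, **kwargs):
        return {"answer_length": len(answer)}

# Post-change usage: instantiate once, then invoke on a row of data.
answer_length_evaluator = AnswerLengthEvaluator()
answer_length = answer_length_evaluator(answer="What is the speed of light?")
print(answer_length)  # {'answer_length': 27}
```

Splitting instantiation from invocation (the point of the change) makes the evaluator reusable across rows instead of constructing a new instance per call.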

0 commit comments
