
Commit 38f6214

minor updates for clarity
1 parent ef91f7b commit 38f6214

1 file changed: +2 -2 lines changed

articles/ai-studio/how-to/develop/evaluate-sdk.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -269,7 +269,7 @@ Built-in evaluators are great out of the box to start evaluating your applicatio

 ### Code-based evaluators

-Sometimes a large language model isn't needed for certain evaluation metrics. This is when code-based evaluators can give you the flexibility to define metrics based on functions or callable class. You can create your own code-based evaluator, for example, with a simple Python class that calculates the length of an answer in `answer_length.py` under directory `answer_len/`:
+Sometimes a large language model isn't needed for certain evaluation metrics. This is when code-based evaluators can give you the flexibility to define metrics based on functions or callable class. You can build your own code-based evaluator, for example, by creating a simple Python class that calculates the length of an answer in `answer_length.py` under directory `answer_len/`:

 ```python
 class AnswerLengthEvaluator:
@@ -279,7 +279,7 @@ class AnswerLengthEvaluator:
     def __call__(self, *, answer: str, **kwargs):
         return {"answer_length": len(answer)}
 ```
-Then run the evalutor on a row of data by importing a callable class:
+Then run the evaluator on a row of data by importing a callable class:

 ```python
 with open("answer_len/answer_length.py") as fin:
````
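For context (not part of this commit), the second snippet is truncated here after its first line. Below is a minimal sketch of how a callable-class evaluator like the one above could be loaded and run on a single row of data; the import path `answer_len.answer_length` and the sample answer string are illustrative assumptions, not taken from the diff.

```python
# Sketch only: assumes the working directory contains answer_len/ with an
# answer_length.py that defines AnswerLengthEvaluator as shown in the diff.
with open("answer_len/answer_length.py") as fin:
    print(fin.read())  # inspect the evaluator's source

from answer_len.answer_length import AnswerLengthEvaluator

# Instantiate the callable class and evaluate one row of data.
answer_length_evaluator = AnswerLengthEvaluator()
result = answer_length_evaluator(answer="The capital of France is Paris.")
print(result)  # {'answer_length': 31}
```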

0 commit comments
