
Commit e91a672

Docs improvements (#1808)
Issue: Differences between tutorials and source code in **General Purpose Metrics**. Link: https://docs.ragas.io/en/latest/concepts/metrics/available_metrics/general_purpose/

Two updates:

1. Changing the library calls.

   Before:

   ```python
   from ragas.metrics import InstanceRubricsScore
   ...
   scorer = InstanceRubricsScore(llm=evaluator_llm)
   ```

   After:

   ```python
   from ragas.metrics import InstanceRubrics
   ...
   scorer = InstanceRubrics(llm=evaluator_llm)
   ```

2. Assigning the sample to a variable.

   Before:

   ```
   SingleTurnSample(
   ```

   After:

   ```
   sample = SingleTurnSample(
   ```
1 parent 23e8b4c · commit e91a672

1 file changed (+3 −3 lines)
docs/concepts/metrics/available_metrics/general_purpose.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -98,10 +98,10 @@ Instance specific evaluation metric is a rubric-based evaluation metric that is
 #### Example
 
 ```python
 from ragas.dataset_schema import SingleTurnSample
-from ragas.metrics import InstanceRubricsScore
+from ragas.metrics import InstanceRubrics
 
 
-SingleTurnSample(
+sample = SingleTurnSample(
     user_input="Where is the Eiffel Tower located?",
     response="The Eiffel Tower is located in Paris.",
     rubrics = {
@@ -113,6 +113,6 @@ SingleTurnSample(
     }
 )
 
-scorer = InstanceRubricsScore(llm=evaluator_llm)
+scorer = InstanceRubrics(llm=evaluator_llm)
 await scorer.single_turn_ascore(sample)
 ```
````
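For reference, here is a minimal sketch of the full corrected example as it reads after this commit. It makes two assumptions: `evaluator_llm` is an already-configured ragas LLM wrapper defined elsewhere, and the rubric entries shown are placeholders, since the diff above elides the actual dictionary contents between the two hunks.

```python
from ragas.dataset_schema import SingleTurnSample
from ragas.metrics import InstanceRubrics

sample = SingleTurnSample(
    user_input="Where is the Eiffel Tower located?",
    response="The Eiffel Tower is located in Paris.",
    # Placeholder rubric entries; the real ones are elided in the diff above.
    rubrics={
        "score0_description": "The response does not answer the question or is incorrect.",
        "score1_description": "The response correctly answers the question.",
    },
)

# evaluator_llm is assumed to be a ragas-wrapped LLM configured elsewhere.
scorer = InstanceRubrics(llm=evaluator_llm)
await scorer.single_turn_ascore(sample)  # run inside an async context, e.g. a notebook
```

Assigning the constructed sample to a variable is what makes the final `single_turn_ascore(sample)` call work, which is the point of the second change in this commit.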
