## Instance-Specific Rubrics Criteria Scoring

The instance-specific evaluation metric is a rubric-based method used to evaluate each item in a dataset individually: every item is annotated with its own rubric, a set of score descriptions (typically ranging from 1 to 5) that the LLM uses to grade the response. The metric comes in reference-free and reference-based variations, and is most useful when each instance in your dataset requires highly customized evaluation criteria. To use it, provide a rubric along with each item you want to evaluate.

!!! note
    This differs from the `Rubric Based Criteria Scoring Metric`, where a single rubric is applied uniformly to evaluate all items in the dataset. With the `Instance-Specific Evaluation Metric`, you decide which rubric to use for each item. It's like the difference between giving the entire class the same quiz (rubric-based) and creating a personalized quiz for each student (instance-specific), as illustrated below.
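
As a concrete illustration of that difference, here is a minimal sketch using plain dictionaries only (no ragas calls); the Eiffel Tower row and both rubrics are made up for illustration.

```python
# Rubric-based scoring: one shared rubric is reused for every row in the dataset.
shared_rubric = {
    "score1_description": "The response is incorrect or irrelevant to the question.",
    "score5_description": "The response is fully correct, clear, and complete.",
}

# Instance-specific scoring: each row carries its own rubric, so different
# rows can be graded against entirely different criteria.
row = {
    "user_query": "Where is the Eiffel Tower located?",
    "response": "The Eiffel Tower is located in Paris.",
    "rubrics": {
        "score1_description": "The stated location is wrong or missing.",
        "score5_description": "The location is correctly given as Paris, with no factual errors or awkward phrasing.",
    },
}
```
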
#### Example

```python
from ragas.dataset_schema import SingleTurnSample
from ragas.metrics import InstanceRubrics

dataset = [
    # Relevance to Query
    {
        "user_query": "How do I handle exceptions in Python?",
        "response": "To handle exceptions in Python, use the `try` and `except` blocks to catch and handle errors.",
        "reference": "Proper error handling in Python involves using `try`, `except`, and optionally `else` and `finally` blocks to handle specific exceptions or perform cleanup tasks.",
        "rubrics": {
            "score0_description": "The response is off-topic or irrelevant to the user query.",
            "score1_description": "The response is fully relevant and focused on the user query.",
        },
    },
    # Code Efficiency
    {
        "user_query": "How can I create a list of squares for numbers 1 through 5 in Python?",
        "response": """
# Using a for loop
squares = []
for i in range(1, 6):
    squares.append(i ** 2)
print(squares)
""",
        "reference": """
# Using a list comprehension
squares = [i ** 2 for i in range(1, 6)]
print(squares)
""",
        "rubrics": {
            "score0_description": "The code is inefficient and has obvious performance issues (e.g., unnecessary loops or redundant calculations).",
            "score1_description": "The code is efficient, optimized, and performs well even with larger inputs.",
        },
    },
]
```
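
To actually score these items, each row can be turned into a `SingleTurnSample` and passed to the `InstanceRubrics` metric. The snippet below is a minimal sketch rather than the canonical recipe: it continues from the `dataset` and imports defined above, assumes `langchain-openai` is installed with an OpenAI API key configured, and uses `gpt-4o-mini` purely as an example evaluator model.

```python
import asyncio

from langchain_openai import ChatOpenAI
from ragas.llms import LangchainLLMWrapper

# Assumption: gpt-4o-mini via LangChain is just one possible evaluator LLM;
# any LLM wrapper supported by ragas (plus a valid API key) works here.
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))

scorer = InstanceRubrics(llm=evaluator_llm)

async def score_all(rows):
    results = []
    for row in rows:
        # Each row becomes a SingleTurnSample carrying its own rubric,
        # which the metric uses to grade that specific response.
        sample = SingleTurnSample(
            user_input=row["user_query"],
            response=row["response"],
            reference=row["reference"],
            rubrics=row["rubrics"],
        )
        results.append(await scorer.single_turn_ascore(sample))
    return results

scores = asyncio.run(score_all(dataset))
print(scores)
```

Because every sample carries its own `rubrics`, the same metric instance grades the first item on relevance to the query and the second on code efficiency.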