
Not able to calculate noise_sensitivity_relevant using Azure Open AI. #1283

@wanjeakshay

Description


[ ] I have checked the documentation and related resources and couldn't resolve my bug.

Describe the bug
I am able to calculate 4 metrics: {'faithfulness': 1.0000, 'answer_relevancy': 0.9880, 'context_recall': 1.0000, 'context_precision': 1.0000}.
But when I try to calculate noise_sensitivity_relevant, it fails saying the "ground_truth" column is not present,
even though my dataset does have a ground_truth column.

Ragas version: 0.1.18
Python version: 3.10.12

Code to Reproduce
from ragas import evaluate
from ragas.metrics import noise_sensitivity_relevant

metrics = [
    noise_sensitivity_relevant,
]
print(dataset)
result_2 = evaluate(
    dataset, metrics=metrics, llm=azure_model, embeddings=azure_embeddings
)

Error trace
Dataset({
features: ['question', 'ground_truth', 'answer', 'contexts', 'retreived_contexts'],
num_rows: 1
})

ValueError Traceback (most recent call last)
in <cell line: 6>()
4 ]
5 print(dataset)
----> 6 result_2 = evaluate(
7 dataset, metrics=metrics, llm=azure_model, embeddings=azure_embeddings
8 )

2 frames
/usr/local/lib/python3.10/dist-packages/ragas/_analytics.py in wrapper(*args, **kwargs)
    127 def wrapper(*args: P.args, **kwargs: P.kwargs) -> t.Any:
    128     track(IsCompleteEvent(event_type=func.__name__, is_completed=False))
--> 129     result = func(*args, **kwargs)
    130     track(IsCompleteEvent(event_type=func.__name__, is_completed=True))
    131

/usr/local/lib/python3.10/dist-packages/ragas/evaluation.py in evaluate(dataset, metrics, llm, embeddings, callbacks, in_ci, run_config, token_usage_parser, raise_exceptions, column_map)
175
176 if isinstance(dataset, EvaluationDataset):
--> 177 validate_required_columns(dataset, metrics)
178 validate_supported_metrics(dataset, metrics)
179

/usr/local/lib/python3.10/dist-packages/ragas/validation.py in validate_required_columns(ds, metrics)
60 available_columns = ds.features()
61 if not required_columns.issubset(available_columns):
---> 62 raise ValueError(
63 f"The metric [{m.name}] that that is used requires the following "
64 f"additional columns {list(required_columns - available_columns)} "

ValueError: The metric [noise_sensitivity_relevant] that that is used requires the following additional columns ['ground_truth'] to be present in the dataset.
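For what it's worth, the check that raises here (ragas/validation.py, visible in the trace above) boils down to a set-subset test on column names. A minimal stand-alone sketch, reconstructed from the traceback with illustrative column sets (not ragas' actual internals):

```python
# Sketch of the check in ragas/validation.py, reconstructed from the
# traceback above; not the library's exact code.
def validate_required_columns(available_columns, required_columns, metric_name):
    # Raise if any column the metric needs is absent from the dataset.
    if not required_columns.issubset(available_columns):
        raise ValueError(
            f"The metric [{metric_name}] that is used requires the following "
            f"additional columns {sorted(required_columns - available_columns)} "
            f"to be present in the dataset."
        )

# Column sets below are illustrative. The reported error would be consistent
# with 'ground_truth' having been dropped or renamed by the time the dataset
# reaches validation:
try:
    validate_required_columns(
        {"question", "answer", "contexts"},
        {"question", "answer", "contexts", "ground_truth"},
        "noise_sensitivity_relevant",
    )
except ValueError as err:
    print(err)
```

Since the comparison is over column *sets*, the dataset printing a ground_truth feature does not guarantee the column still exists after whatever conversion happens inside evaluate() before validation runs.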

Expected behavior
The evaluation should run and return the noise_sensitivity_relevant score, as it does for the other four metrics.


Labels: bug (Something isn't working)