Closed
Labels
bug (Something isn't working), module-metrics (this is part of metrics module)
Description
GitHub Issue Title:
Problem with answer_relevancy Metric in Ragas Framework
Description:
Hello,
I am experiencing an issue with the Ragas evaluation framework when using the answer_relevancy metric. Other metrics work fine, but the answer_relevancy metric causes an error. Below is a snippet of the code I am using and the corresponding error message.
Code:

```python
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
    answer_similarity,
    answer_correctness,
)

result = evaluate(
    dataset=dataset,
    llm=llm,
    embeddings=embeddings,
    metrics=[
        faithfulness,
        answer_relevancy,
        context_recall,
        context_precision,
        answer_similarity,
        answer_correctness,
    ],
)
result
```

Error Message:
BadRequestError: Error code: 400 - {'object': 'error', 'message': 'best_of must be 1 when using greedy sampling.Got 3.', 'type': 'BadRequestError', 'param': None, 'code': 400}
The error appears to come from a request that sets best_of=3 while using greedy sampling. I have verified that I am not explicitly setting the best_of parameter anywhere in my code, so it seems to be set internally when the answer_relevancy metric generates its completions.
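For context, the constraint the server seems to enforce can be sketched as below. This is a minimal, hypothetical mirror of the backend check implied by the error message (the function name `validate_sampling` is my own, not from any library): greedy sampling (temperature 0) is deterministic, so requesting multiple candidates via best_of > 1 is rejected.

```python
def validate_sampling(temperature: float, best_of: int) -> None:
    """Reject best_of > 1 under greedy sampling, mirroring the 400 error above."""
    if temperature == 0 and best_of != 1:
        raise ValueError(
            f"best_of must be 1 when using greedy sampling. Got {best_of}."
        )


# Sampling with temperature > 0 is fine with several candidates:
validate_sampling(temperature=0.7, best_of=3)

# Greedy sampling with best_of=3 triggers the same complaint as the server:
# validate_sampling(temperature=0.0, best_of=3)  # raises ValueError
```

If this is indeed the check being hit, the fix would presumably be to make the metric's internal request use either best_of=1 or a non-zero temperature.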
Any guidance on how to resolve this issue would be greatly appreciated.
Thank you!