
Problem with answer_relevancy Metric in Ragas Framework #1162

@huangxuyh

Description:

Hello,

I am experiencing an issue with the Ragas evaluation framework when using the answer_relevancy metric. Other metrics work fine, but the answer_relevancy metric causes an error. Below is a snippet of the code I am using and the corresponding error message.

Code:

from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    context_recall,
    context_precision,
    answer_similarity,
    answer_correctness,
)

# dataset, llm, and embeddings are defined earlier in my script
result = evaluate(
    dataset=dataset,
    llm=llm,
    embeddings=embeddings,
    metrics=[
        faithfulness,
        answer_relevancy,
        context_recall,
        context_precision,
        answer_similarity,
        answer_correctness,
    ],
)
result

Error Message:

BadRequestError: Error code: 400 - {'object': 'error', 'message': 'best_of must be 1 when using greedy sampling.Got 3.', 'type': 'BadRequestError', 'param': None, 'code': 400}

The error is a BadRequestError whose message says that best_of must be 1 when greedy sampling is used, yet I have verified that I am not explicitly setting the best_of parameter anywhere in my code.
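For what it's worth, my current hypothesis (an assumption on my part, not confirmed by the Ragas docs): answer_relevancy asks the LLM for several generated-question variants per sample, controlled by its strictness field (default 3), and an OpenAI-compatible server such as vLLM appears to translate that multi-completion request into best_of=3, which it rejects when sampling is greedy (temperature 0). Below is a minimal sketch of two possible workarounds, assuming Ragas' AnswerRelevancy class and a LangChain ChatOpenAI client:

from langchain_openai import ChatOpenAI
from ragas.metrics import AnswerRelevancy

# Workaround 1: request a single generation per sample so the server
# never sees best_of > 1. strictness is a field on AnswerRelevancy
# (default 3); lowering it to 1 trades some robustness for compatibility.
answer_relevancy_single = AnswerRelevancy(strictness=1)

# Workaround 2: keep strictness at its default but avoid greedy sampling
# by giving the underlying LLM a small positive temperature. The model
# name here is a placeholder for whatever endpoint is actually in use.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.1)

Either the adjusted metric instance or the adjusted llm could then be passed to evaluate() in place of the originals above, but I have not been able to confirm this is the intended fix.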

Any guidance on how to resolve this issue would be greatly appreciated.

Thank you!

Metadata

Assignees: no one assigned
Labels: bug (Something isn't working), module-metrics (this is part of metrics module)
