An 'answer_relevancy' error occurred. #1417

@TakutoIyanagi-littletree

Description

  • I checked the documentation and related resources and couldn't find an answer to my question.

Your Question
The following error occurred when calling adapt() on the answer_relevancy metric:
NotImplementedError: adapt() is not implemented for answer_relevancy metric

Code Examples

from ragas.llms import llm_factory
from ragas import evaluate
from datasets import Dataset
from ragas.metrics._answer_relevance import ResponseRelevancy

# Configure the LLM used for Ragas evaluation
LLM_NAME = "gpt-4o-mini"
ragas_llm = llm_factory(model=LLM_NAME)

# Adapt the ResponseRelevancy metric to Japanese
answer_relevancy = ResponseRelevancy()
answer_relevancy.adapt(language="japanese")

dataset = Dataset.from_dict(
    {
        "question": questions,
        "answer": answers,
        "contexts": contexts,
        "ground_truth": ground_truths,
    }
)

score = evaluate(
    dataset,
    llm=ragas_llm,
    metrics=[
        answer_relevancy,
    ],
)
print(score)
score.to_pandas()
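
A minimal self-contained sketch (not the actual Ragas source; the class names below are illustrative assumptions) of why this error appears: a base metric class can raise NotImplementedError from adapt(), and any metric that does not override it, such as answer_relevancy here, surfaces that error to the caller.

```python
# Illustrative sketch only: mimics a base class whose adapt() is
# unimplemented, which is the pattern behind the reported error.
class SketchBaseMetric:
    """Base metric: adapt() must be overridden to be supported."""

    name = "base"

    def adapt(self, language: str):
        raise NotImplementedError(
            f"adapt() is not implemented for {self.name} metric"
        )


class SketchResponseRelevancy(SketchBaseMetric):
    name = "answer_relevancy"
    # No adapt() override, so the base-class error is raised.


metric = SketchResponseRelevancy()
try:
    metric.adapt(language="japanese")
except NotImplementedError as e:
    print(e)  # adapt() is not implemented for answer_relevancy metric
```

Until language adaptation is supported for this metric, guarding the call with try/except (as above) at least lets evaluation proceed with the default English prompts.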

Metadata

Assignees

No one assigned

    Labels

    bug: Something isn't working
    module-metrics: this is part of metrics module
    question: Further information is requested
    stale: Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests