
Commit c4579e1

Fix issue #69 (#73)

Fixes issue #69, where LLMs other than the OpenAI API were not being called.

1 parent: 2618ab6

File tree: 1 file changed (+1 −1)

continuous_eval/metrics/generation/text/llm_based.py

Lines changed: 1 addition & 1 deletion
@@ -36,7 +36,7 @@ def __call__(self, answer: str, retrieved_context: List[str], question: str, **k
         if self.classify_by_statement:
             # Context coverage uses the same prompt as faithfulness because it calculates how what proportion statements in the answer can be attributed to the context.
             # The difference is that faithfulness uses the generated answer, while context coverage uses ground truth answer (to evaluate context).
-            context_coverage = LLMBasedContextCoverage(use_few_shot=self.use_few_shot)
+            context_coverage = LLMBasedContextCoverage(model=self._llm, use_few_shot=self.use_few_shot)
             results = context_coverage(question, retrieved_context, answer)
             score = results["LLM_based_context_coverage"]
             reasoning = results["LLM_based_context_statements"]
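The one-line change matters because, without the model=self._llm argument, the nested LLMBasedContextCoverage instance fell back to its default (OpenAI) client even when the outer metric had been configured with a different provider. Below is a minimal usage sketch of the fixed code path; it assumes the enclosing class is LLMBasedFaithfulness and that a provider-specific client is built through the library's LLMFactory. The factory call, keyword names on the constructor, and the model string are assumptions for illustration, not taken from this commit.

    # Sketch under stated assumptions: LLMBasedFaithfulness and LLMFactory are names from
    # the continuous-eval package; the concrete model string below is illustrative only.
    from continuous_eval.llm_factory import LLMFactory  # assumption: factory for LLM clients
    from continuous_eval.metrics.generation.text.llm_based import LLMBasedFaithfulness

    # Configure a non-OpenAI model (illustrative name). Before this commit the inner
    # LLMBasedContextCoverage ignored it and instantiated the default OpenAI client.
    my_llm = LLMFactory("gemini-1.5-pro")  # assumption: positional model-name argument

    metric = LLMBasedFaithfulness(
        model=my_llm,                # assumption: stored as self._llm on the metric
        classify_by_statement=True,  # routes through the patched LLMBasedContextCoverage branch
        use_few_shot=True,
    )

    result = metric(
        answer="Paris is the capital of France.",
        retrieved_context=["Paris is the capital and most populous city of France."],
        question="What is the capital of France?",
    )
    # With the fix, both the outer metric and the nested context-coverage call use my_llm.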
