Description
[x] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug
Both `LangchainLLMWrapper.generate_text` and `LangchainLLMWrapper.agenerate_text` set the inner `langchain_llm`'s temperature to the method's `temperature` argument and leave it that way after the call returns, instead of restoring the model's original temperature.
Ragas version: 0.2.15
Python version: 3.13.3
Code to Reproduce
ragas/ragas/src/ragas/llms/base.py
Lines 195 to 233 in 0773595
```python
def generate_text(
    self,
    prompt: PromptValue,
    n: int = 1,
    temperature: t.Optional[float] = None,
    stop: t.Optional[t.List[str]] = None,
    callbacks: Callbacks = None,
) -> LLMResult:
    # figure out the temperature to set
    old_temperature: float | None = None
    if temperature is None:
        temperature = self.get_temperature(n=n)
    if hasattr(self.langchain_llm, "temperature"):
        self.langchain_llm.temperature = temperature  # type: ignore
        old_temperature = temperature
    if is_multiple_completion_supported(self.langchain_llm):
        result = self.langchain_llm.generate_prompt(
            prompts=[prompt],
            n=n,
            stop=stop,
            callbacks=callbacks,
        )
    else:
        result = self.langchain_llm.generate_prompt(
            prompts=[prompt] * n,
            stop=stop,
            callbacks=callbacks,
        )
    # make LLMResult.generation appear as if it was n_completions
    # note that LLMResult.runs is still a list that represents each run
    generations = [[g[0] for g in result.generations]]
    result.generations = generations
    # reset the temperature to the original value
    if old_temperature is not None:
        self.langchain_llm.temperature = old_temperature  # type: ignore
    return result
```
`old_temperature` does not actually store the original temperature of the inner `langchain_llm`; it stores the value that was just written, i.e. `temperature` or `self.get_temperature(n=n)`. The "reset" at the end is therefore a no-op, and the inner model is left with the overridden temperature, as the snippet below illustrates.

Also, why is `self.get_temperature(n=n)` used when `temperature` is None, instead of respecting the `langchain_llm`'s own setting and using that?
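A hypothetical reproduction (the model name and prompt are arbitrary, and an OpenAI key is needed to actually run it):

```python
from langchain_core.prompt_values import StringPromptValue
from langchain_openai import ChatOpenAI

from ragas.llms import LangchainLLMWrapper

inner = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)
llm = LangchainLLMWrapper(inner)

print(inner.temperature)  # 0.7
llm.generate_text(StringPromptValue(text="hello"), temperature=0.2)
print(inner.temperature)  # 0.2 -- the pre-call value 0.7 was never saved or restored
```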
Error trace
Expected behavior
`generate_text` and `agenerate_text` should snapshot the inner `langchain_llm`'s temperature before overriding it and restore that snapshot after generation, so the wrapped model is left exactly as it was. A minimal sketch of the save/restore logic I would expect (not an official patch; it reuses the method's existing names and adds a `try`/`finally` so the restore also runs on errors):
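```python
old_temperature: t.Optional[float] = None
if temperature is None:
    temperature = self.get_temperature(n=n)
if hasattr(self.langchain_llm, "temperature"):
    # capture the model's *current* temperature before overriding it
    # (a sentinel would be needed if the original value can itself be None)
    old_temperature = self.langchain_llm.temperature  # type: ignore
    self.langchain_llm.temperature = temperature  # type: ignore
try:
    ...  # generate_prompt call and generations reshaping, unchanged
finally:
    # restore the original temperature even if generation raises
    if old_temperature is not None:
        self.langchain_llm.temperature = old_temperature  # type: ignore
```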
Additional context
`agenerate_text` duplicates the same save/restore logic in its async form, so any fix should be applied to both methods.