fix: propagate optimizer_model to algorithm #2424
Open
Redtius wants to merge 1 commit into confident-ai:main from
Conversation
Bug that prevents MIPROv2 from working with custom LLMs
Description
Hello! I've been experimenting with DeepEval lately—y'all have done a very good job on this project.
However, I noticed a blocker while experimenting with MIPROv2, especially when using a custom LLM implementing DeepEvalBaseLLM. Even though I passed the optimizer_model to the PromptOptimizer constructor, I ran into the following exception:
```
[MIPROv2] • error DeepEvalError: MIPROv2 requires an optimizer_model for instruction proposal. Set it via PromptOptimizer. • halted before first iteration
```

Root Cause
After digging into the source, I found that self.algorithm.optimizer_model remained None because it was never assigned during the configuration step in PromptOptimizer. Since MIPROv2 runs a proposer phase before the main optimization loop, the model must be explicitly set on the algorithm instance by that point.
I’ve submitted this PR as a quick fix to ensure the model is correctly propagated. If you have a better architectural idea for handling this assignment, I'd be glad to help adjust it!
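To illustrate the shape of the fix, here is a minimal, self-contained sketch. The class internals below are hypothetical stand-ins, not DeepEval's actual PromptOptimizer or MIPROv2 implementation; the point is only the propagation step during configuration.

```python
# Hypothetical stand-in classes illustrating the propagation fix.
# These are NOT DeepEval's real internals, just a minimal model of them.

class MIPROv2Algorithm:
    def __init__(self):
        # Stays None unless something explicitly assigns it.
        self.optimizer_model = None

    def propose_instructions(self) -> str:
        # The proposer phase runs before the main optimization loop,
        # so the model must already be set when we get here.
        if self.optimizer_model is None:
            raise RuntimeError(
                "MIPROv2 requires an optimizer_model for instruction proposal."
            )
        return f"instructions proposed by {self.optimizer_model}"


class PromptOptimizer:
    def __init__(self, algorithm, optimizer_model=None):
        self.algorithm = algorithm
        self.optimizer_model = optimizer_model
        # The fix: propagate the model down to the algorithm instance
        # during configuration instead of leaving it None.
        if optimizer_model is not None:
            self.algorithm.optimizer_model = optimizer_model


algorithm = MIPROv2Algorithm()
PromptOptimizer(algorithm, optimizer_model="my-custom-llm")
print(algorithm.propose_instructions())
```

Without the assignment in the constructor, `propose_instructions` raises before the first iteration, which matches the error above.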
Reproducible Case
I prepared a mock case that reproduces the issue without requiring any external API keys or complex setups:
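A mock along these lines can look like the sketch below. To keep it runnable without deepeval installed, DeepEvalBaseLLM is stubbed out here; in the real reproduction it would be imported from deepeval instead, and the method names mirror that interface.

```python
# Stand-in for deepeval's DeepEvalBaseLLM so this sketch runs offline.
# In the actual reproduction, import the real base class from deepeval.
class DeepEvalBaseLLM:
    def load_model(self): ...
    def generate(self, prompt: str) -> str: ...
    async def a_generate(self, prompt: str) -> str: ...
    def get_model_name(self) -> str: ...


class MockLLM(DeepEvalBaseLLM):
    """Deterministic fake LLM: canned responses, no API keys, no network."""

    def load_model(self):
        return self

    def generate(self, prompt: str) -> str:
        return "mock response"

    async def a_generate(self, prompt: str) -> str:
        return self.generate(prompt)

    def get_model_name(self) -> str:
        return "mock-llm"
```

Passing an instance of such a mock as the optimizer_model makes the missing-propagation failure reproducible deterministically, since no external service is involved.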