
fix: propagate optimizer_model to algorithm #2424

Open

Redtius wants to merge 1 commit into confident-ai:main from Redtius:fix/mipro-optimizer-model-and-validation

Conversation

Redtius commented Jan 9, 2026

Description

Hello! I've been experimenting with DeepEval lately—y'all have done a very good job on this project.
However, I noticed a blocker while experimenting with MIPROv2, especially when using a custom LLM that implements DeepEvalBaseLLM. Even though I passed optimizer_model to the PromptOptimizer constructor, I ran into the following exception:
[MIPROv2] • error DeepEvalError: MIPROv2 requires an optimizer_model for instruction proposal. Set it via PromptOptimizer. • halted before first iteration

Root Cause

After digging into the source, I found that self.algorithm.optimizer_model remained None because it was never assigned during the configuration step in PromptOptimizer. Since MIPROv2 runs a proposer phase before the main optimization loop, it needs this model to be explicitly set on the algorithm instance.
I've submitted this PR as a quick fix to ensure the model is correctly propagated (sketched below). If you have a better architectural idea for handling this assignment, I'd be glad to help adjust it!
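For reference, the fix amounts to a one-line propagation during the configuration step. The sketch below is illustrative rather than the exact diff: the constructor signature is inferred from the reproduction script further down, and only the final assignment is the actual change being proposed.

# Illustrative sketch only; the real PromptOptimizer does more configuration.
class PromptOptimizer:
    def __init__(self, algorithm, optimizer_model=None, model_callback=None, metrics=None):
        self.algorithm = algorithm
        self.optimizer_model = optimizer_model
        self.model_callback = model_callback
        self.metrics = metrics
        # The fix: hand the optimizer model to the algorithm instance so that
        # MIPROv2's proposer phase finds it before the first iteration.
        self.algorithm.optimizer_model = optimizer_model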

Reproducible Case

I prepared a mock case that reproduces the issue without requiring any external API keys or complex setups:

from deepeval.models.base_model import DeepEvalBaseLLM
from deepeval.optimizer import PromptOptimizer
from deepeval.optimizer.algorithms import MIPROV2
from deepeval.dataset import Golden
from deepeval.prompt import Prompt
from deepeval.metrics import BaseMetric
from deepeval.test_case import LLMTestCase

# Minimal custom LLM stub that satisfies the DeepEvalBaseLLM interface.
class MockLLM(DeepEvalBaseLLM):
    def __init__(self):
        pass
    def load_model(self):
        return self
    def generate(self, prompt: str) -> str:
        return "response"
    async def a_generate(self, prompt: str) -> str:
        return "response"
    def get_model_name(self):
        return "LLM"

mock_model = MockLLM()
goldens = [Golden(input="Hi", expected_output="Hello"), Golden(input="Hru?", expected_output="Fine, U?")]
prompt = Prompt(text_template="You are a mock assistant.")

# Metric stub that always passes, so no judge model is needed.
class MockMetric(BaseMetric):
    def __init__(self):
        self.threshold = 0.5
    def measure(self, test_case: LLMTestCase):
        self.score = 1.0
        return self.score
    async def a_measure(self, test_case: LLMTestCase):
        return self.measure(test_case)
    def is_successful(self):
        return True
    @property
    def __name__(self):
        return "Metric"

# App-side callback invoked by the optimizer; returns a canned completion.
def model_callback(prompt: Prompt, thread_id: str) -> str:
    return "Mocked output"

optimizer = PromptOptimizer(
    algorithm=MIPROV2(),
    optimizer_model=mock_model,
    model_callback=model_callback,
    metrics=[MockMetric()]
)
try:
    result = optimizer.optimize(prompt=prompt, goldens=goldens)
    print(result.text_template)
except Exception as e:
    print(f"\nBUG: {e}")
    print("\nEvidence: Even though 'optimizer_model' was passed to PromptOptimizer,")
    print(f"the algorithm instance has optimizer_model = {optimizer.algorithm.optimizer_model}")

Bug that prevents MIPROv2 from working with custom LLMs
vercel bot commented Jan 9, 2026

Someone is attempting to deploy a commit to the Confident AI Team on Vercel.

A member of the Team first needs to authorize it.

greptile-apps bot (Contributor) commented Jan 9, 2026

PR author is not in the allowed authors list.

Redtius (Author) commented Jan 26, 2026

@penguine-ip
