
Commit c45423d

Update GEPA's reflection_lm (#8633)
* Update reflection_lm
* Ruff
1 parent 8f773dc commit c45423d

File tree: 1 file changed (+5 −4 lines)


dspy/teleprompt/gepa/gepa.py

Lines changed: 5 additions & 4 deletions
```diff
@@ -6,7 +6,6 @@
 from gepa import GEPAResult, optimize
 
 from dspy.clients.lm import LM
-from dspy.dsp.utils.settings import settings
 from dspy.primitives import Example, Module, Prediction
 from dspy.teleprompt.teleprompt import Teleprompter
 
@@ -199,7 +198,7 @@ def metric(
     Reflection based configuration:
     - reflection_minibatch_size: The number of examples to use for reflection in a single GEPA step.
     - candidate_selection_strategy: The strategy to use for candidate selection. Default is "pareto", which stochastically selects candidates from the Pareto frontier of all validation scores.
-    - reflection_lm: The language model to use for reflection. If not provided, student's LM is used, or dspy.settings.lm is used.
+    - reflection_lm: [Required] The language model to use for reflection. GEPA benefits from a strong reflection model, and you can use `dspy.LM(model='gpt-5', temperature=1.0, max_tokens=32000)` to get a good reflection model.
 
     Merge-based configuration:
     - use_merge: Whether to use merge-based optimization. Default is True.
@@ -273,7 +272,9 @@ def __init__(
         # Reflection based configuration
         self.reflection_minibatch_size = reflection_minibatch_size
         self.candidate_selection_strategy = candidate_selection_strategy
-        self.reflection_lm = reflection_lm
+        # self.reflection_lm = reflection_lm
+        assert reflection_lm is not None, "GEPA requires a reflection language model to be provided. Typically, you can use `dspy.LM(model='gpt-5', temperature=1.0, max_tokens=32000)` to get a good reflection model. Reflection LM is used by GEPA to reflect on the behavior of the program and propose new instructions, and will benefit from a strong model."
+        self.reflection_lm = lambda x: reflection_lm(x)[0]
         self.skip_perfect_score = skip_perfect_score
         self.add_format_failure_as_feedback = add_format_failure_as_feedback
 
@@ -409,7 +410,7 @@ def feedback_fn(
             rng=rng,
         )
 
-        reflection_lm = lambda x: (self.reflection_lm or settings.lm or student.get_lm())(x)[0]
+        reflection_lm = self.reflection_lm
 
         # Instantiate GEPA with the simpler adapter-based API
         base_program = {name: pred.signature.instructions for name, pred in student.named_predictors()}
```
