Bug: top_p incorrectly sent for gpt-5.2 with reasoning effort ≠ ‘none’ (OpenAIResponses) → 400 Unsupported parameter #20486
Replies: 2 comments
Good catch and clean proposed fix. The reasoning models have different parameter constraints that are easy to miss. Additional considerations:
Pattern we use:

```python
REASONING_INCOMPATIBLE_PARAMS = {"top_p", "temperature", "frequency_penalty"}

def sanitize_for_reasoning(kwargs, model, effort):
    if is_reasoning_mode(model, effort):
        stripped = {k: v for k, v in kwargs.items()
                    if k not in REASONING_INCOMPATIBLE_PARAMS}
        if len(stripped) != len(kwargs):
            logger.warning(f"Stripped params for reasoning mode: {set(kwargs) - set(stripped)}")
        return stripped
    return kwargs
```

We hit similar issues at Revolution AI when GPT-5 rolled out; maintaining a compatibility matrix helps. +1 for the fix!
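Building on that, the compatibility matrix can be a simple mapping from model prefix to the parameters that model rejects. A minimal sketch (the mapping name, prefixes, and banned sets here are illustrative assumptions, not the library's actual tables):

```python
# Illustrative compatibility matrix; model prefixes and the banned
# parameter sets are assumptions, not authoritative values.
INCOMPATIBLE_BY_MODEL = {
    "gpt-5.2": {"top_p", "temperature", "frequency_penalty"},
}

def strip_incompatible(model: str, kwargs: dict) -> dict:
    """Return kwargs minus any parameters the model is known to reject."""
    for prefix, banned in INCOMPATIBLE_BY_MODEL.items():
        if model.startswith(prefix):
            return {k: v for k, v in kwargs.items() if k not in banned}
    return kwargs
```

Keeping the matrix in one place makes it easy to extend when the next model family changes its accepted parameters.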
Good catch! GPT-5.2 reasoning mode has different parameter constraints. Workaround until patched:

```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="gpt-5.2",
    reasoning={"effort": "medium"},
    # Explicitly omit top_p
    additional_kwargs={
        # Do NOT include top_p here
    },
)

# Or override in the call
response = llm.complete(
    prompt,
    # Remove top_p from kwargs
)
```

Alternative: subclass to strip the params:

```python
class GPT52ReasoningLLM(OpenAI):
    def _prepare_kwargs(self, **kwargs):
        if self.reasoning and self.reasoning.get("effort") != "none":
            kwargs.pop("top_p", None)
            kwargs.pop("temperature", None)  # Also restricted
        return super()._prepare_kwargs(**kwargs)
```

Your fix looks correct:

```python
def _should_strip_sampling_params(self, model_kwargs):
    # ...
    if effort and str(effort).lower() != "none":
        return True
    return False

# Strip both:
if self._should_strip_sampling_params(kwargs):
    kwargs.pop("top_p", None)
    kwargs.pop("temperature", None)
```

We hit this deploying GPT-5.2 at Revolution AI; the reasoning mode parameter restrictions need better docs.
Hi team, I’m seeing a consistent 400 error when using OpenAIResponses with gpt-5.2 and any reasoning.effort value other than "none".
What happens:
When reasoning.effort is set (e.g., "low", "medium", etc.), requests fail with:

```
BadRequestError: Error code: 400 - {'error': {'message': "Unsupported parameter: 'top_p' is not supported with this model.", 'type': 'invalid_request_error', 'param': 'top_p', 'code': None}}
```

It looks like top_p is being included in the request payload (via model kwargs), but this model configuration rejects it.
https://platform.openai.com/docs/guides/latest-model#gpt-5-2-parameter-compatibility
Expected behavior
For gpt-5.2 + reasoning effort ≠ none, top_p should be omitted from the request (or stripped from model_kwargs) to avoid sending unsupported parameters.
Proposed fix
I have a patch that conditionally strips top_p when:
• the model name starts with gpt-5.2, and
• reasoning.effort is set and is not "none"
Proposed helper:
Then, when this returns True, the request builder should avoid including top_p (e.g., remove it from model_kwargs before constructing the request).