When using the latest `o3` and `o4-mini` models with the [Responses API](./responses.md), you can use the reasoning summary parameter to receive summaries of the model's chain of thought reasoning. This parameter can be set to `auto`, `concise`, or `detailed`. Access to this feature requires you to [Request Access](https://aka.ms/oai/o3access).
> [!NOTE]
> Even when enabled, reasoning summaries are not generated for every step/request. Based on current testing, expect a reasoning summary to be missing from roughly 20% of requests.
# [Python](#tab/py)
You'll need to upgrade your OpenAI client library for access to the latest parameters.
```cmd
pip install openai --upgrade
```
```python
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
```
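As a rough, non-authoritative sketch (not the article's exact sample), a request that asks for a detailed reasoning summary could look like the following. The endpoint, API version, deployment name, and prompt are placeholders to replace with your own values:

```python
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

# Placeholder endpoint, API version, and deployment name (assumptions for this sketch).
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
    azure_ad_token_provider=token_provider,
    api_version="2025-04-01-preview",  # replace with your target API version
)

response = client.responses.create(
    model="o4-mini",  # replace with your reasoning model deployment name
    input="Tell me about the curious case of neural text degeneration",  # example prompt (assumed)
    reasoning={
        "effort": "medium",
        "summary": "detailed",  # "auto", "concise", or "detailed"
    },
)

print(response.model_dump_json(indent=2))
```

A truncated response from a request like this is shown below, with omitted fields replaced by `...`. The reasoning item carries the `summary_text` entries, and the assistant message carries the final `output_text`.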
"text": "**Summarizing neural text degeneration**\n\nThe user's asking about \"The Curious Case of Neural Text Degeneration,\" a paper by Ari Holtzman et al. from 2020. It explains how certain decoding strategies produce repetitive and dull text. In contrast, methods like nucleus sampling yield more coherent and diverse outputs. The authors introduce metrics like surprisal and distinct-n for evaluation and suggest that maximum likelihood decoding often favors generic continuations, leading to loops and repetitive patterns in longer texts. They promote sampling from truncated distributions for improved text quality.",
456
+
"type": "summary_text"
457
+
},
458
+
{
459
+
"text": "**Explaining nucleus sampling**\n\nThe authors propose nucleus sampling, which captures a specified mass of the predictive distribution, improving metrics such as coherence and diversity. They identify a \"sudden drop\" phenomenon in token probabilities, where a few tokens dominate, leading to a long tail. By truncating this at a cumulative probability threshold, they aim to enhance text quality compared to top-k sampling. Their evaluations include human assessments, showing better results in terms of BLEU scores and distinct-n measures. Overall, they highlight how decoding strategies influence quality and recommend adaptive techniques for improved outcomes.",
"text": "Researchers first became aware that neural language models, when used to generate long stretches of text with standard “maximum‐likelihood” decoding (greedy search, beam search, etc.), often produce bland, repetitive or looping output. The 2020 paper “The Curious Case of Neural Text Degeneration” (Holtzman et al.) analyzes this failure mode and proposes a simple fix—nucleus (top‑p) sampling—that dramatically improves output quality.\n\n1. The Problem: Degeneration \n • With greedy or beam search, models tend to pick very high‑probability tokens over and over, leading to loops (“the the the…”) or generic, dull continuations. \n • Even sampling with a fixed top‑k (e.g. always sample from the 40 most likely tokens) can be suboptimal: if the model’s probability mass is skewed, k may be too small (overly repetitive) or too large (introducing incoherence).\n\n2. Why It Happens: Distributional Peakedness \n • At each time step the model’s predicted next‐token distribution often has one or two very high‑probability tokens, then a long tail of low‑probability tokens. \n • Maximum‐likelihood decoding zeroes in on the peak, collapsing diversity. \n • Uniform sampling over a large k allows low‑probability “wild” tokens, harming coherence.\n\n3. The Fix: Nucleus (Top‑p) Sampling \n • Rather than fixing k, dynamically truncate the distribution to the smallest set of tokens whose cumulative probability ≥ p (e.g. p=0.9). \n • Then renormalize and sample from that “nucleus.” \n • This keeps only the “plausible” mass and discards the improbable tail, adapting to each context.\n\n4. Empirical Findings \n • Automatic metrics (distinct‑n, repetition rates) and human evaluations show nucleus sampling yields more diverse, coherent, on‑topic text than greedy/beam or fixed top‑k. \n • It also outperforms simple temperature scaling (raising logits to 1/T) because it adapts to changes in the distribution’s shape.\n\n5. Takeaways for Practitioners \n • Don’t default to beam search for open-ended generation—its high likelihood doesn’t mean high quality. \n • Use nucleus sampling (p between 0.8 and 0.95) for a balance of diversity and coherence. \n • Monitor repetition and distinct‑n scores if you need automatic sanity checks.\n\nIn short, “neural text degeneration” is the tendency of likelihood‐maximizing decoders to produce dull or looping text. By recognizing that the shape of the model’s probability distribution varies wildly from step to step, nucleus sampling provides an elegant, adaptive way to maintain both coherence and diversity in generated text.",
472
+
"type": "output_text"
473
+
}
474
+
],
475
+
"role": "assistant",
476
+
"status": "completed",
477
+
"type": "message"
478
+
}
479
+
],
480
+
"parallel_tool_calls": true,
481
+
"temperature": 1.0,
482
+
"tool_choice": "auto",
483
+
"tools": [],
484
+
"top_p": 1.0,
485
+
"max_output_tokens": null,
486
+
"previous_response_id": null,
487
+
"reasoning": {
488
+
"effort": "medium",
489
+
"generate_summary": null,
490
+
"summary": "detailed"
491
+
},
492
+
"status": "completed",
493
+
"text": {
494
+
"format": {
495
+
"type": "text"
496
+
}
497
+
},
498
+
"truncation": "disabled",
499
+
"usage": {
500
+
"input_tokens": 16,
501
+
"output_tokens": 974,
502
+
"output_tokens_details": {
503
+
"reasoning_tokens": 384
504
+
},
505
+
"total_tokens": 990,
506
+
"input_tokens_details": {
507
+
"cached_tokens": 0
508
+
}
509
+
},
510
+
"user": null,
511
+
"store": true
512
+
}
513
+
```
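As the note above explains, a summary isn't generated for every request, so treat it as optional when you read the response. The following is a minimal sketch, assuming `response` is the object returned by the earlier example call:

```python
# Gather reasoning summary text, if any was produced for this request.
# `response` is the result of client.responses.create(...) from the sketch above.
summaries = [
    entry.text
    for item in response.output
    if item.type == "reasoning"
    for entry in (item.summary or [])
]

if summaries:
    print("Reasoning summary:\n\n" + "\n\n".join(summaries))
else:
    print("No reasoning summary was returned for this request.")

# The reasoning token count is reported separately inside the usage details.
print("Reasoning tokens:", response.usage.output_tokens_details.reasoning_tokens)
```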
## Markdown output
By default the `o3-mini` and `o1` models will not attempt to produce output that includes markdown formatting. A common case where this behavior is undesirable is when you want the model to return code inside a markdown code block. Without markdown formatting you lose features like syntax highlighting and copyable code blocks in interactive playground experiences. To override this default behavior and encourage markdown in model responses, add the string `Formatting re-enabled` to the beginning of your developer message.
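For example, a developer message that re-enables markdown could look like the sketch below. The deployment name and prompts are placeholder assumptions, and `client` is the one created in the earlier example:

```python
# Prepend "Formatting re-enabled" to the developer message so the model is
# encouraged to wrap code in markdown code blocks. Deployment name and prompt
# text are placeholder assumptions, not values from this article.
response = client.responses.create(
    model="o3-mini",  # replace with your deployment name
    input=[
        {
            "role": "developer",
            "content": "Formatting re-enabled - use markdown code blocks for any code in your reply."
        },
        {
            "role": "user",
            "content": "Write a Python function that reverses a string."
        }
    ]
)

print(response.output_text)
```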
The accompanying change to `articles/ai-services/openai/how-to/responses.md` (2 additions, 0 deletions) adds `o3` and `o4-mini` to the list of models supported by the responses API:

- `gpt-4.1` (Version: `2025-04-14`)
- `gpt-4.1-nano` (Version: `2025-04-14`)
- `gpt-4.1-mini` (Version: `2025-04-14`)
- `o3` (Version: `2025-04-16`)
- `o4-mini` (Version: `2025-04-16`)

Not every model is available in the regions supported by the responses API. Check the [models page](../concepts/models.md) for model region availability.