@@ -62,7 +62,7 @@ Azure OpenAI reasoning models are designed to tackle reasoning and problem-solvi
 | Streaming | ✅ | ✅ | ✅|
 <sup>1</sup> Parallel tool calls are not supported when `reasoning_effort` is set to `minimal`<br><br>
-<sup>2</sup> Reasoning models will only work with the `max_completion_tokens` parameter. <br><br>
+<sup>2</sup> Reasoning models will only work with the `max_completion_tokens` parameter when using the Chat Completions API. Use `max_output_tokens` with the Responses API. <br><br>
 <sup>3</sup> The latest reasoning models support system messages to make migration easier. You should not use both a developer message and a system message in the same API request.<br><br>
 <sup>4</sup> Access to the chain-of-thought reasoning summary is limited access only for `o3` & `o4-mini`.
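The distinction in footnote 2 can be illustrated with a short sketch. This is illustrative only: the endpoint, key, API version, and deployment name below are placeholders, and the `AzureOpenAI` client setup mirrors the examples later on this page.

```python
from openai import AzureOpenAI

# Placeholder configuration; substitute your own endpoint, key, and API version.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com/",
    api_key="YOUR-API-KEY",
    api_version="2025-04-01-preview",
)

# Chat Completions API: reasoning models accept max_completion_tokens, not max_tokens.
chat = client.chat.completions.create(
    model="o4-mini",  # replace with your model deployment name
    messages=[{"role": "user", "content": "Explain nucleus sampling in one sentence."}],
    max_completion_tokens=500,
)

# Responses API: the equivalent output cap is max_output_tokens.
resp = client.responses.create(
    model="o4-mini",  # replace with your model deployment name
    input="Explain nucleus sampling in one sentence.",
    max_output_tokens=500,
)
```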
@@ -98,7 +98,7 @@ For more information, we also recommend reading OpenAI's [GPT-5 prompting cookbo
-<sup>1</sup> Reasoning models will only work with the `max_completion_tokens` parameter. <br><br>
+<sup>1</sup> Reasoning models will only work with the `max_completion_tokens` parameter when using the Chat Completions API. Use `max_output_tokens` with the Responses API.<br><br>
 <sup>2</sup> The latest o<sup>*</sup> series models support system messages to make migration easier. When you use a system message with `o4-mini`, `o3`, `o3-mini`, and `o1`, it will be treated as a developer message. You should not use both a developer message and a system message in the same API request.
 <sup>3</sup> Access to the chain-of-thought reasoning summary is limited access only for `o3` & `o4-mini`.
 <sup>4</sup> Streaming for `o3` is limited access only.
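Footnote 2 in practice, as a minimal sketch (the deployment name is a placeholder and `client` is assumed to be an already configured `AzureOpenAI` client): send either a `developer` message or a `system` message, not both; the o-series models listed above treat a `system` message as a `developer` message.

```python
# Assumes `client` is an already configured AzureOpenAI client.
response = client.chat.completions.create(
    model="o4-mini",  # replace with your model deployment name
    messages=[
        # Use a developer message (or a system message, which these models
        # treat as a developer message), but not both in the same request.
        {"role": "developer", "content": "Answer concisely."},
        {"role": "user", "content": "What is nucleus sampling?"},
    ],
    max_completion_tokens=300,
)
print(response.choices[0].message.content)
```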
@@ -461,10 +461,13 @@ client = AzureOpenAI(
 response = client.responses.create(
     input="Tell me about the curious case of neural text degeneration",
-    model="o4-mini", # replace with model deployment name
+    model="gpt-5", # replace with model deployment name
     reasoning={
         "effort": "medium",
-        "summary": "detailed" # auto, concise, or detailed (currently only supported with o4-mini and o3)
+        "summary": "auto" # auto, concise, or detailed
+    },
+    text={
+        "verbosity": "low" # New with GPT-5 models
     }
 )
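A possible follow-up, sketched under the assumption that the `response` object follows the OpenAI Python SDK's Responses types: the reasoning summary and the visible answer come back as separate items in `response.output`, which matches the structure of the JSON response shown in the next hunk.

```python
# Print the reasoning summary (if one was generated) and the final answer.
for item in response.output:
    if item.type == "reasoning":
        for part in item.summary:
            print("Reasoning summary:", part.text)
    elif item.type == "message":
        for part in item.content:
            if part.type == "output_text":
                print("Answer:", part.text)
```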
@@ -478,47 +481,41 @@ curl -X POST "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses?ap
-          "text": "**Summarizing neural text degeneration**\n\nThe user's asking about \"The Curious Case of Neural Text Degeneration,\" a paper by Ari Holtzman et al. from 2020. It explains how certain decoding strategies produce repetitive and dull text. In contrast, methods like nucleus sampling yield more coherent and diverse outputs. The authors introduce metrics like surprisal and distinct-n for evaluation and suggest that maximum likelihood decoding often favors generic continuations, leading to loops and repetitive patterns in longer texts. They promote sampling from truncated distributions for improved text quality.",
-          "type": "summary_text"
-        },
-        {
-          "text": "**Explaining nucleus sampling**\n\nThe authors propose nucleus sampling, which captures a specified mass of the predictive distribution, improving metrics such as coherence and diversity. They identify a \"sudden drop\" phenomenon in token probabilities, where a few tokens dominate, leading to a long tail. By truncating this at a cumulative probability threshold, they aim to enhance text quality compared to top-k sampling. Their evaluations include human assessments, showing better results in terms of BLEU scores and distinct-n measures. Overall, they highlight how decoding strategies influence quality and recommend adaptive techniques for improved outcomes.",
-          "text": "Researchers first became aware that neural language models, when used to generate long stretches of text with standard “maximum‐likelihood” decoding (greedy search, beam search, etc.), often produce bland, repetitive or looping output. The 2020 paper “The Curious Case of Neural Text Degeneration” (Holtzman et al.) analyzes this failure mode and proposes a simple fix—nucleus (top‑p) sampling—that dramatically improves output quality.\n\n1. The Problem: Degeneration \n • With greedy or beam search, models tend to pick very high‑probability tokens over and over, leading to loops (“the the the…”) or generic, dull continuations. \n • Even sampling with a fixed top‑k (e.g. always sample from the 40 most likely tokens) can be suboptimal: if the model’s probability mass is skewed, k may be too small (overly repetitive) or too large (introducing incoherence).\n\n2. Why It Happens: Distributional Peakedness \n • At each time step the model’s predicted next‐token distribution often has one or two very high‑probability tokens, then a long tail of low‑probability tokens. \n • Maximum‐likelihood decoding zeroes in on the peak, collapsing diversity. \n • Uniform sampling over a large k allows low‑probability “wild” tokens, harming coherence.\n\n3. The Fix: Nucleus (Top‑p) Sampling \n • Rather than fixing k, dynamically truncate the distribution to the smallest set of tokens whose cumulative probability ≥ p (e.g. p=0.9). \n • Then renormalize and sample from that “nucleus.” \n • This keeps only the “plausible” mass and discards the improbable tail, adapting to each context.\n\n4. Empirical Findings \n • Automatic metrics (distinct‑n, repetition rates) and human evaluations show nucleus sampling yields more diverse, coherent, on‑topic text than greedy/beam or fixed top‑k. \n • It also outperforms simple temperature scaling (raising logits to 1/T) because it adapts to changes in the distribution’s shape.\n\n5. Takeaways for Practitioners \n • Don’t default to beam search for open-ended generation—its high likelihood doesn’t mean high quality. \n • Use nucleus sampling (p between 0.8 and 0.95) for a balance of diversity and coherence. \n • Monitor repetition and distinct‑n scores if you need automatic sanity checks.\n\nIn short, “neural text degeneration” is the tendency of likelihood‐maximizing decoders to produce dull or looping text. By recognizing that the shape of the model’s probability distribution varies wildly from step to step, nucleus sampling provides an elegant, adaptive way to maintain both coherence and diversity in generated text.",
-          "type": "output_text"
+          "text": "Neural text degeneration refers to the ways language models produce low-quality, repetitive, or vacuous text, especially when generating long outputs. It’s “curious” because models trained to imitate fluent text can still spiral into unnatural patterns. Key aspects:\n\n- Repetition and loops: The model repeats phrases or sentences (“I’m sorry, but...”), often due to high-confidence tokens reinforcing themselves.\n- Loss of specificity: Vague, generic, agreeable text that avoids concrete details.\n- Drift and contradiction: The output gradually departs from context or contradicts itself over long spans.\n- Exposure bias: During training, models see gold-standard prefixes; at inference, they must condition on their own imperfect outputs, compounding errors.\n- Likelihood vs. quality mismatch: Maximizing token-level likelihood doesn’t align with human preferences for diversity, coherence, or factuality.\n- Token over-optimization: Frequent, safe tokens get overused; certain phrases become attractors.\n- Entropy collapse: With greedy or low-temperature decoding, the distribution narrows too much, causing repetitive, low-entropy text.\n- Length and beam search issues: Larger beams or long generations can favor bland, repetitive sequences (the “likelihood trap”).\n\nCommon mitigations:\n\n- Decoding strategies:\n - Top-k, nucleus (top-p), or temperature sampling to keep sufficient entropy.\n - Typical sampling and locally typical sampling to avoid dull but high-probability tokens.\n - Repetition penalties, presence/frequency penalties, no-repeat n-grams.\n - Contrastive decoding (and variants like DoLa) to filter generic continuations.\n - Min/max length, stop sequences, and beam search with diversity/penalties.\n\n- Training and alignment:\n - RLHF/DPO to better match human preferences for non-repetitive, helpful text.\n - Supervised fine-tuning on high-quality, diverse data; instruction tuning.\n - Debiasing objectives (unlikelihood training) to penalize repetition and banned patterns.\n - Mixture-of-denoisers or latent planning to improve long-range coherence.\n\n- Architectural and planning aids:\n - Retrieval-augmented generation to ground outputs.\n - Tool use and structured prompting to constrain drift.\n - Memory and planning modules, hierarchical decoding, or sentence-level control.\n\n- Prompting tips:\n - Ask for concise answers, set token limits, and specify structure.\n - Provide concrete constraints or content to reduce generic filler.\n - Use “say nothing if uncertain” style instructions to avoid vacuity.\n\nRepresentative papers/terms to search:\n- Holtzman et al., “The Curious Case of Neural Text Degeneration” (2020): nucleus sampling.\n- Welleck et al., “Neural Text Degeneration with Unlikelihood Training.”\n- Li et al., “A Contrastive Framework for Decoding.”\n- Su et al., “DoLa: Decoding by Contrasting Layers.”\n- Meister et al., “Typical Decoding.”\n- Ouyang et al., “Training language models to follow instructions with human feedback.”\n\nIn short, degeneration arises from a mismatch between next-token likelihood and human preferences plus decoding choices; careful decoding, training objectives, and grounding help prevent it.",
+          "type": "output_text",
+          "logprobs": null
         }
       ],
       "role": "assistant",
@@ -531,32 +528,40 @@ curl -X POST "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses?ap
   "tool_choice": "auto",
   "tools": [],
   "top_p": 1.0,
+  "background": false,
   "max_output_tokens": null,
+  "max_tool_calls": null,
   "previous_response_id": null,
+  "prompt": null,
+  "prompt_cache_key": null,
   "reasoning": {
-    "effort": "medium",
+    "effort": "minimal",
     "generate_summary": null,
     "summary": "detailed"
   },
+  "safety_identifier": null,
+  "service_tier": "default",
   "status": "completed",
   "text": {
     "format": {
       "type": "text"
     }
   },
+  "top_logprobs": null,
   "truncation": "disabled",
   "usage": {
     "input_tokens": 16,
-    "output_tokens": 974,
-    "output_tokens_details": {
-      "reasoning_tokens": 384
-    },
-    "total_tokens": 990,
     "input_tokens_details": {
       "cached_tokens": 0
-    }
+    },
+    "output_tokens": 657,
+    "output_tokens_details": {
+      "reasoning_tokens": 0
+    },
+    "total_tokens": 673
   },
   "user": null,
+  "content_filters": null,
   "store": true
 }
 ```
@@ -565,6 +570,152 @@ curl -X POST "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1/responses?ap
 GPT-5 series reasoning models can call a new `custom_tool` called `lark_tool`. This tool is based on [Python lark](https://github.com/lark-parser/lark) and can be used for more flexible constraining of model output.

+### Responses API
+
+```json
+{
+  "model": "gpt-5-2025-08-07",
+  "input": "please calculate the area of a circle with radius equal to the number of 'r's in strawberry",

By default the `o3-mini` and `o1` models will not attempt to produce output that includes markdown formatting. A common use case where this behavior is undesirable is when you want the model to output code contained within a markdown code block. When the model generates output without markdown formatting you lose features like syntax highlighting, and copyable code blocks in interactive playground experiences. To override this new default behavior and encourage markdown inclusion in model responses, add the string `Formatting re-enabled` to the beginning of your developer message.
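For example, here is a minimal sketch (the deployment name is a placeholder and `client` is assumed to be an already configured `AzureOpenAI` client) of prepending `Formatting re-enabled` to a developer message so the model is more likely to return fenced, copyable code blocks:

```python
# Assumes `client` is an already configured AzureOpenAI client.
response = client.chat.completions.create(
    model="o3-mini",  # replace with your model deployment name
    messages=[
        {
            "role": "developer",
            # Prepending this string encourages markdown (e.g. fenced code blocks) in the reply.
            "content": "Formatting re-enabled - please enclose code blocks with appropriate markdown tags.",
        },
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    max_completion_tokens=500,
)
print(response.choices[0].message.content)
```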