fix(llm): cap auto-detected max_output_tokens when it fills the entire context window #12436

Triggered via: pull request, April 7, 2026 19:04
Status: Success
Total duration: 2m 56s
Artifacts: 2
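The fix named in the PR title concerns capping an auto-detected max_output_tokens value when it would otherwise consume the model's entire context window, leaving no room for the prompt. The source page does not show the implementation, so the following is a minimal sketch of that capping logic; the function name, parameters, and safety margin are illustrative assumptions, not the PR's actual code.

```python
def cap_max_output_tokens(
    detected_max_output: int,
    context_window: int,
    prompt_tokens: int,
    safety_margin: int = 64,
) -> int:
    """Cap an auto-detected max_output_tokens so prompt + output budget
    fit inside the context window, with a small safety margin.

    (Hypothetical sketch; names and margin are assumptions.)
    """
    available = context_window - prompt_tokens - safety_margin
    if available <= 0:
        raise ValueError("prompt already fills the context window")
    return min(detected_max_output, available)


# A model that reports max_output_tokens equal to its full 8192-token
# context window gets capped once the 1000-token prompt is accounted for:
# 8192 - 1000 - 64 = 7128.
print(cap_max_output_tokens(8192, 8192, 1000))  # → 7128
```

Without such a cap, sending max_output_tokens equal to the full window alongside a non-empty prompt typically triggers a provider-side context-length error.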

tests.yml

on: pull_request

Artifacts

Produced during runtime
Name            Size     Digest
coverage-cross  24.5 KB  sha256:ba7bfb7aa4b34ff5159c14019d1a41a0321a6aaaebbd50ed49b29be2de6051ec
coverage-sdk    20.5 KB  sha256:6a2bc5af03b0bfd57538087ba73c6ce48f854b03625797641cfa13563025e6d4