
Commit 6151888

fix: stabilize thinking streams, multimodal parsing, and token accounting (#20)
* fix: stabilize multimodal image compatibility across OpenCode flows

  Advertise vision-capable metadata in /v1/models and make model matching deterministic so OpenCode does not downgrade image support or route 4.6 models incorrectly. Expand request translation to accept OpenCode/OpenAI attachment shapes, sanitize [Image N] placeholders safely, keep image-only follow-up turns non-empty, and improve token accounting so base64 image bytes no longer inflate prompt token usage and trigger premature compaction.

* fix: deduplicate thinking streams and trim injected prompt noise

* fix: align /v1/messages thinking blocks and message_start usage

* fix: reduce repetitive thinking across tool turns

  Select a single reasoning stream source, prevent chunk replay, and preserve structured tool-loop context so the model keeps continuity instead of re-planning each turn.

* fix: unify token counting on existing API endpoints

  Compute usage deterministically on /v1/messages and /v1/chat/completions even when upstream omits tokenUsage.

  - remove roo-only token path and keep behavior on existing endpoints
  - add proxy/token_estimator.go with shared Claude/OpenAI estimators (input/system/messages/tools + output/thinking/tool calls)
  - wire stream/non-stream handlers to use estimator-derived input/output usage
  - update /v1/messages/count_tokens to reuse the same estimator
  - keep robust upstream usage parsing/normalization in proxy/kiro.go while dropping parser-level estimate fallback

  Why: direct upstream tests show metering/context events frequently arrive without tokenUsage in this environment; this made usage zero or inconsistent. Local deterministic accounting keeps reported usage stable and explicit.
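The deterministic local accounting described above can be illustrated with a small sketch. This is not the code from proxy/token_estimator.go; it assumes a rough characters-per-token heuristic, and the names `estimateTokens`, `estimateInputTokens`, and the per-message overhead constant are all hypothetical:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// estimateTokens approximates a token count from text length.
// A ~4 characters-per-token ratio is a common heuristic for English
// text; the repository's actual estimator may use a different scheme.
func estimateTokens(text string) int {
	n := utf8.RuneCountInString(text)
	if n == 0 {
		return 0
	}
	return (n + 3) / 4 // round up
}

// message is a minimal stand-in for a chat message.
type message struct {
	Role    string
	Content string
}

// estimateInputTokens sums the system prompt and all messages, plus a
// small assumed per-message framing overhead, mirroring the
// input/system/messages split mentioned in the commit message.
func estimateInputTokens(system string, msgs []message) int {
	total := estimateTokens(system)
	for _, m := range msgs {
		total += estimateTokens(m.Content) + 4 // framing overhead (assumed)
	}
	return total
}

func main() {
	msgs := []message{
		{Role: "user", Content: "Describe this image."},
	}
	fmt.Println(estimateInputTokens("You are a helpful assistant.", msgs))
}
```

Because the estimate depends only on the request itself, it produces the same usage figures whether or not upstream metering events arrive, which is the stability property the commit is after.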
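The base64 fix implies that inline image payloads are excluded before prompt tokens are estimated. A hedged sketch of one way to do that, assuming data-URL style attachments embedded in text; `stripImagePayloads` is an illustrative name, not the project's API:

```go
package main

import (
	"fmt"
	"regexp"
)

// dataURLRe matches inline base64 image payloads of the form
// data:image/png;base64,iVBORw0...
var dataURLRe = regexp.MustCompile(`data:image/[a-zA-Z+.-]+;base64,[A-Za-z0-9+/=]+`)

// stripImagePayloads replaces base64 image bytes with a short
// placeholder so megabytes of encoded pixels do not inflate the
// estimated prompt token count and trigger premature compaction.
func stripImagePayloads(text string) string {
	return dataURLRe.ReplaceAllString(text, "[image]")
}

func main() {
	prompt := "Here is the screenshot: data:image/png;base64,iVBORw0KGgoAAAANSUhEUg=="
	fmt.Println(stripImagePayloads(prompt))
}
```

Replacing the payload with a fixed placeholder (rather than deleting it) keeps image-bearing turns non-empty, which matches the commit's note about image-only follow-up turns.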
1 parent f404994 commit 6151888

File tree

7 files changed (+1413, −326 lines)


0 commit comments
