Commit 0afc4be

docs(update-notes): format <think> reasoning tags for consistency in OpenAI-compatible providers
1 parent 76cbfd1 commit 0afc4be

2 files changed (+2, −2 lines)

2 files changed

+2
-2
lines changed

docs/update-notes/v3.30.0.mdx

Lines changed: 1 addition & 1 deletion

@@ -40,7 +40,7 @@ OpenRouter currently supports 7 embedding models, including the top‑ranking Qw
 ## Provider Updates
 
 * Chutes: dynamic/router provider so new models appear automatically; safer error logging and temperature applied only when supported ([#8980](https://github.com/RooCodeInc/Roo-Code/pull/8980))
-* OpenAI‑compatible providers: handle <think> reasoning tags in streaming for consistent reasoning chunk handling ([#8989](https://github.com/RooCodeInc/Roo-Code/pull/8989))
+* OpenAI‑compatible providers: handle `<think>` reasoning tags in streaming for consistent reasoning chunk handling ([#8989](https://github.com/RooCodeInc/Roo-Code/pull/8989))
 * GLM 4.6: capture reasoning content in base OpenAI‑compatible provider during streaming ([#8976](https://github.com/RooCodeInc/Roo-Code/pull/8976))
 * Fireworks: add GLM‑4.6 to the model dropdown for stronger coding performance and longer context (thanks mmealman!) ([#8754](https://github.com/RooCodeInc/Roo-Code/pull/8754))
 * Fireworks: add MiniMax M2 with 204.8K context and 4K output tokens; correct pricing metadata (thanks dmarkey!) ([#8962](https://github.com/RooCodeInc/Roo-Code/pull/8962))

docs/update-notes/v3.30.mdx

Lines changed: 1 addition & 1 deletion

@@ -40,7 +40,7 @@ OpenRouter currently supports 7 embedding models, including the top‑ranking Qw
 ## Provider Updates
 
 * Chutes: dynamic/router provider so new models appear automatically; safer error logging and temperature applied only when supported ([#8980](https://github.com/RooCodeInc/Roo-Code/pull/8980))
-* OpenAI‑compatible providers: handle <think> reasoning tags in streaming for consistent reasoning chunk handling ([#8989](https://github.com/RooCodeInc/Roo-Code/pull/8989))
+* OpenAI‑compatible providers: handle `<think>` reasoning tags in streaming for consistent reasoning chunk handling ([#8989](https://github.com/RooCodeInc/Roo-Code/pull/8989))
 * GLM 4.6: capture reasoning content in base OpenAI‑compatible provider during streaming ([#8976](https://github.com/RooCodeInc/Roo-Code/pull/8976))
 * Fireworks: add GLM‑4.6 to the model dropdown for stronger coding performance and longer context (thanks mmealman!) ([#8754](https://github.com/RooCodeInc/Roo-Code/pull/8754))
 * Fireworks: add MiniMax M2 with 204.8K context and 4K output tokens; correct pricing metadata (thanks dmarkey!) ([#8962](https://github.com/RooCodeInc/Roo-Code/pull/8962))
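For context on the `<think>` handling referenced in PR #8989, a minimal sketch of what splitting a model's output into reasoning and text chunks around `<think>`/`</think>` tags can look like. The function name `parseThinkStream` and the chunk shape are illustrative assumptions, not Roo-Code's actual API; a real streaming parser would additionally buffer tag fragments that arrive split across deltas.

```typescript
// Hypothetical illustration (not Roo-Code's implementation): classify model
// output into "reasoning" (inside <think>...</think>) and "text" chunks.
type Chunk = { type: "reasoning" | "text"; text: string };

function parseThinkStream(deltas: string[]): Chunk[] {
  const chunks: Chunk[] = [];
  // Simplification: concatenate all deltas first. A true streaming parser
  // must hold back partial tags (e.g. a delta ending in "<thi") in a buffer.
  let buffer = deltas.join("");
  let inThink = false;

  while (buffer.length > 0) {
    // Look for the tag that would end the current region.
    const tag = inThink ? "</think>" : "<think>";
    const idx = buffer.indexOf(tag);
    if (idx === -1) {
      // No more tags: emit the remainder as one chunk of the current type.
      chunks.push({ type: inThink ? "reasoning" : "text", text: buffer });
      break;
    }
    if (idx > 0) {
      chunks.push({ type: inThink ? "reasoning" : "text", text: buffer.slice(0, idx) });
    }
    buffer = buffer.slice(idx + tag.length);
    inThink = !inThink; // crossing a tag toggles the region type
  }
  return chunks;
}
```

With input deltas `["<think>plan steps</think>", "the answer"]`, this yields a `reasoning` chunk `"plan steps"` followed by a `text` chunk `"the answer"`, which is the kind of consistent chunking the changelog entry describes.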
