gpt-oss: implement harmony parsing #15181
base: master
Conversation
Thanks. It finally made it much easier to use tools in Cherry Studio. And it generates thinking boxes properly. |
With the PR: It's better, easily more usable, but there might be some issues around tool calling still. |
@dagbs try setting function calling to |
Force-pushed from d65e556 to 981886f
I tried this PR yesterday and compared it to #15158 (+ my own fixes on top of that PR); there were a couple of issues with this PR (that I was going to share this morning), but since da67163 was pushed, it seems to finally work better than that PR. In my (albeit limited) testing, tool calling and its formatting are working a lot better. Thanks a ton for this patch @aldehir! All the unit tests pass as well, compared to the other PR, and the code organization at a glance seems better too, though granted I'm no C++ expert, just a generalist. |
Hmm, seems to still be breaking sometimes; I tried to understand why but to no avail. Most of the time it works perfectly fine, but some edge case seems to break it. Running da67163 right now. If I repeatedly run the same weather example maybe 10 times, I end up getting a badly parsed response (on llama.cpp's side) maybe once. A good run looks like this:
ChatCompletionResponse {
choices: [
Choice {
message: ResponseMessage {
content: Some(
"Here are the current conditions for the three cities, sorted by temperature (highest\u{202f}→\u{202f}lowest):\n\n- **Barcelona**: ☀\u{fe0f}\u{202f}+25\u{202f}°C \n- **Lima**: ⛅\u{fe0f}\u{202f}+16\u{202f}°C \n- **Stockholm**: ☀\u{fe0f}\u{202f}+13\u{202f}°C \n\n*(Temperatures are taken from the latest weather data at the time of the query.)*",
),
reasoning_content: Some(
"The user asks: \"What is the current weather in Barcelona, Stockholm, and Lima? And also, display them in a list sorted by their temperatures, highest first.\"\n\nWe have fetched weather for each location via the get_weather function. The function returns a JSON string with \"result\": \"Barcelona: ☀\u{fe0f} +25°C\\n\". Similarly for Stockholm: \"Stockholm: ☀\u{fe0f} +13°C\\n\". Lima: \"Lima: ⛅\u{fe0f} +16°C\\n\". We need to parse these results, extract the temperature values, sort them descending, and display them in a list.\n\nWe need to produce a final answer that includes the weather for each location sorted by temperature highest first. The user wants a list sorted by temperature, highest first. So we need to sort: Barcelona +25°C, Lima +16°C, Stockholm +13°C.\n\nThus the sorted list: Barcelona: ☀\u{fe0f} +25°C, Lima: ⛅\u{fe0f} +16°C, Stockholm: ☀\u{fe0f} +13°C.\n\nWe should present them as a list, maybe bullet points.\n\nWe need to ensure we include the weather icons and temperature values as given.\n\nThus answer: \n\n- Barcelona: ☀\u{fe0f} +25°C\n- Lima: ⛅\u{fe0f} +16°C\n- Stockholm: ☀\u{fe0f} +13°C\n\nWe could also include the original strings.\n\nThus final answer: a list sorted by temperature highest first.\n\nWe should also note that the data is from the function calls.\n\nThus answer: \"Here are the current weather conditions for the three cities, sorted by temperature (highest first): ...\"\n\nWe should also mention that the temperatures are approximate and may change.\n\nThus final answer.",
),
tool_calls: [],
},
},
],
}
Meanwhile, a bad run ends up with:
ChatCompletionResponse {
choices: [
Choice {
message: ResponseMessage {
content: Some(
" to=functions.get_weather\u{a0}\u{200b}\u{200b}\u{a0}\u{a0}\n\n\n\n",
),
reasoning_content: None,
tool_calls: [],
},
},
],
}
Full logs from bad run:
Seems to happen more often when |
@victorb maybe use temperature=0 and/or top-k=1? If inference is the issue, making it deterministic would fix it. |
Running with these inference parameters, for example:
{
temperature: 0.0,
top_p: 1.0,
min_p: 0.0,
top_k: 0,
samplers: [
"top_k",
"top_p",
"min_p",
"temperature",
],
}
Seems to correctly give me deterministic responses: once I get a good response it always works well, but the ones that break always break, so I guess it's useful for testing at the very least. Here's one example of broken parsing I'm currently getting, even with these settings:
ChatCompletionResponse {
choices: [
Choice {
message: ResponseMessage {
content: Some(
" to=function\u{a0}\u{a0}...",
),
reasoning_content: None,
tool_calls: [],
},
},
],
}
Tried setting |
@victorb thank you for that extensive testing. I can't seem to reproduce this on
That will help me better understand the problem. It appears the model is emitting unicode space characters, but I wasn't aware the |
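For anyone chasing the same symptom, a minimal Python sketch (assuming you have the raw content string from a bad response; the sample string below mirrors the bad run shown earlier) makes the hidden unicode spaces visible:
import unicodedata

# Content from a "bad" run -- the escapes correspond to NBSP and zero-width spaces.
content = " to=functions.get_weather\u00a0\u200b\u200b\u00a0\u00a0\n\n\n\n"

# Print the codepoint and name of every non-ASCII character so the invisible ones stand out.
for ch in content:
    if ord(ch) > 0x7F:
        print(f"U+{ord(ch):04X} {unicodedata.name(ch, 'UNKNOWN')}")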
I managed to get Looks like I missed a scenario where the model outputs the recipient (
I have yet to see the I updated the parsing and grammar rule to handle this. It should at least parse the tool calls now. I found performance degrades by the third call. I get queries to "Lima??", "Lima?", or some variation with garbage at the end. However, if I pass Give cf9a0d6 a shot. |
For those interested, I implemented a basic cache for reasoning content in my fork aldehir#1. Without prior reasoning content for tool calls, |
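A rough sketch of what passing prior reasoning content back in a tool-call turn could look like on the wire, assuming an OpenAI-style request body (the reasoning_content field name follows the responses shown above; the message contents are made up):
# Hypothetical follow-up request that keeps the model's earlier reasoning
# alongside the tool call and its result, instead of dropping it.
followup_request = {
    "model": "gpt-oss-20b",
    "messages": [
        {"role": "user", "content": "What is the weather in Lima?"},
        {
            "role": "assistant",
            "content": None,
            # reasoning from the previous turn, which most clients currently discard
            "reasoning_content": "The user wants the weather in Lima; call get_weather.",
            "tool_calls": [{
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_weather", "arguments": "{\"location\": \"Lima\"}"},
            }],
        },
        {"role": "tool", "tool_call_id": "call_1", "content": "Lima: ⛅ +16°C"},
    ],
}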
Awesome @aldehir, did a bunch of testing yesterday with 20b and 120b and tool parsing didn't fail once! 🎉 I do see the same inference quality degradation after a few messages, mainly hallucinations for the tool arguments (calling get_weather("...") or get_weather("?") for example) with both 20b and 120b. However, trying out the Overall, seems solid to me now. Since cf9a0d6, the parsing of Harmony seems complete in all the examples I've tried to run, everything goes into the right place and tool calls/responses all look correct now. |
@aldehir using your
If If I wonder if this is related to the grammar generation for the tool calls which is somehow constraining it to always use the first tool. BTW this is the first model I've tried with llama-server that can mix reasoning with tool calls, so it is definitely in the right direction! |
@tarruda good catch. I forgot to group up the tool calls when I reworked the grammar to account for the recipient in the role. I've updated both this PR and the one in my fork. |
Thanks a lot, seems to be working perfectly now! |
I've also been playing with calling tools in its CoT and confirm it is working correctly. For example, if I provide this tool to the LLM:
async def arithmetic(code: str) -> str:
    """
    Evaluates arithmetic expression and returns the result.
    ANY arithmetic questions (no matter how trivial) should make use of this tool in your chain of thought. Always return this tool's response even if it is wrong!
    """
    # note: hardcoded expression, ignores `code`
    return f"{eval('5 + 5')}"
Then it will always use it during reasoning. There's something I'm wondering though: looking at the template, I can see it tells the LLM about 2 possible builtin tools it can use in its CoT (`browser` and `python`). |
@tarruda those tools cause the model to produce type constraints other than JSON. I believe the Python one produces |
So I think they need their own grammar rules. From what I can tell, it seems those tools are intended to be resolved internally and not sent back to the user. For example, the Python one mentions a I suppose it could process the builtins, generate tool calls, and any interested parties can implement middleware to intercept the calls. |
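A hedged sketch of that middleware idea -- the builtin tool names (browser.search, browser.open, browser.find, python) come from the template shown later in this thread, while the call shape and resolvers are assumptions:
# Resolve builtin tool calls internally; anything else is forwarded to the client unchanged.
def run_browser(arguments: str) -> str:
    return "stub: browser result"  # placeholder resolver

def run_python(arguments: str) -> str:
    return "stub: python result"   # placeholder resolver

BUILTIN_RESOLVERS = {
    "browser.search": run_browser,
    "browser.open": run_browser,
    "browser.find": run_browser,
    "python": run_python,
}

def intercept(tool_call: dict) -> str | None:
    """Return a tool result for builtins, or None to pass the call through to the client."""
    name = tool_call["function"]["name"]
    resolver = BUILTIN_RESOLVERS.get(name)
    return resolver(tool_call["function"]["arguments"]) if resolver else None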
@aldehir I am experimenting with your Does this mean I need to ditch the GGUF embedded template and use |
@chaserhkj that's because open-webui injects the reasoning itself into the content. I added a fix in my fork to address that. But for open-webui, this PR should be enough. I don't think the reasoning cache has gotten enough use for me to recommend it, I simply wanted to show that the model performs better when you pass along its reasoning in tool calls. If you'd like to keep using it, feel free to resume the conversation there. Like I said, for open-webui it shouldn't be necessary. |
Yes, we are expected to drop the analysis channel when the last channel ends with I am trying out gpt-oss 120b right now, and the lack of harmony response format parsing is one of the biggest obstacles to using this model. |
Hi everyone, are there any Docker image releases that support this feature? |
@createthis one thing to consider is that this model was most likely trained to use the tools in https://github.com/openai/codex. The diff looks awfully similar to their I'm glad it helps. |
@aldehir yup. That's certainly it: https://github.com/openai/codex/blob/5f8984aa7d550955eb5f894d5c29adc2b9901da2/codex-rs/apply-patch/apply_patch_tool_instructions.md Cool, I may make an adapter on my end. It successfully completed a moderately difficult agentic programming task. Here's the startup command I used:
./build/bin/llama-server \
--model /data/gpt-oss-120b-GGUF/ggml-org/gpt-oss-120b-mxfp4-00001-of-00003.gguf \
--alias gpt-oss-120b-mxfp4 \
--no-webui \
--numa numactl \
--threads 32 \
--ctx-size 131072 \
--n-gpu-layers 37 \
-ot "exps.*\.blk.*\.ffn_.*=CUDA0" \
--no-op-offload \
-ub 4096 -b 4096 \
--seed 3407 \
--temp 0.6 \
--top-p 1.0 \
--log-colors \
--flash-attn \
--host 0.0.0.0 \
--jinja \
--chat-template-kwargs '{"reasoning_effort": "high"}' \
--reasoning-format none \
--port 11434
Performance is excellent. I look forward to using this more in the future. |
Force-pushed from 6343a7f to 04e1626
Reverted thinking tags, and set the tokens back to user-defined. ref: #15230 (comment) |
Is this PR done? Any update? |
I am comparing the chat template from this branch with the current latest version in https://huggingface.co/openai/gpt-oss-20b/blob/main/chat_template.jinja. Should we update it to match?
diff models/templates/openai-gpt-oss-120b.jinja ~/Downloads/chat_template.txt
87,88c87
< {{- "{
< " }}
---
> {{- "{\n" }}
110,115c109,110
< {{- "## " + namespace_name + "
<
< " }}
< {{- "namespace " + namespace_name + " {
<
< " }}
---
> {{- "## " + namespace_name + "\n\n" }}
> {{- "namespace " + namespace_name + " {\n\n" }}
118,119c113
< {{- "// " + tool.description + "
< " }}
---
> {{- "// " + tool.description + "\n" }}
122,123c116
< {{- "(_: {
< " }}
---
> {{- "(_: {\n" }}
126,127c119
< {{- "// " + param_spec.description + "
< " }}
---
> {{- "// " + param_spec.description + "\n" }}
145,146c137
< {{- ",
< " }}
---
> {{- ",\n" }}
148,149c139
< {{- "
< " }}
---
> {{- ",\n" }}
152,154c142
< {{- "}) => any;
<
< " }}
---
> {{- "}) => any;\n\n" }}
156,158c144
< {{- "() => any;
<
< " }}
---
> {{- "() => any;\n\n" }}
166,239c152,185
< {{- "## browser
<
< " }}
< {{- "// Tool for browsing.
< " }}
< {{- "// The `cursor` appears in brackets before each browsing display: `[{cursor}]`.
< " }}
< {{- "// Cite information from the tool using the following format:
< " }}
< {{- "// `【{cursor}†L{line_start}(-L{line_end})?】`, for example: `【6†L9-L11】` or `【8†L3】`.
< " }}
< {{- "// Do not quote more than 10 words directly from the tool output.
< " }}
< {{- "// sources=web (default: web)
< " }}
< {{- "namespace browser {
<
< " }}
< {{- "// Searches for information related to `query` and displays `topn` results.
< " }}
< {{- "type search = (_: {
< " }}
< {{- "query: string,
< " }}
< {{- "topn?: number, // default: 10
< " }}
< {{- "source?: string,
< " }}
< {{- "}) => any;
<
< " }}
< {{- "// Opens the link `id` from the page indicated by `cursor` starting at line number `loc`, showing `num_lines` lines.
< " }}
< {{- "// Valid link ids are displayed with the formatting: `【{id}†.*】`.
< " }}
< {{- "// If `cursor` is not provided, the most recent page is implied.
< " }}
< {{- "// If `id` is a string, it is treated as a fully qualified URL associated with `source`.
< " }}
< {{- "// If `loc` is not provided, the viewport will be positioned at the beginning of the document or centered on the most relevant passage, if available.
< " }}
< {{- "// Use this function without `id` to scroll to a new location of an opened page.
< " }}
< {{- "type open = (_: {
< " }}
< {{- "id?: number | string, // default: -1
< " }}
< {{- "cursor?: number, // default: -1
< " }}
< {{- "loc?: number, // default: -1
< " }}
< {{- "num_lines?: number, // default: -1
< " }}
< {{- "view_source?: boolean, // default: false
< " }}
< {{- "source?: string,
< " }}
< {{- "}) => any;
<
< " }}
< {{- "// Finds exact matches of `pattern` in the current page, or the page given by `cursor`.
< " }}
< {{- "type find = (_: {
< " }}
< {{- "pattern: string,
< " }}
< {{- "cursor?: number, // default: -1
< " }}
< {{- "}) => any;
<
< " }}
< {{- "} // namespace browser
<
< " }}
---
> {{- "## browser\n\n" }}
> {{- "// Tool for browsing.\n" }}
> {{- "// The `cursor` appears in brackets before each browsing display: `[{cursor}]`.\n" }}
> {{- "// Cite information from the tool using the following format:\n" }}
> {{- "// `【{cursor}†L{line_start}(-L{line_end})?】`, for example: `【6†L9-L11】` or `【8†L3】`.\n" }}
> {{- "// Do not quote more than 10 words directly from the tool output.\n" }}
> {{- "// sources=web (default: web)\n" }}
> {{- "namespace browser {\n\n" }}
> {{- "// Searches for information related to `query` and displays `topn` results.\n" }}
> {{- "type search = (_: {\n" }}
> {{- "query: string,\n" }}
> {{- "topn?: number, // default: 10\n" }}
> {{- "source?: string,\n" }}
> {{- "}) => any;\n\n" }}
> {{- "// Opens the link `id` from the page indicated by `cursor` starting at line number `loc`, showing `num_lines` lines.\n" }}
> {{- "// Valid link ids are displayed with the formatting: `【{id}†.*】`.\n" }}
> {{- "// If `cursor` is not provided, the most recent page is implied.\n" }}
> {{- "// If `id` is a string, it is treated as a fully qualified URL associated with `source`.\n" }}
> {{- "// If `loc` is not provided, the viewport will be positioned at the beginning of the document or centered on the most relevant passage, if available.\n" }}
> {{- "// Use this function without `id` to scroll to a new location of an opened page.\n" }}
> {{- "type open = (_: {\n" }}
> {{- "id?: number | string, // default: -1\n" }}
> {{- "cursor?: number, // default: -1\n" }}
> {{- "loc?: number, // default: -1\n" }}
> {{- "num_lines?: number, // default: -1\n" }}
> {{- "view_source?: boolean, // default: false\n" }}
> {{- "source?: string,\n" }}
> {{- "}) => any;\n\n" }}
> {{- "// Finds exact matches of `pattern` in the current page, or the page given by `cursor`.\n" }}
> {{- "type find = (_: {\n" }}
> {{- "pattern: string,\n" }}
> {{- "cursor?: number, // default: -1\n" }}
> {{- "}) => any;\n\n" }}
> {{- "} // namespace browser\n\n" }}
243,251c189,191
< {{- "## python
<
< " }}
< {{- "Use this tool to execute Python code in your chain of thought. The code will not be shown to the user. This tool should be used for internal reasoning, but not for code that is intended to be visible to the user (e.g. when creating plots, tables, or files).
<
< " }}
< {{- "When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 120.0 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is UNKNOWN. Depends on the cluster.
<
< " }}
---
> {{- "## python\n\n" }}
> {{- "Use this tool to execute Python code in your chain of thought. The code will not be shown to the user. This tool should be used for internal reasoning, but not for code that is intended to be visible to the user (e.g. when creating plots, tables, or files).\n\n" }}
> {{- "When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 120.0 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is UNKNOWN. Depends on the cluster.\n\n" }}
260,266c200,202
< {{- model_identity + "
< " }}
< {{- "Knowledge cutoff: 2024-06
< " }}
< {{- "Current date: " + strftime_now("%Y-%m-%d") + "
<
< " }}
---
> {{- model_identity + "\n" }}
> {{- "Knowledge cutoff: 2024-06\n" }}
> {{- "Current date: " + strftime_now("%Y-%m-%d") + "\n\n" }}
270,272c206
< {{- "Reasoning: " + reasoning_effort + "
<
< " }}
---
> {{- "Reasoning: " + reasoning_effort + "\n\n" }}
274,276c208
< {{- "# Tools
<
< " }}
---
> {{- "# Tools\n\n" }}
289,290c221
< {{- "
< Calls to these tools must go to the commentary channel: 'functions'." }}
---
> {{- "\nCalls to these tools must go to the commentary channel: 'functions'." }}
315,317c246
< {{- "# Instructions
<
< " }}
---
> {{- "# Instructions\n\n" }}
318a248
> {{- "\n\n" }}
321,326c251
< {{- "
<
< " }}
< {{- "# Tools
<
< " }}
---
> {{- "# Tools\n\n" }}
348a274,282
> {#- We need very careful handling here - we want to drop the tool call analysis message if the model #}
> {#- has output a later <|final|> message, but otherwise we want to retain it. This is the only case #}
> {#- when we render CoT/analysis messages in inference. #}
> {%- set future_final_message = namespace(found=false) %}
> {%- for future_message in loop_messages[loop.index:] %}
> {%- if future_message.role == 'assistant' and "tool_calls" not in future_message %}
> {%- set future_final_message.found = true %}
> {%- endif %}
> {%- endfor %}
357c291
< {%- elif message.content %}
---
> {%- elif message.content and not future_final_message.found %}
359c293
< {%- elif message.thinking %}
---
> {%- elif message.thinking and not future_final_message.found %} |
So after the discussion in #15082 (comment) I realize that something is still not OK. According to OpenAI, we should drop reasoning tokens in follow-up requests: https://cookbook.openai.com/articles/openai-harmony#handling-reasoning-output-in-subsequent-sampling. However, a simple test demonstrates this is not currently happening with the WebUI: the assistant remembers what it thought about. Looking at the logs, the prompt that we create for the second request is both malformed and includes the thinking tokens of the previous answer:
@aldehir @ngxson Any suggestions? Edit: apart from including the thinking tokens, this section appears to be malformed and causes extra unnecessary reprocessing of the context:
|
Since commit 6343a7 yesterday, I notice these harmony tags in the output:
Agentic tasks are still succeeding; it's just ugly. |
On external frontends (Cherry-studio connected to OAI API of llama.cpp), with the official OAI jinja (slight modification to system prompt), I get normal behavior.
It could be an issue with the llama.cpp server webui's option "Exclude thought process when sending requests to API (Recommended for DeepSeek-R1)". |
I can take a look later today. I wasn't aware of that option in the webui, so I never tested it. This does work as expected when @createthis I found the think tags to cause some issues, so I reverted that; it was also recommended in light of the new webui. I believe the path forward (someone can correct me if I'm wrong) is that the clients are responsible for sending |
The option is enabled by default. Generally, reasoning models require clients to drop the thinking tokens from previous messages. This is also the case with In some cases it is useful to support the option to not drop the tokens from the context - this makes the prompt "continuous" and more friendly for reusing the cache. But the more important thing for now is to support the default case of dropping the thinking tokens. |
In case this is useful:
diff --git a/tools/server/webui/src/utils/misc.ts b/tools/server/webui/src/utils/misc.ts
index d60a68cd2..564d63354 100644
--- a/tools/server/webui/src/utils/misc.ts
+++ b/tools/server/webui/src/utils/misc.ts
@@ -118,20 +118,59 @@ export function normalizeMsgsForAPI(messages: Readonly<Message[]>) {
/**
* recommended for DeepsSeek-R1, filter out content between <think> and </think> tags
*/
-export function filterThoughtFromMsgs(messages: APIMessage[]) {
+// -------------------------------------------------------------
+// Helper – removes every thought block, regardless of format
+// -------------------------------------------------------------
+/**
+ * Strip all “thought” sections from a message string.
+ *
+ * Supported formats:
+ * <think> … </think>
+ * <|channel|>analysis<|message|> … <|end|>
+ *
+ * If the input is `null` the function returns `null` unchanged.
+ */
+function stripThoughts(content: string | null): string | null {
+ if (content === null) return null;
+
+ // Opening tags: <think> OR <|channel|>analysis<|message|>
+ const OPEN = /<think>|<\|channel\|>analysis<\|message\|>/g;
+
+ // Closing tags: </think> OR <|end|>
+ const CLOSE = /<\/think>|<\|end\|>/g;
+
+ // Build a single regex that matches an opening tag, anything (lazy),
+ // then a closing tag.
+ const THOUGHT_BLOCK = new RegExp(
+ `(?:${OPEN.source})[\\s\\S]*?(?:${CLOSE.source})`,
+ 'g'
+ );
+
+ // Remove every thought block and trim the result.
+ return content.replace(THOUGHT_BLOCK, '').trim();
+}
+
+// -------------------------------------------------------------
+// Public utility – filter thought from an array of messages
+// -------------------------------------------------------------
+export function filterThoughtFromMsgs(messages: APIMessage[]): APIMessage[] {
console.debug({ messages });
+
return messages.map((msg) => {
+ // Non‑assistant messages never contain thoughts, return them untouched.
if (msg.role !== 'assistant') {
return msg;
}
- // assistant message is always a string
- const contentStr = msg.content as string;
+
+ // `msg.content` is guaranteed to be a string for assistants,
+ // but we stay defensive and accept `null` as well.
+ const originalContent = msg.content as string | null;
+ const cleanedContent = stripThoughts(originalContent);
+
+ // Preserve every other field (name, function_call, …) unchanged.
return {
- role: msg.role,
- content:
- msg.role === 'assistant'
- ? contentStr.split('</think>').at(-1)!.trim()
- : contentStr,
+ ...msg,
+ content: cleanedContent,
} as APIMessage;
});
}
Feel free to ignore - I don't usually write TypeScript, so I don't know if this makes sense. P.S. seeing glimpses of |
@ggerganov I've done this in my client as well. Basically, the models are trained to not have reasoning content passed to them in the message history. But if you put the reasoning content in the "content" field in Back in the day, when reasoning_content was not supported, models had Jinja templates that took care of this by actually stripping the thinking parts (I know the Qwen jinja template had that). But OSS doesn't seem to have that (or maybe it's not working correctly due to some quirks). (Actually, I went and checked, and I think I know what's going on: OSS doesn't clear that up, but instead throws this exception:
{{- raise_exception("You have passed a message containing <|channel|> tags in the content field. Instead of doing this, you should pass analysis messages (the string between '<|message|>' and '<|end|>') in the 'thinking' field, and final messages (the string between '<|message|>' and '<|end|>') in the 'content' field.") }}
but you hotfixed the exception out in fba5c0d, which is why this behavior is happening.) |
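Put differently, the shape that exception asks for is roughly this (a sketch based on the error text above; the strings are made up):
# Analysis text goes in "thinking", final text in "content" -- never raw <|channel|> tags in "content".
prior_assistant_turn = {
    "role": "assistant",
    "thinking": "The user asks for the weather in Lima; I should call get_weather.",  # analysis channel
    "content": "Lima is currently ⛅ +16°C.",                                          # final channel
}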
This is my attempt at implementing a harmony parser for gpt-oss.
Implementation
- Reasoning formats `auto` and `none` are supported. When `none`, `<|channel|>analysis<|message|>{reasoning content}<|end|>` is added to the content.
- When `parse_tool_calls == false`, tool calls are added to the content verbatim--which aligns with other implementations.
Remaining Work
- The model performs better in tool calling when its prior `reasoning_content` is passed back. However, none of the clients I tested send it. A simple workaround is to use `reasoning_format = none`, or add the reasoning to the content in tool calls.
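For illustration, roughly how the two reasoning formats surface in a response message (a sketch; the text is made up and the exact fields depend on the client request):
# reasoning_format = auto: reasoning is split out into reasoning_content.
auto_response_message = {
    "role": "assistant",
    "reasoning_content": "The user asked for the weather in Lima...",
    "content": "Lima is currently ⛅ +16°C.",
}

# reasoning_format = none: the harmony analysis markup is kept inline in content.
none_response_message = {
    "role": "assistant",
    "content": "<|channel|>analysis<|message|>The user asked for the weather in Lima...<|end|>"
               "Lima is currently ⛅ +16°C.",
}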