This repository was archived by the owner on Jul 22, 2025. It is now read-only.

Commit cf86d27

FEATURE: improve o3-mini support (#1106)
* DEV: raise timeout for reasoning LLMs
* FIX: use id to identify LLMs, not model_name. model_name is not unique; in the case of reasoning models you may configure the same LLM multiple times using different reasoning levels.
1 parent 381a271 commit cf86d27

File tree

2 files changed: +7 -3


assets/javascripts/discourse/connectors/composer-fields/persona-llm-selector.gjs

Lines changed: 2 additions & 2 deletions
@@ -152,7 +152,7 @@ export default class BotSelector extends Component {
   resetTargetRecipients() {
     if (this.allowLLMSelector) {
       const botUsername = this.currentUser.ai_enabled_chat_bots.find(
-        (bot) => bot.model_name === this.llm
+        (bot) => bot.id === this.llm
       ).username;
       this.composer.set("targetRecipients", botUsername);
     } else {
@@ -170,7 +170,7 @@ export default class BotSelector extends Component {

     return availableBots.map((bot) => {
       return {
-        id: bot.model_name,
+        id: bot.id,
         name: bot.display_name,
       };
     });
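The motivation for this change can be sketched standalone. The snippet below uses hypothetical bot data (the field names mirror the `ai_enabled_chat_bots` entries in the diff, but the values are invented) to show why a `find` keyed on `model_name` picks the wrong configuration when the same model is configured twice:

```typescript
// Hypothetical bot list: the same model_name configured twice with
// different reasoning levels, as the commit message describes.
interface ChatBot {
  id: number;
  model_name: string;
  username: string;
  display_name: string;
}

const bots: ChatBot[] = [
  { id: 1, model_name: "o3-mini", username: "bot_low", display_name: "o3-mini (low)" },
  { id: 2, model_name: "o3-mini", username: "bot_high", display_name: "o3-mini (high)" },
];

// find() returns the FIRST match, so a model_name lookup can never
// reach the second configuration.
const byName = bots.find((bot) => bot.model_name === "o3-mini");

// An id lookup is unambiguous.
const byId = bots.find((bot) => bot.id === 2);

console.log(byName?.username); // "bot_low" (wrong bot if the user picked id 2)
console.log(byId?.username);   // "bot_high"
```

This is why the component switches both the selector's `id` and the recipient lookup to `bot.id`.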

lib/completions/endpoints/base.rb

Lines changed: 5 additions & 1 deletion
@@ -7,7 +7,11 @@ class Base
   attr_reader :partial_tool_calls

   CompletionFailed = Class.new(StandardError)
-  TIMEOUT = 60
+  # 6 minutes
+  # Reasoning LLMs can take a very long time to respond, generally it will be under 5 minutes
+  # The alternative is to have per LLM timeouts but that would make it extra confusing for people
+  # configuring. Let's try this simple solution first.
+  TIMEOUT = 360

   class << self
     def endpoint_for(provider_name)
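The idea behind the single shared `TIMEOUT` can be sketched in a generic client (a hypothetical `withTimeout` helper, not the plugin's actual Ruby transport): every completion call races against one global deadline instead of a per-LLM setting.

```typescript
// Sketch: one shared timeout for all LLM calls, mirroring the new
// TIMEOUT = 360 (seconds) constant. The helper is hypothetical.
const TIMEOUT_MS = 360 * 1000;

function withTimeout<T>(promise: Promise<T>, ms: number = TIMEOUT_MS): Promise<T> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error("completion timed out")), ms);
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

// A fast "completion" finishes well inside the shared window.
withTimeout(Promise.resolve("done"), 50).then((v) => console.log(v)); // prints "done"
```

The trade-off named in the committed comment applies here too: a single constant is coarser than per-LLM timeouts, but it keeps configuration simple.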

0 commit comments
