Replies: 3 comments 1 reply
-
Yes, we are currently using the model the user is chatting with to generate the title. I see how this can be an issue with thinking or more expensive models. Will add a field like `title_adapter` that takes the adapter name and model, so a cheaper and faster model can be used for this.
-
You can now specify which adapter and model to use for title generation:

```lua
history = {
  ---@type HistoryOpts
  opts = {
    auto_generate_title = true,
    title_generation_opts = {
      adapter = "copilot",
      model = "gpt-4o",
    },
  },
}
```
-
I went to raise an issue about this because I too have just discovered the same thing. Alas, I spent ages debugging the Copilot adapter trying to find the issue 😆. @ravitemer - I do plan on adding a …
-
Hey ravitemer,
I noticed that if `auto_generate_title` is enabled, messages received from the LLM after the very first one are no longer streamed into the codecompanion chat; each subsequent message appears only once the response has finished. Is this expected behavior?
I have also noticed that when using a reasoning model such as Qwen3, the `<think>[...]</think>` content is included as part of the title, making the title too long and not as useful. As I am using codecompanion exclusively with open-weight local models, I went ahead and disabled this feature, but figured the feedback could be useful.
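For what it's worth, a minimal sketch of filtering the reasoning block out of the response before it is used as a title (Lua; `strip_think_tags` is a hypothetical helper that assumes the raw response is available as a plain string, not part of the plugin's actual API):

```lua
-- Hypothetical helper (not part of codecompanion-history's API):
-- strip <think>...</think> reasoning blocks from a model response
-- before using the remainder as a chat title.
local function strip_think_tags(response)
  -- ".-" is Lua's non-greedy match and also spans newlines,
  -- so multi-line reasoning blocks are removed in one pass.
  local title = response:gsub("<think>.-</think>", "")
  return vim.trim(title)
end

print(strip_think_tags("<think>long reasoning...</think>Fix adapter config"))
-- => "Fix adapter config"
```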
Thanks for the very useful extension!