Enabling “Toggle Artifacts UI” disrupts multi-turn conversations with local models (ollama) in LibreChat v0.7.7 #6356
-
What happened?

In LibreChat v0.7.7, enabling the “Toggle Artifacts UI” beta feature disrupts multi-turn conversations with local models via Ollama. After enabling this feature, follow-up queries to local models return irrelevant responses about artifacts usage instead of continuing the prior conversation. Turning off “Toggle Artifacts UI” resolves the issue.

Version Information

% docker images | grep librechat
ghcr.io/danny-avila/librechat-dev   latest   b72b7f5962ca   5 days ago   1.47GB

Steps to Reproduce

Issue_with artifacts and multi-turn conversations with local models.json

What browsers are you seeing the problem on?

Chrome
-
Of course: using artifacts requires extensive instructions to be provided to the LLM, which heavily influence its responses, and performance may degrade for smaller local models. That’s why there’s a custom prompt mode that lets you write your own instructions. You should now use this feature via agents instead of the UI toggle, for better control and so that artifacts are only used when needed.
https://www.librechat.ai/docs/features/agents
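The effect is easier to see outside LibreChat. Below is a minimal Python sketch (not LibreChat’s actual code) that sends the same multi-turn conversation to a local Ollama server twice, once with a stand-in for the injected artifact instructions and once without. The ARTIFACT_INSTRUCTIONS text, model name, and endpoint are assumptions for illustration only; with small models, the injected version often pivots to talking about artifacts instead of answering the follow-up.

```python
# Minimal sketch (not LibreChat's implementation) of how a large, always-on
# instruction block can dominate a small local model's replies.
# Assumes an Ollama server on localhost:11434 and a locally pulled model.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3.2"  # assumption: any small local model you have pulled

# Placeholder standing in for the lengthy artifact-formatting instructions
# injected when "Toggle Artifacts UI" is on; the real prompt is much longer.
ARTIFACT_INSTRUCTIONS = (
    "You can create artifacts. Wrap substantial, self-contained content "
    "in artifact blocks with a type, identifier, and title..."
)

def chat(messages, inject_artifacts: bool) -> str:
    # Optionally prepend the artifact instructions as a system message,
    # mimicking what an always-on artifacts prompt does to every request.
    if inject_artifacts:
        messages = [{"role": "system", "content": ARTIFACT_INSTRUCTIONS}] + messages
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

history = [
    {"role": "user", "content": "Name three uses for a Raspberry Pi."},
    {"role": "assistant", "content": "Home server, retro gaming, sensor logging."},
    {"role": "user", "content": "Tell me more about the second one."},
]

# Without the injected instructions the follow-up stays on topic;
# with them, small models often drift toward explaining artifacts instead.
print(chat(history, inject_artifacts=False))
print(chat(history, inject_artifacts=True))
```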
-
Thanks danny, good to know.