Open-Webui Migration #7021
-
Hi,
Not interested in migrating data, but any feedback on the user experience would be appreciated.
Replies: 2 comments
-
Thanks for your questions, happy to answer!
Absolutely, this is in the documentation (though we need to make it more apparent): https://www.librechat.ai/docs/features/code_interpreter#in-librechat
a. Per-user setup: input your API key in LibreChat when prompted (using the above methods)
b. Global setup: use the LIBRECHAT_CODE_API_KEY environment variable in the .env file of your project (provides access to all users); see the sketch after this list
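For the global setup, a minimal .env sketch (the key value below is only a placeholder; substitute your actual Code Interpreter API key):

```env
# .env at the project root — grants Code Interpreter access to all users
# Placeholder value; replace with your real API key
LIBRECHAT_CODE_API_KEY=your-code-interpreter-api-key
```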
Something great is planned here in the short term, so stay tuned; it is not limited to agents.
Unfortunately, no. Prompt Caching is not a toggle, since custom endpoints are mainly "OpenAI-compliant" rather than "Anthropic", but something is due to be implemented here for other reasons as well.
Not yet, but it is planned.
Not yet, mainly because real-time audio is not widely supported in a multi-user context yet, and solutions combining STT/TTS can be quite cumbersome and prone to latency issues.
-
Thank you, I appreciate the quick response. Despite skimming and searching the docs, I completely missed LIBRECHAT_CODE_API_KEY; I guess I was searching for INTERPRETER or something. From a user perspective, the interface is extremely clean, well done to the team.