It would be great to support any OpenAI-compatible API as a backend.
- This would let users plug in alternative providers such as OpenAI (ChatGPT), a local Ollama server, or any custom deployment that follows the OpenAI API spec.
- Users should also be able to switch the LLM used for chat, rather than being restricted to Llama.
Reference: https://docs.ollama.com/api/openai-compatibility
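To illustrate why this is cheap to support: every OpenAI-compatible backend exposes the same `/chat/completions` endpoint, so only the base URL, API key, and model name need to be configurable. A minimal sketch (the function and backend names here are illustrative, not an existing API):

```python
import json
from urllib.request import Request

def build_chat_request(base_url: str, api_key: str, model: str, messages):
    """Build a POST request for an OpenAI-compatible /chat/completions endpoint."""
    url = base_url.rstrip("/") + "/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return Request(url, data=body, headers=headers, method="POST")

# The same code path could serve OpenAI, a local Ollama server, or any
# other spec-compliant deployment; only the configuration differs.
# (Hypothetical config values for illustration.)
BACKENDS = {
    "openai": ("https://api.openai.com/v1", "gpt-4o-mini"),
    "ollama": ("http://localhost:11434/v1", "llama3"),
}
```

Switching the chat LLM then reduces to changing the `model` field in the request body, which the spec already allows per request.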