How do I add an LLM connection when using models served through LiteLLM? #11802
Replies: 2 comments
Based on your screenshot and the common issues with LiteLLM connections, here are the likely problems:

**API Base URL Configuration.** Your API Base URL should end with … (not …).

**Network Accessibility Issues.** If you're running LiteLLM and Langfuse in Docker or Kubernetes, using …
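To illustrate the Docker networking point: containers cannot reach each other via `localhost`, so the API Base URL entered in Langfuse must reference the LiteLLM container by its service name. The service names, images, and port below are placeholder assumptions, not taken from the question:

```yaml
# Hypothetical docker-compose excerpt; names and ports are assumptions.
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    ports:
      - "4000:4000"

  langfuse:
    image: langfuse/langfuse:latest
    # Inside this container, "localhost" refers to the langfuse container
    # itself, NOT to the host or to the litellm container. The LLM
    # connection's API Base URL must therefore use the service name,
    # e.g. http://litellm:4000/v1, not http://localhost:4000/v1.
```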
**OpenAI API Schema Compliance.** Langfuse expects the backend to fully implement the OpenAI API schema. Your LiteLLM Proxy must: …

**Message Role Validation.** During connection setup, Langfuse sends a test payload with a "system" message followed by a "user" message. Some OpenAI-compatible backends, including certain LiteLLM setups, require strict alternation between "user" and "assistant" roles and may reject this sequence.

**Recommended Configuration.** Based on your setup, try: …
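To check the role-validation point by hand, you can replay the same kind of test payload Langfuse sends (a "system" message followed directly by a "user" message) against your proxy's OpenAI-compatible endpoint. This is a sketch, not Langfuse's actual code; the base URL, model name, and API key are placeholders you must replace with your own values:

```python
import json
import urllib.request


def build_connection_check_payload(model: str) -> dict:
    """Mimic the shape of the connection-test request:
    a "system" message followed directly by a "user" message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "ping"},
        ],
        "max_tokens": 5,
    }


def send_check(base_url: str, api_key: str, model: str) -> int:
    """POST the payload to the proxy's OpenAI-compatible chat completions
    endpoint and return the HTTP status code. A 4xx response here means the
    backend rejects the system+user sequence (or the URL/key is wrong)."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(build_connection_check_payload(model)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    # Placeholder values; point these at your own LiteLLM proxy before running.
    print(send_check("http://litellm:4000/v1", "sk-...", "gpt-4o"))
```

If this request fails with an error about message roles, the proxy (or the underlying model) is the component rejecting Langfuse's test sequence, not Langfuse itself.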
If you're still experiencing issues, you can also manage LLM connections programmatically via the Langfuse API.
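A sketch of what a programmatic upsert could look like. The endpoint path, field names, and Basic-auth scheme below are assumptions based on Langfuse's public REST API conventions, not verified against the API reference, so treat every identifier here as a placeholder to check before use:

```python
import base64
import json
import urllib.request


def build_llm_connection_request(
    host: str,
    public_key: str,
    secret_key: str,
    provider: str,
    adapter: str,
    base_url: str,
    provider_key: str,
) -> urllib.request.Request:
    """Build an upsert request for an LLM connection.

    NOTE: the path and body field names are assumptions; consult the
    Langfuse API reference for the authoritative schema."""
    body = {
        "provider": provider,       # display name, e.g. "litellm"
        "adapter": adapter,         # e.g. "openai" for OpenAI-compatible backends
        "baseURL": base_url,        # your LiteLLM proxy URL
        "secretKey": provider_key,  # the key your proxy expects
    }
    # Langfuse's public API authenticates with the project's
    # public/secret key pair via HTTP Basic auth.
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return urllib.request.Request(
        host.rstrip("/") + "/api/public/llm-connections",  # assumed path
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="PUT",
    )
```

Sending the built request with `urllib.request.urlopen` (or any HTTP client) would then create or update the connection without going through the UI.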
Have another question? Just tag @inkeep.
Hi @avalsonigara15, I think you need to make sure you use the OpenAI-compatible LiteLLM endpoint for the models you want to add as LLM connections: https://docs.litellm.ai/docs/providers/openai
Describe your question
How can I connect my LiteLLM models in the LLM connections section? When I attempt to do so, I get a connection error and cannot figure out what it refers to.

This is how I added my LLM connection in the UI. Please let me know if I have configured it wrong.
Langfuse Cloud or Self-Hosted?
Langfuse Cloud
If Self-Hosted
No response
If Langfuse Cloud
https://cloud.langfuse.com/project/cmkzdd3bm0438ad07hp998w1n/settings/llm-connections
SDK and integration versions
No response
Pre-Submission Checklist