Auto Titling for xAI (Grok) and Google (Gemini) Models #8816
---
## Auto Titling Not Working for xAI (Grok) and Google (Gemini) Models

### Issue Summary

Auto titling does not work for xAI (Grok) and Google (Gemini) models, while it works correctly for Anthropic and OpenAI models.

**Reported by:** User feedback from production environment

### Environment
### Expected Behavior

Auto titling should work consistently across all AI providers when `titleConvo` is enabled.

### Actual Behavior
### Steps to Reproduce

### Root Cause Analysis

**xAI Models Issue:**

**Google Models Issue:**
### Solution/Workaround

**For xAI Models:**

```yaml
custom:
  - name: "xAI"
    apiKey: "${XAI_API_KEY}"
    baseURL: "https://api.x.ai/v1"
    models:
      default: ["grok-2-1212", "grok-4-0709"]
    titleConvo: true
    titleModel: "grok-2-1212" # Use stable model for titles
    dropParams: ["presencePenalty", "frequencyPenalty"] # Key fix
```

**For Google Models:**

```yaml
custom:
  - name: "Google"
    apiKey: "${GOOGLE_KEY}"
    baseURL: "https://generativelanguage.googleapis.com/v1beta"
    models:
      default: ["gemini-2.5-flash", "gemini-2.5-pro"]
    titleConvo: true
    titleModel: "gemini-2.5-flash"
```

And disable the standard Google endpoint in `.env`:

```
ENDPOINTS=openAI,assistants,azureOpenAI,gptPlugins,anthropic
```

### Configuration Details

**librechat.yaml changes:**
**.env changes:**
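A minimal sketch of the corresponding `.env` entries, assuming the API keys are supplied through the variables referenced in the configs above (the key values are placeholders):

```
# Disable the standard Google endpoint so the custom "Google" endpoint handles Gemini
ENDPOINTS=openAI,assistants,azureOpenAI,gptPlugins,anthropic

# API keys referenced by the custom endpoints (placeholder values)
XAI_API_KEY=your-xai-api-key
GOOGLE_KEY=your-google-api-key
```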
### Testing Results

After implementing the workarounds:

### Suggested Improvements for LibreChat Core
### Temporary Workarounds Implemented

The following workarounds resolve the issues but require custom endpoint configuration:

1. xAI (Grok) models defined as a custom endpoint with `dropParams: ["presencePenalty", "frequencyPenalty"]`.
2. Google (Gemini) models defined as a custom endpoint with an explicit `titleModel`, while the standard Google endpoint is disabled via `ENDPOINTS` in `.env`.
### Additional Context

**Model-Specific Notes:**
**Impact:** This affects the user experience: conversations with xAI and Google models lack descriptive titles, making conversation management difficult.

### Files Involved
### Priority

Medium - affects user experience, but working workarounds exist.
---
Please look into all the title configurations to ensure this works for you (from `titleConvo` and immediately below).

Also, make sure you are indeed using the latest version of LC.

I don't have any issues with the following config:

```yaml
endpoints:
  google:
    titleModel: "gemini-2.5-flash"
  custom:
    - name: "xai"
      apiKey: "${XAI_API_KEY}"
      baseURL: "https://api.x.ai/v1"
      models:
        default: ["grok-3-latest"]
        fetch: true
      titleConvo: true
      titleMethod: "completion"
      titleModel: "grok-3-latest"
      modelDisplayLabel: "Grok"
```
---
### ✅ Confirmed Working Solution

Thanks @danny-avila for the quick response and working configuration!

**Update:** I can confirm that your recommended solution is working perfectly. After implementing the suggested configuration:

```yaml
endpoints:
  custom:
    - name: "xAI"
      apiKey: "${XAI_API_KEY}"
      baseURL: "https://api.x.ai/v1"
      models:
        default: ["grok-3-latest"]
        fetch: false
      titleConvo: true
      titleMethod: "completion"
      titleModel: "grok-3-latest"
      modelDisplayLabel: "xAI"
      dropParams: ["presencePenalty", "frequencyPenalty"]
```

**Results:**
**Additional Notes:**

**Environment Context:** Running in a production environment, with the configuration working reliably across multiple conversations.

Thanks again for the fast turnaround and clear documentation! The solution works exactly as described. 🚀