fix: per-user model list support with resolved headers #11849

Open
bensi94 wants to merge 3 commits into danny-avila:main from aproorg:fix/per-user-model-list
Conversation

bensi94 commented Feb 18, 2026

Summary

When using an upstream model provider such as LiteLLM with JWT/OIDC-based authentication, different users may have access to different models depending on their roles and permissions. The previous implementation cached the models config globally (under the MODELS_CONFIG key), so all users saw the same model list regardless of their identity or authorization level.

This PR fixes two issues:

  1. Removes global model config caching in ModelController.js — models are now fetched fresh on each request, ensuring per-user model lists are correctly returned. This applies to both master-key and user-token scenarios, but is especially important when tokens vary per user.

  2. Resolves custom headers through resolveHeaders() in fetchModels() — template placeholders like {{LIBRECHAT_OPENID_ID_TOKEN}} in config headers are now properly expanded per-user before the model fetch request. Custom headers are merged after default auth headers, so config-level authorization headers (e.g. forwarding an OIDC token) take precedence over the default Bearer <apiKey>.

Example config that now works correctly:

headers:
  authorization: "Bearer {{LIBRECHAT_OPENID_ID_TOKEN}}"
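The resolution and merge order described above can be sketched as follows. This is an illustrative, self-contained sketch, not LibreChat's actual resolveHeaders() or fetchModels() implementation; the function names and the placeholder-expansion regex are assumptions for demonstration only.

```javascript
// Hypothetical sketch: expand {{PLACEHOLDER}} tokens in header values
// from a per-user context (e.g. the user's OIDC ID token).
function resolveHeaders(headers = {}, userContext = {}) {
  const resolved = {};
  for (const [key, value] of Object.entries(headers)) {
    resolved[key] = value.replace(/\{\{(\w+)\}\}/g, (match, name) =>
      name in userContext ? userContext[name] : match,
    );
  }
  return resolved;
}

// Merge resolved config headers AFTER the default auth header, so a
// config-level authorization entry overrides the default Bearer <apiKey>.
function buildRequestHeaders(apiKey, configHeaders, userContext) {
  return {
    Authorization: `Bearer ${apiKey}`, // default auth header
    ...resolveHeaders(configHeaders, userContext), // config takes precedence
  };
}

const headers = buildRequestHeaders(
  'master-key',
  { Authorization: 'Bearer {{LIBRECHAT_OPENID_ID_TOKEN}}' },
  { LIBRECHAT_OPENID_ID_TOKEN: 'user-jwt' },
);
console.log(headers.Authorization); // → "Bearer user-jwt"
```

Because the resolved config headers are spread last, the per-user token wins over the default key; unresolved placeholders are left untouched rather than replaced with undefined.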

Change Type

  • Bug fix (non-breaking change which fixes an issue)

Testing

  • Updated existing tests in models.spec.ts to verify:
    • Custom headers are resolved via resolveHeaders() before being sent
    • Custom authorization header overrides the default Bearer token
    • Anthropic endpoint merges custom headers on top of its defaults
  • Verified that ModelController.js no longer references CacheKeys or getLogStores

Test Configuration:

  • LibreChat with LiteLLM backend using OIDC JWT authentication
  • Config with headers.authorization: "Bearer {{LIBRECHAT_OPENID_ID_TOKEN}}"
  • Multiple users with different role-based model access

Checklist

  • My code adheres to this project's style guidelines
  • I have performed a self-review of my own code
  • I have commented in any complex areas of my code
  • My changes do not introduce new warnings
  • I have written tests demonstrating that my changes are effective or that my feature works
  • Local unit tests pass with my changes

… fetching

The models config was cached globally (MODELS_CONFIG key) which meant all
users saw the same model list regardless of their role or permissions.
This is incorrect when the upstream provider (e.g. LiteLLM) returns
different models per user based on JWT/OIDC tokens forwarded via custom
headers.

Changes:
- Remove MODELS_CONFIG cache from ModelController so models are fetched
  fresh on each request, supporting per-user model lists
- Resolve custom headers through resolveHeaders() before merging into
  the request options in fetchModels(), enabling template placeholders
  like {{LIBRECHAT_OPENID_ID_TOKEN}} to be expanded per-user
- Merge resolved custom headers after default auth headers so config
  headers (e.g. authorization) take precedence over the default Bearer
  token
- Update tests to verify header resolution and override behavior
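The cache removal in the first bullet can be sketched as a minimal Express-style controller. This is a hypothetical illustration, not the real ModelController.js; loadModels and the request shape are assumptions standing in for the actual per-user model fetch.

```javascript
// Hypothetical per-user model loader: in the real code this would call the
// upstream provider (e.g. LiteLLM's /v1/models) with the user's resolved headers.
async function loadModels(user) {
  return [`model-for-${user.id}`];
}

// Before this PR, a module-level cache (keyed by MODELS_CONFIG) meant the
// first user's result was served to everyone. After: fetch fresh per request.
async function modelController(req, res) {
  const models = await loadModels(req.user); // no global cache lookup
  res.json(models);
}
```

Skipping the cache trades a small amount of latency per request for correctness: two users with different upstream permissions now each see their own model list.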
busla (Contributor) commented Feb 18, 2026

Nice!

Exactly what I need for claim-based access in my LiteLLM instance!

🎉
