Azure config help #9009
-
Replies: 7 comments 12 replies
-
For Azure, see this detailed configuration guide: https://www.librechat.ai/docs/configuration/azure. Make sure the `ENDPOINTS` environment variable is configured to include `azureOpenAI` as well.
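A minimal `.env` fragment for that, assuming the `ENDPOINTS` variable from the linked guide; the exact list of other endpoints you enable alongside it is up to you:

```
# .env — comma-separated list of endpoints to enable in LibreChat
ENDPOINTS=openAI,azureOpenAI,agents
```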
Thanks for the feedback. By nature, Azure setup is not simple. The Azure config was designed to allow multiple deployments/regions to be used via the endpoint, especially before Azure itself started consolidating its endpoints into a single, simpler setup via the more "OpenAI-like" Azure AI OpenAI endpoint. I will share some example configs that work and are pretty simple in a few minutes.
-
For example, in your librechat.yaml, add this (corresponding to your Azure deployment endpoints):
azureOpenAI:
  titleConvo: true
  groups:
    - group: "region-eastus-this-just-needs-to-be-unique" # arbitrary name
      apiKey: "${EASTUS_API_KEY}" # this maps to `EASTUS_API_KEY` set in .env file
      instanceName: "azure-eastus" # must match your actual region
      version: "2024-12-01-preview"
      # assuming all these models are deployed, they should work
      models:
        gpt-4.1:
          deploymentName: "gpt-4.1"
        gpt-4.1-mini:
          deploymentName: "gpt-4.1-mini-eus2" # note, deployment name can differ from the openai model ID
        gpt-4.1-nano:
          deploymentName: "gpt-4.1-nano"
        gpt-5:
          deploymentName: "gpt-5"
        gpt-5-mini:
          deploymentName: "gpt-5-mini"
        gpt-5-nano:
          deploymentName: "gpt-5-nano"
        gpt-5-chat:
          deploymentName: "gpt-5-chat"

When visiting https://oai.azure.com and clicking "Deployments" on the left-hand side, you can see my corresponding deployments, where the first "name" column is the deployment name. (Screenshots omitted.)
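As an aside, the `${EASTUS_API_KEY}` reference in the group above expects a matching entry in your `.env` file; the value here is a placeholder:

```
# .env — key for the eastus Azure OpenAI resource
EASTUS_API_KEY=your-azure-openai-api-key
```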
-
Followed this closely and it looks like I have everything set up accordingly. Thanks for the quick responses in this thread! Still hitting connection errors when trying to get a response. Could it be the API version I'm using? Edit: tried with the following config:

# For more information, see the Configuration Guide:
# https://www.librechat.ai/docs/configuration/librechat_yaml
# Configuration version (required)
version: 1.2.1
# Cache settings: Set to true to enable caching
cache: true
# File strategy s3/firebase
# fileStrategy: "s3"
# Custom interface configuration
interface:
  customWelcome: "Welcome to LibreChat! Enjoy your experience."
  # MCP Servers UI configuration
  fileSearch: true
  endpointsMenu: true
  modelSelect: true
  parameters: true
  sidePanel: true
  presets: true
  prompts: true
  bookmarks: true
  multiConvo: true
  agents: true
  temporaryChatRetention: 8
registration:
  socialLogins: ['github', 'saml']
  allowedDomains:
    - "coterra.com"
# Example MCP Servers Object Structure
mcpServers:
  placeholder: 'MCP Servers'
  # Enable/disable file search as a chatarea selection (default: true)
  # Note: This setting does not disable the Agents File Search Capability.
  # To disable the Agents Capability, see the Agents Endpoint configuration instead.
  # everything:
  #   # type: sse # type can optionally be omitted
  #   url: http://localhost:3001/sse
  #   timeout: 60000 # 1 minute timeout for this server, this is the default timeout for MCP servers.
# Definition of custom endpoints
# assistants:
#   disableBuilder: false # Disable Assistants Builder Interface by setting to `true`
#   pollIntervalMs: 3000 # Polling interval for checking assistant updates
#   timeoutMs: 180000 # Timeout for assistant operations
#   # Should only be one or the other, either `supportedIds` or `excludedIds`
#   supportedIds: ["asst_supportedAssistantId1", "asst_supportedAssistantId2"]
#   # excludedIds: ["asst_excludedAssistantId"]
#   # Only show assistants that the user created or that were created externally (e.g. in Assistants playground).
#   # privateAssistants: false # Does not work with `supportedIds` or `excludedIds`
#   # (optional) Models that support retrieval, will default to latest known OpenAI models that support the feature
#   retrievalModels: ["gpt-4-turbo-preview"]
#   # (optional) Assistant Capabilities available to all users. Omit the ones you wish to exclude. Defaults to list below.
#   capabilities: ["code_interpreter", "retrieval", "actions", "tools", "image_vision"]
# agents:
#   # (optional) Default recursion depth for agents, defaults to 25
#   recursionLimit: 50
#   # (optional) Max recursion depth for agents, defaults to 25
#   maxRecursionLimit: 100
#   # (optional) Disable the builder interface for agents
#   disableBuilder: false
#   # (optional) Agent Capabilities available to all users. Omit the ones you wish to exclude. Defaults to list below.
#   capabilities: ["execute_code", "file_search", "actions", "tools"]
endpoints:
  azureOpenAI:
    titleConvo: true
    titleModel: "gpt-4o-mini"
    summarize: true
    summaryModel: "gpt-4o-mini"
    plugins: true
    assistants: false
    groups:
      - group: "eastus2"
        apiKey: ${EASTUS_API_KEY}
        instanceName: ${INSTANCE_NAME}
        version: "2024-12-01-preview"
        models:
          gpt-5:
            deploymentName: "gpt-5"
            version: "2025-08-07"
          gpt-5-mini:
            deploymentName: "gpt-5-mini"
            version: "2025-08-07"
          gpt-5-nano:
            deploymentName: "gpt-5-nano"
            version: "2025-08-07"
          gpt-5-chat:
            deploymentName: "gpt-5-chat"
            version: "2025-08-07"
          gpt-4o:
            deploymentName: "gpt-4o"
            version: "2024-11-20"
          gpt-4o-mini:
            deploymentName: "gpt-4o-mini"
            version: "2024-07-18"
          gpt-4.1:
            deploymentName: "gpt-4.1"
            version: "2025-04-14"
          gpt-4.1-nano:
            deploymentName: "gpt-4.1-nano"
            version: "2025-04-14"
          gpt-4.1-mini:
            deploymentName: "gpt-4.1-mini"
            version: "2025-04-14"
          gpt-image-1:
            deploymentName: "gpt-image-1"
            version: "2025-04-15"
          o1:
            deploymentName: "o1"
            version: "2024-12-17"
          o3-mini:
            deploymentName: "o3-mini"
            version: "2025-01-31"
          o3:
            deploymentName: "o3"
            version: "2025-04-16"
          o4-mini:
            deploymentName: "o4-mini"
            version: "2025-04-16"
          gpt-4o-transcribe:
            deploymentName: "gpt-4o-transcribe"
            version: "2025-03-20"
          gpt-4o-mini-transcribe:
            deploymentName: "gpt-4o-mini-transcribe"
            version: "2025-03-20"
          gpt-4o-mini-tts:
            deploymentName: "gpt-4o-mini-tts"
            version: "2025-03-20"
# fileConfig:
#   endpoints:
#     assistants:
#       fileLimit: 5
#       fileSizeLimit: 10 # Maximum size for an individual file in MB
#       totalSizeLimit: 50 # Maximum total size for all files in a single request in MB
#       supportedMimeTypes:
#         - "image/.*"
#         - "application/pdf"
#     openAI:
#       disabled: true # Disables file uploading to the OpenAI endpoint
#     default:
#       totalSizeLimit: 20
#     YourCustomEndpointName:
#       fileLimit: 2
#       fileSizeLimit: 5
#   serverFileSizeLimit: 100 # Global server file size limit in MB
#   avatarSizeLimit: 2 # Limit for user avatar image size in MB
#   imageGeneration: # Image Gen settings, either percentage or px
#     percentage: 100
#     px: 1024
#   # Client-side image resizing to prevent upload errors
#   clientImageResize:
#     enabled: false # Enable/disable client-side image resizing (default: false)
#     maxWidth: 1900 # Maximum width for resized images (default: 1900)
#     maxHeight: 1900 # Maximum height for resized images (default: 1900)
#     quality: 0.92 # JPEG quality for compression (0.0-1.0, default: 0.92)
#   # See the Custom Configuration Guide for more information on Assistants Config:
#   # https://www.librechat.ai/docs/configuration/librechat_yaml/object_structure/assistants_endpoint
# Memory configuration for user memories
# memory:
#   # (optional) Disable memory functionality
#   disabled: false
#   # (optional) Restrict memory keys to specific values to limit memory storage and improve consistency
#   validKeys: ["preferences", "work_info", "personal_info", "skills", "interests", "context"]
#   # (optional) Maximum token limit for memory storage (not yet implemented for token counting)
#   tokenLimit: 10000
#   # (optional) Enable personalization features (defaults to true if memory is configured)
#   # When false, users will not see the Personalization tab in settings
#   personalize: true
#   # Memory agent configuration - either use an existing agent by ID or define inline
#   agent:
#     # Option 1: Use existing agent by ID
#     id: "your-memory-agent-id"
#     # Option 2: Define agent inline
#     # provider: "openai"
#     # model: "gpt-4o-mini"
#     # instructions: "You are a memory management assistant. Store and manage user information accurately."
#     # model_parameters:
#     #   temperature: 0.1
-
Did some additional testing: still no dice.
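One way to isolate whether the problem is LibreChat or the deployment itself is to hit the Azure endpoint directly; this follows the standard Azure OpenAI REST shape, with the instance name, deployment name, and key as placeholders you would substitute:

```
curl "https://YOUR_INSTANCE.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT/chat/completions?api-version=2024-12-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $EASTUS_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'
```

If this returns a completion but LibreChat still fails, the problem is likely in the yaml mapping rather than the deployment itself.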
-
I also spent a few hours and got a 404 error. The same parameters are enough to run chat from the Python client, but I can't configure LibreChat with them.
-
Hello, I have been trying to use the Azure configuration with a custom base URL and it doesn't work. I've followed this example:
The issue seems to be in the extractBaseURL method in the extractBaseURL.js file. The parsed baseURL drops everything after "azure-openai", so the instance name and deployment name variables are ignored. For now, I modified the extractBaseURL method to return the baseURL exactly as provided in the YAML file. Any ideas? Thank you
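For context on what a baseURL ultimately has to resolve to: with the `instanceName`/`deploymentName` style of config (no `baseURL`), the request URL follows the standard Azure OpenAI pattern. A small sketch of that composition (the function name and structure are mine for illustration, not LibreChat's actual code):

```python
def azure_chat_url(instance_name: str, deployment_name: str, api_version: str) -> str:
    """Compose the standard Azure OpenAI chat-completions URL
    from the pieces that appear in a librechat.yaml group."""
    return (
        f"https://{instance_name}.openai.azure.com"
        f"/openai/deployments/{deployment_name}"
        f"/chat/completions?api-version={api_version}"
    )

# With placeholder values matching the eastus2 group style above:
print(azure_chat_url("my-instance", "gpt-4o-mini", "2024-12-01-preview"))
# https://my-instance.openai.azure.com/openai/deployments/gpt-4o-mini/chat/completions?api-version=2024-12-01-preview
```

If a custom baseURL gets truncated at "azure-openai", the deployment segment never makes it into a URL like this, which would be consistent with the 404s reported earlier in the thread.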
Got it working! So the issue was: even though on Azure OpenAI the model version is specified as 2024-07-18, I cannot set this in librechat.yaml for a specific model. Omitting baseURL is also key; if that is present, the same error occurs.

Glad it's finally working! Thanks for your help! 😄
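Summing up the fix, a minimal group reflecting both points — no per-model `version` key and no `baseURL` — might look like this (instance name is a placeholder):

```yaml
endpoints:
  azureOpenAI:
    groups:
      - group: "eastus2"
        apiKey: "${EASTUS_API_KEY}"
        instanceName: "your-instance-name"
        version: "2024-12-01-preview" # API version set once at the group level
        # no baseURL here — letting LibreChat construct the URL avoided the error
        models:
          gpt-4o-mini:
            deploymentName: "gpt-4o-mini" # deployment name only; no model version
```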