Configurations for DeepSeek with Azure AI Foundry #5924
Replies: 8 comments 9 replies
-
I'm also looking for a working example of using the Azure DeepSeek R1 model in LibreChat — let me know if you figure it out.
-
I think the issue is in this parameter:
I tried different variations, but got the same error each time. If anyone knows the correct way, please post it here.
-
This works as a custom endpoint. I know it can also be done with the Azure serverless config, but I don't have that handy:

```yaml
endpoints:
  # other configs...
  custom:
    # custom endpoints...
    - name: "Azure (DeepSeek)"
      apiKey: "${AZURE_DEEPSEEK_API_KEY}"
      baseURL: "https://DeepSeek-R1-YOUR-ENDPOINT-FROM-AZURE.models.ai.azure.com/v1/"
      models:
        default: ["DeepSeek-R1"]
        fetch: false
      titleConvo: true
      titleModel: "current_model"
      dropParams: ["stop", "user", "frequency_penalty", "presence_penalty"]
      modelDisplayLabel: "DeepSeek-R1"
```
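As a side note, `${AZURE_DEEPSEEK_API_KEY}` is resolved from the environment, so the key also needs to be defined in your `.env` file. The variable name is arbitrary — it just has to match the reference in `librechat.yaml`:

```
# .env — value is the key from your Azure AI Foundry endpoint
AZURE_DEEPSEEK_API_KEY=your-azure-endpoint-key
```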
-
Thank you very much, Danny! I just tested serverless configurations, and they worked too!
-
Here is a working serverless endpoint from Azure AI Foundry:

```yaml
- group: "DeepSeek-R1"
  apiKey: "${AZURE_DEEPSEEK_API_KEY}"
  baseURL: "https://xxxxxxxxxxxxxxx.services.ai.azure.com/models/"
  version: "2024-05-01-preview"
  serverless: true
  models:
    DeepSeek-R1: true
```
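For context, a `group` entry like the one above sits under the `azureOpenAI` endpoint in `librechat.yaml`. This is a minimal sketch assuming LibreChat's Azure group schema; the resource hostname is a placeholder:

```yaml
endpoints:
  azureOpenAI:
    groups:
      - group: "DeepSeek-R1"
        apiKey: "${AZURE_DEEPSEEK_API_KEY}"
        baseURL: "https://YOUR-RESOURCE.services.ai.azure.com/models/"
        version: "2024-05-01-preview"
        serverless: true
        models:
          DeepSeek-R1: true
```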
-
Hello everyone. I'm following exactly the explanations in this thread, but I'm getting the following error. Version: LibreChat v0.7.7, using Docker Compose and the librechat.yaml config @rubentalstra suggested above.
I've tried everything explained in this discussion, but nothing has worked. Thank you for your help.
-
I've tried both the serverless and custom endpoint setups on Azure AI Foundry, but unfortunately DeepSeek isn't working on my side yet — I keep hitting a 401 error with no response. If you're facing the same, hopefully someone who's got it running can share their setup soon!
-
@danny-avila May I know what the most correct way to configure this is for now? I'm using a custom endpoint and it is working.
Note that I changed the base URL from "https://DeepSeek-R1-YOUR-ENDPOINT-FROM-AZURE.models.ai.azure.com/v1/" to "https://DeepSeek-R1-YOUR-ENDPOINT-FROM-AZURE.models.ai.azure.com/models/". Also, "dropParams" seems to involve some tricky logic here.
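For reference, here is a sketch of the custom endpoint with the `/models/` base URL change described above, reusing the earlier config from this thread (the hostname is a placeholder, and this is not a verified setup):

```yaml
endpoints:
  custom:
    - name: "Azure (DeepSeek)"
      apiKey: "${AZURE_DEEPSEEK_API_KEY}"
      baseURL: "https://DeepSeek-R1-YOUR-ENDPOINT-FROM-AZURE.models.ai.azure.com/models/"
      models:
        default: ["DeepSeek-R1"]
        fetch: false
      titleConvo: true
      titleModel: "current_model"
      dropParams: ["stop", "user", "frequency_penalty", "presence_penalty"]
      modelDisplayLabel: "DeepSeek-R1"
```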
-
I get a 401 status code error (no body) for a DeepSeek model set up on Azure with Azure AI Foundry.
Here is my librechat.yaml:
I made a change that has not altered the behavior or the error, but looks better:
I also tried the serverless option, but it did not work at all. I'm not sure how serverless configurations are supposed to fit together with other Azure OpenAI endpoints.
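In case it helps, serverless and deployment-based groups can be listed side by side under the same `azureOpenAI` endpoint. This is an unverified sketch based on LibreChat's group schema; instance names, deployment names, and API versions are placeholders:

```yaml
endpoints:
  azureOpenAI:
    groups:
      # regular deployment-based Azure OpenAI group
      - group: "azure-openai"
        apiKey: "${AZURE_OPENAI_API_KEY}"
        instanceName: "your-instance-name"
        version: "2024-02-15-preview"
        models:
          gpt-4o:
            deploymentName: "gpt-4o"
      # serverless DeepSeek group
      - group: "DeepSeek-R1"
        apiKey: "${AZURE_DEEPSEEK_API_KEY}"
        baseURL: "https://your-resource.services.ai.azure.com/models/"
        version: "2024-05-01-preview"
        serverless: true
        models:
          DeepSeek-R1: true
```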