[Enhancement]: Centralization and Enhancement of LLM Model Management in Azure AI Foundry #6043
jmartinezb3 started this conversation in Feature Requests & Suggestions
What features would you like to see added?
It would be great if Azure AI Foundry models could be managed through a single specialized endpoint configuration, similar to the existing Azure OpenAI endpoint. Having to define one custom endpoint per model (e.g. DeepSeek on Azure) works, but only as a workaround rather than a definitive solution. Azure hosts many LLMs such as Phi, LLaMA, etc., and even Stable Diffusion, all of which could be managed through one specialized endpoint. Alternatively, it would be helpful to be able to configure multiple `baseURL`s within a single custom endpoint, separated by `group` or something similar, so that several Azure AI Foundry models can be managed under the same custom endpoint. This came up in #5924, and I thought it made sense to propose this improvement.
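To illustrate the second option, here is a minimal sketch of what a grouped configuration might look like in `librechat.yaml`. The `groups`/`group` keys and the per-group `baseURL` layout are hypothetical and part of this proposal, not an existing option; resource names, URLs, and model IDs are placeholders:

```yaml
endpoints:
  custom:
    - name: "Azure AI Foundry"
      apiKey: "${AZURE_AI_FOUNDRY_KEY}"
      # Hypothetical: one entry per model group, each with its own baseURL,
      # instead of a separate top-level custom endpoint per deployed model.
      groups:
        - group: "DeepSeek"
          baseURL: "https://<deepseek-resource>.services.ai.azure.com/v1"
          models: ["DeepSeek-R1"]
        - group: "Phi"
          baseURL: "https://<phi-resource>.services.ai.azure.com/v1"
          models: ["Phi-4"]
        - group: "Llama"
          baseURL: "https://<llama-resource>.services.ai.azure.com/v1"
          models: ["Llama-3.3-70B-Instruct"]
```

Today each of these groups would require its own custom endpoint entry; this request is about consolidating them under one.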
More details
N/A
Which components are impacted by your request?
Custom Endpoint management.
Pictures
No response