Provide support for Azure AI Studio so that both OpenAI GPT models and marketplace models like Mistral can be loaded from the same Azure AI Studio project
#28105
I searched existing ideas and did not find a similar one
I added a very descriptive title
I've clearly described the feature request and motivation for it
Feature request
Currently we have ChatMistralAI and AzureChatOpenAI to load either Mistral or GPT models. In Azure AI Studio you can now deploy GPT models, Mistral models, Ollama models, and many more within a single resource.
It would be really nice to have a way of creating an LLM object from Azure AI Studio without having to adapt the code to the specific type of deployment.
Motivation
I want to create a chatbot with LangChain and LangGraph and be able to switch which LLM is used simply by changing my app settings, without having to redeploy.
Currently this is achievable by adding a new variable to my app settings that says which type of model should be created, checking that variable, and then running the creation code for either the GPT or the Mistral model.
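The workaround described above can be sketched as a small factory function. This is only a minimal illustration: the function name `create_chat_model`, the `MODEL_TYPE`/`AZURE_DEPLOYMENT_NAME` setting names, and the constructor arguments are assumptions, and the real values depend on your Azure deployment.

```python
import os


def create_chat_model(model_type: str):
    """Return a chat model chosen from an app setting.

    Imports are deferred so that only the selected provider's
    package needs to be installed. Constructor arguments shown
    here are illustrative and depend on your deployment.
    """
    if model_type == "gpt":
        # AzureChatOpenAI also reads AZURE_OPENAI_API_KEY and
        # AZURE_OPENAI_ENDPOINT from the environment.
        from langchain_openai import AzureChatOpenAI

        return AzureChatOpenAI(
            azure_deployment=os.environ["AZURE_DEPLOYMENT_NAME"],  # assumed setting
            api_version="2024-06-01",  # example API version
        )
    if model_type == "mistral":
        from langchain_mistralai import ChatMistralAI

        return ChatMistralAI(model="mistral-large-latest")
    raise ValueError(f"Unsupported model type: {model_type!r}")


# Example usage, with the model type taken from an app setting:
# model = create_chat_model(os.environ.get("MODEL_TYPE", "gpt"))
```

Every new model family still means another branch here, which is exactly the duplication a single Azure AI Studio integration would remove.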
Proposal (If applicable)
No response