@@ -90,6 +91,16 @@ Core42 includes autoregressive bi-lingual LLMs for Arabic & English with state-o
See [this model collection in Azure AI Foundry portal](https://ai.azure.com/explore/models?&selectedCollection=core42).
### DeepSeek
The DeepSeek family of models includes DeepSeek-R1, which uses a step-by-step training process to excel at reasoning tasks such as language, scientific reasoning, and coding.
| Model | Type | Tier | Capabilities |
| ------ | ---- | --- | ------------ |
|[DeepSeek-R1](https://ai.azure.com/explore/models/deepseek-r1/version/1/registry/azureml-deepseek)| chat-completion <br /> [(with reasoning content)](../how-to/use-chat-reasoning.md)| Global standard | - **Input:** text (16,384 tokens) <br /> - **Output:** text (163,840 tokens) <br /> - **Languages:** `en` and `zh` <br /> - **Tool calling:** No <br /> - **Response formats:** Text |
See [this model collection in Azure AI Foundry portal](https://ai.azure.com/explore/models?&selectedCollection=deepseek).
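
To sketch what calling DeepSeek-R1 through the Azure AI model inference API can look like, here is a minimal example using the `azure-ai-inference` Python package. The endpoint URL, API key placeholder, and the deployment name `DeepSeek-R1` are assumptions for illustration, and the `<think>`-tag parsing reflects how R1-style models commonly wrap their reasoning; see [the chat reasoning how-to](../how-to/use-chat-reasoning.md) for the documented approach.

```python
import re

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint, key, and deployment name: replace with your own values.
client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<api-key>"),
)

response = client.complete(
    model="DeepSeek-R1",  # must match the name of your model deployment
    messages=[UserMessage(content="How many languages are in the world?")],
)

content = response.choices[0].message.content
# DeepSeek-R1 typically emits its chain of thought wrapped in <think> tags
# ahead of the final answer, so split the two apart for display.
match = re.match(r"<think>(.*?)</think>(.*)", content, re.DOTALL)
if match:
    print("Reasoning:", match.group(1).strip())
    print("Answer:", match.group(2).strip())
else:
    print(content)
```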
### Meta
Meta Llama models and tools are a collection of pretrained and fine-tuned generative AI text and image reasoning models. Meta models range in scale to include:
@@ -140,10 +151,10 @@ Mistral AI offers two categories of models: premium models including Mistral Lar
| Model | Type | Tier | Capabilities |
| ------ | ---- | --- | ------------ |
|[Ministral-3B](https://ai.azure.com/explore/models/Ministral-3B/version/1/registry/azureml-mistral)| chat-completion | Global standard | - **Input:** text (131,072 tokens) <br /> - **Output:** text (4,096 tokens) <br /> - **Languages:** fr, de, es, it, and en <br /> - **Tool calling:** Yes <br /> - **Response formats:** Text, JSON |
|[Mistral-large](https://ai.azure.com/explore/models/Mistral-large/version/1/registry/azureml-mistral)<br /> (deprecated) | chat-completion | Global standard | - **Input:** text (32,768 tokens) <br /> - **Output:** text (4,096 tokens) <br /> - **Languages:** fr, de, es, it, and en <br /> - **Tool calling:** Yes <br /> - **Response formats:** Text, JSON |
|[Mistral-small](https://ai.azure.com/explore/models/Mistral-small/version/1/registry/azureml-mistral)| chat-completion | Global standard | - **Input:** text (32,768 tokens) <br /> - **Output:** text (4,096 tokens) <br /> - **Languages:** fr, de, es, it, and en <br /> - **Tool calling:** Yes <br /> - **Response formats:** Text, JSON |
|[Mistral-Nemo](https://ai.azure.com/explore/models/Mistral-Nemo/version/1/registry/azureml-mistral)| chat-completion | Global standard | - **Input:** text (131,072 tokens) <br /> - **Output:** text (4,096 tokens) <br /> - **Languages:** en, fr, de, es, it, zh, ja, ko, pt, nl, and pl <br /> - **Tool calling:** Yes <br /> - **Response formats:** Text, JSON |
|[Mistral-large-2407](https://ai.azure.com/explore/models/Mistral-large-2407/version/1/registry/azureml-mistral)<br /> (legacy) | chat-completion | Global standard | - **Input:** text (131,072 tokens) <br /> - **Output:** text (4,096 tokens) <br /> - **Languages:** en, fr, de, es, it, zh, ja, ko, pt, nl, and pl <br /> - **Tool calling:** Yes <br /> - **Response formats:** Text, JSON |
|[Mistral-Large-2411](https://ai.azure.com/explore/models/Mistral-Large-2411/version/2/registry/azureml-mistral)| chat-completion | Global standard | - **Input:** text (128,000 tokens) <br /> - **Output:** text (4,096 tokens) <br /> - **Languages:** en, fr, de, es, it, zh, ja, ko, pt, nl, and pl <br /> - **Tool calling:** Yes <br /> - **Response formats:** Text, JSON |
|[Codestral-2501](https://ai.azure.com/explore/models/Codestral-2501/version/2/registry/azureml-mistral)| chat-completion | Global standard | - **Input:** text (262,144 tokens) <br /> - **Output:** text (4,096 tokens) <br /> - **Languages:** en <br /> - **Tool calling:** No <br /> - **Response formats:** Text |
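
Because most of the Mistral chat models above report tool calling support, the following sketch (again with the `azure-ai-inference` Python package) shows the general shape of a tool-calling request. The endpoint, key, deployment name, and the `get_weather` function are invented for illustration only.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import (
    ChatCompletionsToolDefinition,
    FunctionDefinition,
    UserMessage,
)
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<api-key>"),
)

# Hypothetical tool definition, used only to illustrate the request shape.
weather_tool = ChatCompletionsToolDefinition(
    function=FunctionDefinition(
        name="get_weather",
        description="Return the current weather for a given city.",
        parameters={
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    )
)

response = client.complete(
    model="Mistral-Large-2411",  # any deployment from the table with tool calling
    messages=[UserMessage(content="What's the weather in Paris right now?")],
    tools=[weather_tool],
)

message = response.choices[0].message
# When the model decides to use the tool, the call arrives in tool_calls
# instead of plain text content.
print(message.tool_calls or message.content)
```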
articles/ai-foundry/model-inference/how-to/inference.md
@@ -48,6 +48,14 @@ For a chat model, you can create a request as follows:
If you specify a model name that doesn't match any given model deployment, you get an error that the model doesn't exist. You can control which models are available for users by creating model deployments as explained at [add and configure model deployments](create-model-deployments.md).
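
As a concrete illustration of that routing behavior, the sketch below (Python, `azure-ai-inference`; the endpoint and the `my-model` name are placeholders) sends a request and surfaces the error returned when the name doesn't match any deployment.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=AzureKeyCredential("<api-key>"),
)

try:
    response = client.complete(
        model="my-model",  # placeholder: must match one of your deployment names
        messages=[UserMessage(content="Hello!")],
    )
    print(response.choices[0].message.content)
except HttpResponseError as ex:
    # An unknown model name comes back as a service error saying the
    # model doesn't exist; create the deployment or fix the name.
    print(ex.status_code, ex.message)
```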
## Key-less authentication
Models deployed to Azure AI model inference in Azure AI Services support key-less authorization using Microsoft Entra ID. Key-less authorization enhances security, simplifies the user experience, reduces operational complexity, and provides robust compliance support for modern development. This makes it a strong choice for organizations adopting secure and scalable identity management solutions.
To use key-less authentication, [configure your resource and grant access to users](configure-entra-id.md) to perform inference. Once configured, you can authenticate as follows:
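
For example, with the `azure-ai-inference` and `azure-identity` Python packages, a key-less client might look like this sketch. The endpoint and the token scope shown are assumptions; use the values that apply to your resource.

```python
from azure.ai.inference import ChatCompletionsClient
from azure.identity import DefaultAzureCredential

# DefaultAzureCredential resolves your Microsoft Entra ID identity
# (Azure CLI login, managed identity, environment variables, and so on),
# so no API key is stored or passed around.
client = ChatCompletionsClient(
    endpoint="https://<resource>.services.ai.azure.com/models",
    credential=DefaultAzureCredential(),
    credential_scopes=["https://cognitiveservices.azure.com/.default"],
)
```

The same credential object works with the other clients in the package, so moving an existing application from key-based to key-less authentication is typically a small change in client construction.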
* Azure OpenAI Batch can't be used with the Azure AI model inference endpoint. You have to use the dedicated deployment URL as explained at [Batch API support in Azure OpenAI documentation](../../../ai-services/openai/how-to/batch.md#api-support).
@@ -56,4 +64,4 @@ If you specify a model name that doesn't match any given model deployment, you g