articles/ai-studio/how-to/deploy-models-mistral.md
Lines changed: 19 additions & 19 deletions
@@ -1,7 +1,7 @@
---
-title: How to use Mistral premium chat models with Azure AI Studio
+title: How to use Mistral premium chat models with Azure AI Foundry
titleSuffix: Azure AI Foundry
-description: Learn how to use Mistral premium chat models with Azure AI Studio.
+description: Learn how to use Mistral premium chat models with Azure AI Foundry.
ms.service: azure-ai-studio
manager: scottpolly
ms.topic: how-to
@@ -33,7 +33,7 @@ The Mistral premium chat models include the following models:

Mistral Large models are Mistral AI's most advanced Large Language Models (LLM). They can be used on any language-based task, thanks to their state-of-the-art reasoning, knowledge, and coding capabilities. Several Mistral Large model variants are available, and their attributes are as follows.

-Attributes of **Mistral Large** include:
+Attributes of **Mistral Large (2402)**, also abbreviated as Mistral Large, include:

* **Specialized in RAG**. Crucial information isn't lost in the middle of long context windows (up to 32-K tokens).
* **Strong in coding**. Code generation, review, and comments. Supports all mainstream coding languages.
@@ -99,15 +99,15 @@ The following models are available:

## Prerequisites

-To use Mistral premium chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral premium chat models with Azure AI Foundry, you need the following prerequisites:

### A model deployment

**Deployment to serverless APIs**

Mistral premium chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.

-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).

> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
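
The changed line above lists several deployment paths (portal, Python SDK, CLI, ARM) without showing one. As a rough illustration only, not taken from the article or this diff, a serverless API deployment with the Azure Machine Learning SDK for Python might look like the following sketch; the subscription, resource group, project, endpoint name, and model ID are all placeholder assumptions.

```python
# Sketch only: deploy a Mistral model as a serverless API endpoint with the
# Azure Machine Learning SDK for Python (azure-ai-ml). Every identifier below
# is a placeholder, not a value from the article.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ServerlessEndpoint
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<project-name>",
)

# The model ID is an assumed example of a registry path; check the model
# catalog for the exact ID of the Mistral model you want to deploy.
endpoint = ServerlessEndpoint(
    name="mistral-serverless-endpoint",
    model_id="azureml://registries/azureml-mistral/models/Mistral-Large-2411",
)

endpoint = ml_client.serverless_endpoints.begin_create_or_update(endpoint).result()
keys = ml_client.serverless_endpoints.get_keys(endpoint.name)
print(endpoint.scoring_uri)
```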
@@ -133,7 +133,7 @@ Read more about the [Azure AI inference package and reference](https://aka.ms/az
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.

> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral premium chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry with the same code and structure, including Mistral premium chat models.

### Create a client to consume the model
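
The hunk above ends at the "Create a client to consume the model" heading, whose body isn't shown in this diff. For orientation, here is a minimal sketch, assuming the azure-ai-inference Python package and key-based authentication, of what creating such a client and sending one chat request can look like; the environment variable names are assumptions, not values from the article.

```python
# Sketch only: create a chat completions client with the azure-ai-inference
# package and send a single request. Endpoint URL and key come from assumed
# environment variables.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    # e.g. https://<endpoint-name>.<region>.models.ai.azure.com
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many languages are in the world?"),
    ],
)

print(response.choices[0].message.content)
```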
@@ -523,7 +523,7 @@ The Mistral premium chat models include the following models:

Mistral Large models are Mistral AI's most advanced Large Language Models (LLM). They can be used on any language-based task, thanks to their state-of-the-art reasoning, knowledge, and coding capabilities. Several Mistral Large model variants are available, and their attributes are as follows.

-Attributes of **Mistral Large** include:
+Attributes of **Mistral Large (2402)**, also abbreviated as Mistral Large, include:

* **Specialized in RAG**. Crucial information isn't lost in the middle of long context windows (up to 32-K tokens).
* **Strong in coding**. Code generation, review, and comments. Supports all mainstream coding languages.
@@ -589,15 +589,15 @@ The following models are available:

## Prerequisites

-To use Mistral premium chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral premium chat models with Azure AI Foundry, you need the following prerequisites:

### A model deployment

**Deployment to serverless APIs**

Mistral premium chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.

-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).

> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.

> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral premium chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry with the same code and structure, including Mistral premium chat models.

### Create a client to consume the model
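
This hunk repeats the same tip for another language tab of the article. As a complement to the earlier non-streaming sketch, the following hedged Python sketch shows streaming chat completions with the same assumed azure-ai-inference client; it reuses the `client` object created in the sketch above and is not taken from the diff.

```python
# Sketch only: stream a chat completion with the client created in the
# earlier sketch. Each update carries an incremental delta of the answer.
from azure.ai.inference.models import SystemMessage, UserMessage

stream = client.complete(
    stream=True,
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Write a short poem about serverless APIs."),
    ],
)

for update in stream:
    # Some updates (for example, the first chunk) may carry no choices yet.
    if update.choices:
        print(update.choices[0].delta.content or "", end="")
print()
```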
@@ -1032,7 +1032,7 @@ The Mistral premium chat models include the following models:

Mistral Large models are Mistral AI's most advanced Large Language Models (LLM). They can be used on any language-based task, thanks to their state-of-the-art reasoning, knowledge, and coding capabilities. Several Mistral Large model variants are available, and their attributes are as follows.

-Attributes of **Mistral Large** include:
+Attributes of **Mistral Large (2402)**, also abbreviated as Mistral Large, include:

* **Specialized in RAG**. Crucial information isn't lost in the middle of long context windows (up to 32-K tokens).
* **Strong in coding**. Code generation, review, and comments. Supports all mainstream coding languages.
@@ -1098,15 +1098,15 @@ The following models are available:

## Prerequisites

-To use Mistral premium chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral premium chat models with Azure AI Foundry, you need the following prerequisites:

### A model deployment

**Deployment to serverless APIs**

Mistral premium chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.

-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).

> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1153,7 +1153,7 @@ using System.Reflection;
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.

> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral premium chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry with the same code and structure, including Mistral premium chat models.

### Create a client to consume the model
@@ -1563,7 +1563,7 @@ The Mistral premium chat models include the following models:

Mistral Large models are Mistral AI's most advanced Large Language Models (LLM). They can be used on any language-based task, thanks to their state-of-the-art reasoning, knowledge, and coding capabilities. Several Mistral Large model variants are available, and their attributes are as follows.

-Attributes of **Mistral Large** include:
+Attributes of **Mistral Large (2402)**, also abbreviated as Mistral Large, include:

* **Specialized in RAG**. Crucial information isn't lost in the middle of long context windows (up to 32-K tokens).
* **Strong in coding**. Code generation, review, and comments. Supports all mainstream coding languages.
@@ -1629,15 +1629,15 @@ The following models are available:

## Prerequisites

-To use Mistral premium chat models with Azure AI Studio, you need the following prerequisites:
+To use Mistral premium chat models with Azure AI Foundry, you need the following prerequisites:

### A model deployment

**Deployment to serverless APIs**

Mistral premium chat models can be deployed to serverless API endpoints with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need.

-Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Studio, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).
+Deployment to a serverless API endpoint doesn't require quota from your subscription. If your model isn't deployed already, use the Azure AI Foundry portal, Azure Machine Learning SDK for Python, the Azure CLI, or ARM templates to [deploy the model as a serverless API](deploy-models-serverless.md).

> [!div class="nextstepaction"]
> [Deploy the model to serverless API endpoints](deploy-models-serverless.md)
@@ -1654,7 +1654,7 @@ Models deployed with the [Azure AI model inference API](https://aka.ms/azureai/m
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with a chat completions model for chat.

> [!TIP]
-> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Studio with the same code and structure, including Mistral premium chat models.
+> The [Azure AI model inference API](https://aka.ms/azureai/modelinference) allows you to talk with most models deployed in Azure AI Foundry with the same code and structure, including Mistral premium chat models.

### Create a client to consume the model
@@ -2229,7 +2229,7 @@ For more examples of how to use Mistral models, see the following examples and t

Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.

-Mistral models deployed as a serverless API are offered by MistralAI through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+Mistral models deployed as a serverless API are offered by MistralAI through the Azure Marketplace and integrated with Azure AI Foundry for use. You can find the Azure Marketplace pricing when deploying the model.

Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
articles/ai-studio/includes/region-availability-maas.md
Lines changed: 2 additions & 2 deletions
@@ -60,8 +60,8 @@ Phi-3-Medium-4K-Instruct <br> Phi-3-Medium-128K-Instruct | Not applicable | E
Mistral Nemo | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
Ministral-3B | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
Mistral Small | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
-Mistral Large (2402) | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
-Mistral-Large (2407) | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |
+Mistral Large (2402) <br> Mistral-Large (2407) <br> Mistral-Large (2411) | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel | East US <br> East US 2 <br> North Central US <br> South Central US <br> Sweden Central <br> West US <br> West US 3 | Not available |