articles/ai-studio/concepts/deployments-overview.md (3 additions & 3 deletions)
@@ -38,9 +38,9 @@ Azure OpenAI allows you to get access to the latest OpenAI models with the enter
 The model catalog offers access to a large variety of models across different modalities. Certain models in the model catalog can be deployed as a service with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance organizations need.
-#### Deploy models with model as a service
+#### Deploy models with Model as a Service (MaaS)
-This deployment option doesn't require quota from your subscription. You're billed per token in a pay-as-you-go fashion. Learn how to deploy and consume [Llama 2 model family](../how-to/deploy-models-llama.md) with model as a service.
+This deployment option doesn't require quota from your subscription. You deploy as a Serverless API deployment and are billed per token in a pay-as-you-go fashion. Learn how to deploy and consume [Llama 2 model family](../how-to/deploy-models-llama.md) with model as a service.
 #### Deploy models with hosted managed infrastructure
@@ -50,7 +50,7 @@ You can also host open models in your own subscription with managed infrastructu
 The following table describes how you're billed for deploying and inferencing LLMs in Azure AI Studio. See [monitor costs for models offered throughout the Azure Marketplace](../how-to/costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace) to learn more about how to track costs.
-| Use case | Azure OpenAI models | Models deployed with pay-as-you-go | Models deployed to real-time endpoints|
+| Use case | Azure OpenAI models | Models deployed as Serverless APIs (pay-as-you-go)| Models deployed with managed compute|
 | --- | --- | --- | --- |
 | Deploying a model from the model catalog to your project | No, you aren't billed for deploying an Azure OpenAI model to your project. | Yes, you're billed per the infrastructure of the endpoint<sup>1</sup> | Yes, you're billed for the infrastructure hosting the model<sup>2</sup> |
 | Testing chat mode on Playground after deploying a model to your project | Yes, you're billed based on your token usage | Yes, you're billed based on your token usage | None. |
articles/ai-studio/how-to/deploy-models-cohere-command.md (4 additions & 4 deletions)
@@ -102,7 +102,7 @@ To create a deployment:
 1. Return to the Deployments page, select the deployment, and note the endpoint's **Target** URL and the Secret **Key**. For more information on using the APIs, see the [reference](#reference-for-cohere-models-deployed-as-a-service) section.
 1. You can always find the endpoint's details, URL, and access keys by navigating to your **Project overview** page. Then, from the left sidebar of your project, select **Components** > **Deployments**.
-To learn about billing for the Cohere models deployed as a serverless API with pay-as-you-go token-based billing, see [Cost and quota considerations for Cohere models deployed as a service](#cost-and-quota-considerations-for-models-deployed-as-a-service).
+To learn about billing for the Cohere models deployed as a serverless API with pay-as-you-go token-based billing, see [Cost and quota considerations for models deployed as a serverless API](#cost-and-quota-considerations-for-models-deployed-as-a-serverless-api).

-### Cost and quota considerations for models deployed as a service
+### Cost and quota considerations for models deployed as a serverless API
-Cohere models deployed as a service are offered by Cohere through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+Cohere models deployed as a serverless API with pay-as-you-go billing are offered by Cohere through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
 Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -678,7 +678,7 @@ Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok
 ## Content filtering
-Models deployed as a service with pay-as-you-go billing are protected by [Azure AI Content Safety](../../ai-services/content-safety/overview.md). With Azure AI content safety, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [content filtering here](../concepts/content-filtering.md).
+Models deployed as a serverless API with pay-as-you-go billing are protected by [Azure AI Content Safety](../../ai-services/content-safety/overview.md). With Azure AI content safety, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [content filtering here](../concepts/content-filtering.md).
 ### Cost and quota considerations for models deployed as a service
-Cohere models deployed as a service are offered by Cohere through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+Cohere models deployed as a serverless API with pay-as-you-go billing are offered by Cohere through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
 Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
@@ -292,7 +292,7 @@ Quota is managed per deployment. Each deployment has a rate limit of 200,000 tok
 ## Content filtering
-Models deployed as a service with pay-as-you-go billing are protected by [Azure AI Content Safety](../../ai-services/content-safety/overview.md). With Azure AI content safety, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [content filtering here](../concepts/content-filtering.md).
+Models deployed as a serverless API are protected by [Azure AI Content Safety](../../ai-services/content-safety/overview.md). With Azure AI content safety, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [content filtering here](../concepts/content-filtering.md).
-In this article, you learn about the Meta Llama models. You also learn how to use Azure AI Studio to deploy models from this set either as a service with pay-as you go billing or with hosted infrastructure in real-time endpoints.
+In this article, you learn about the Meta Llama models. You also learn how to use Azure AI Studio to deploy models from this set either to serverless APIs with pay-as-you-go billing or to managed compute.
 > [!IMPORTANT]
 > Read more about the announcement of Meta Llama 3 models available now on Azure AI Model Catalog: [Microsoft Tech Community Blog](https://aka.ms/Llama3Announcement) and from [Meta Announcement Blog](https://aka.ms/meta-llama3-announcement-blog).
 Meta Llama 3 models and tools are a collection of pretrained and fine-tuned generative text models ranging in scale from 8 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF), called Meta-Llama-3-8B-Instruct and Meta-Llama-3-70B-Instruct. See the following GitHub samples to explore integrations with [LangChain](https://aka.ms/meta-llama3-langchain-sample), [LiteLLM](https://aka.ms/meta-llama3-litellm-sample), [OpenAI](https://aka.ms/meta-llama3-openai-sample) and the [Azure API](https://aka.ms/meta-llama3-azure-api-sample).
-## Deploy Meta Llama models with pay-as-you-go
+## Deploy Meta Llama models as a serverless API
-Certain models in the model catalog can be deployed as a service with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription.
+Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription.
-Meta Llama 3 models are deployed as a service with pay-as-you-go through Microsoft Azure Marketplace, and they might add more terms of use and pricing.
+Meta Llama 3 models are deployed as a serverless API with pay-as-you-go billing through Microsoft Azure Marketplace, which might add more terms of use and pricing.
 ### Azure Marketplace model offerings
@@ -41,7 +41,7 @@ The following models are available in Azure Marketplace for Llama 3 when deploye
 # [Meta Llama 2](#tab/llama-two)
-The following models are available in Azure Marketplace for Llama 3 when deployed as a service with pay-as-you-go:
+The following models are available in Azure Marketplace for Llama 2 when deployed as a serverless API:
 * Meta Llama-2-7B (preview)
 * Meta Llama 2 7B-Chat (preview)
@@ -52,7 +52,7 @@ The following models are available in Azure Marketplace for Llama 3 when deploye
 ---
-If you need to deploy a different model, [deploy it to real-time endpoints](#deploy-meta-llama-models-to-real-time-endpoints) instead.
+If you need to deploy a different model, [deploy it to managed compute](#deploy-meta-llama-models-to-managed-compute) instead.
 ### Prerequisites
@@ -125,7 +125,7 @@ To create a deployment:
 Alternatively, you can initiate deployment by starting from your project in AI Studio. From the **Build** tab of your project, select **Deployments** > **+ Create**.
-1. On the model's **Details** page, select **Deploy** and then select **Pay-as-you-go**.
+1. On the model's **Details** page, select **Deploy** and then select **Serverless API with Azure AI Content Safety**.
 1. Select the project in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **Sweden Central** region.
 1. On the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
@@ -157,7 +157,7 @@ To create a deployment:
 Alternatively, you can initiate deployment by starting from your project in AI Studio. From the **Build** tab of your project, select **Deployments** > **+ Create**.
-1. On the model's **Details** page, select **Deploy** and then select **Pay-as-you-go**.
+1. On the model's **Details** page, select **Deploy** and then select **Serverless API with Azure AI Content Safety**.
 :::image type="content" source="../media/deploy-monitor/llama/deploy-pay-as-you-go.png" alt-text="A screenshot showing how to deploy a model with the pay-as-you-go option." lightbox="../media/deploy-monitor/llama/deploy-pay-as-you-go.png":::
@@ -193,7 +193,7 @@ To learn about billing for Llama models deployed with pay-as-you-go, see [Cost a
 Models deployed as a service can be consumed using either the chat or the completions API, depending on the type of model you deployed.
-1. On the **Build** page, select **Deployments**.
+1. From your **Project overview** page, go to the left sidebar and select **Components** > **Deployments**.
 1. Find and select the deployment you created.
@@ -213,7 +213,7 @@ Models deployed as a service can be consumed using either the chat or the comple
 Models deployed as a service can be consumed using either the chat or the completions API, depending on the type of model you deployed.
-1. On the **Build** page, select **Deployments**.
+1. From your **Project overview** page, go to the left sidebar and select **Components** > **Deployments**.
 1. Find and select the deployment you created.
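(Editor's note, outside the diff: for the consume steps above, a request to a serverless API deployment is a plain HTTPS call using the **Target** URL and **Key** noted from the deployment's details. The following is a minimal sketch; the endpoint URL, key, and payload fields are placeholder assumptions, not values from this PR — the deployment's **Consume** tab shows the exact schema for your model.)

```python
import json
import urllib.request

# Placeholder values -- copy the real Target URL and Key from your
# deployment's details page (Components > Deployments).
ENDPOINT_URL = "https://<deployment-name>.<region>.inference.ai.azure.com/v1/chat/completions"
API_KEY = "<your-secret-key>"

def build_chat_request(messages, max_tokens=256):
    """Build an HTTP request for a chat-style serverless API deployment.

    The body shape (messages/max_tokens) follows the common chat-completions
    schema; verify it against the samples on the deployment's Consume tab.
    """
    body = json.dumps({"messages": messages, "max_tokens": max_tokens})
    return urllib.request.Request(
        ENDPOINT_URL,
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Serverless API deployments authenticate with the deployment key.
            "Authorization": f"Bearer {API_KEY}",
        },
    )

req = build_chat_request([{"role": "user", "content": "Hello"}])
# With a real endpoint and key:
# print(json.load(urllib.request.urlopen(req)))
```

Because these deployments are billed per token, keeping `max_tokens` low while testing keeps the pay-as-you-go cost predictable.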
@@ -483,9 +483,9 @@ The following is an example response:
 }
 ```
-## Deploy Meta Llama models to real-time endpoints
+## Deploy Meta Llama models to managed compute
-Apart from deploying with the pay-as-you-go managed service, you can also deploy Meta Llama models to real-time endpoints in AI Studio. When deployed to real-time endpoints, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to real-time endpoints consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints.
+Apart from deploying with the pay-as-you-go managed service, you can also deploy Meta Llama models to managed compute in AI Studio. When deployed to managed compute, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to managed compute consume quota from your subscription. All the models in the Llama family can be deployed to managed compute.
 Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com).
@@ -520,9 +520,9 @@ Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time en
 1. Select the **Consume** tab of the deployment to obtain code samples that can be used to consume the deployed model in your application.
-### Consume Llama 2 models deployed to real-time endpoints
+### Consume Llama 2 models deployed to managed compute
-For reference about how to invoke Llama models deployed to real-time endpoints, see the model's card in the Azure AI Studio [model catalog](../how-to/model-catalog-overview.md). Each model's card has an overview page that includes a description of the model, samples for code-based inferencing, fine-tuning, and model evaluation.
+For reference about how to invoke Llama models deployed to managed compute, see the model's card in the Azure AI Studio [model catalog](../how-to/model-catalog-overview.md). Each model's card has an overview page that includes a description of the model, samples for code-based inferencing, fine-tuning, and model evaluation.
 ## Cost and quotas
@@ -538,13 +538,13 @@ For more information on how to track costs, see [monitor costs for models offere
 Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
-### Cost and quota considerations for Llama models deployed as real-time endpoints
+### Cost and quota considerations for Llama models deployed as managed compute
-For deployment and inferencing of Llama models with real-time endpoints, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase.
+For deployment and inferencing of Llama models with managed compute, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure AI Studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase.
 ## Content filtering
-Models deployed as a service with pay-as-you-go are protected by Azure AI Content Safety. When deployed to real-time endpoints, you can opt out of this capability. With Azure AI content safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](../concepts/content-filtering.md).
+Models deployed as a serverless API with pay-as-you-go are protected by Azure AI Content Safety. When deployed to managed compute, you can opt out of this capability. With Azure AI content safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](../concepts/content-filtering.md).
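(Editor's note, outside the diff: since serverless API deployments always have content filtering on, a client may want to distinguish a filter rejection from other request failures. The sketch below assumes a filtered call surfaces as an HTTP 400 whose JSON body carries `error.code == "content_filter"`; that error shape is an assumption, not something stated in this PR, so verify it against a real error response from your deployment.)

```python
import io
import json
import urllib.error

def classify_error(http_error: urllib.error.HTTPError) -> str:
    """Roughly categorize a failed inference call.

    Assumes a hypothetical error body of {"error": {"code": "..."}};
    adjust once you have observed a real filtered response.
    """
    try:
        detail = json.loads(http_error.read().decode("utf-8"))
    except ValueError:
        return "unknown"
    code = detail.get("error", {}).get("code", "")
    return "content_filtered" if code == "content_filter" else "request_error"

# Simulated 400 response, since triggering the filter needs a live endpoint.
fake_body = io.BytesIO(json.dumps({"error": {"code": "content_filter"}}).encode("utf-8"))
err = urllib.error.HTTPError("https://example.invalid", 400, "Bad Request", {}, fake_body)
print(classify_error(err))  # content_filtered
```

Routing on the category lets an application show a policy message for filtered prompts instead of a generic failure.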