# How to deploy Llama family of large language models with Azure Machine Learning studio
In this article, you learn about the Llama family of large language models (LLMs). You also learn how to use Azure Machine Learning studio to deploy models from this set either as a service with pay-as-you-go billing or with hosted infrastructure in real-time endpoints.
> [!IMPORTANT]
> Read more about the Llama 3 on Azure AI Model Catalog announcement from [Microsoft](https://aka.ms/Llama3Announcement) and from [Meta](https://aka.ms/meta-llama3-announcement-blog).
The Llama family of LLMs is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF), called Llama-3-chat. See the following GitHub samples to explore integrations with [LangChain](https://aka.ms/meta-llama3-langchain-sample), [LiteLLM](https://aka.ms/meta-llama3-litellm-sample), [OpenAI](https://aka.ms/meta-llama3-openai-sample), and the [Azure API](https://aka.ms/meta-llama3-azure-api-sample).

Certain models in the model catalog can be deployed as a service with pay-as-you-go billing, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
Llama models deployed as a service with pay-as-you-go are offered by Meta AI through Microsoft Azure Marketplace, and they might add more terms of use and pricing.
### Azure Marketplace model offerings
The following models are available in Azure Marketplace for Llama when deployed as a service with pay-as-you-go:
# [Llama 3](#tab/llama-three)

If you need to deploy a different model, [deploy it to real-time endpoints](#deploy-llama-models-to-real-time-endpoints) instead.
# [Llama 2](#tab/llama-two)
* Meta Llama-2-7B (preview)
* Meta Llama 2 7B-Chat (preview)
* Meta Llama-2-70B (preview)
* Meta Llama 2 70B-Chat (preview)
If you need to deploy a different model, [deploy it to real-time endpoints](#deploy-llama-models-to-real-time-endpoints) instead.
---
### Prerequisites
To create a deployment:
# [Llama 3](#tab/llama-three)
1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).
1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** region.
1. Choose the model you want to deploy from the [model catalog](https://ml.azure.com/model/catalog).
    Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Serverless endpoints** > **Create**.
1. On the model's overview page, select **Deploy** and then **Pay-as-you-go**.
1. On the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering (for example, Llama-3-70b) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**.
    > [!NOTE]
    > Subscribing a workspace to a particular Azure Marketplace offering (in this case, Llama-3-70b) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
1. Once you sign up the workspace for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ workspace don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent deployments. If this scenario applies to you, select **Continue to deploy**.
1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.
1. Select **Deploy**. Wait until the deployment is finished and you're redirected to the serverless endpoints page.
1. Select the endpoint to open its Details page.
1. Select the **Test** tab to start interacting with the model.
1. You can also take note of the **Target** URL and the **Secret Key** to call the deployment and generate completions.
1. You can always find the endpoint's details, URL, and access keys by navigating to **Workspace** > **Endpoints** > **Serverless endpoints**.
# [Llama 2](#tab/llama-two)
1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).
1. Select the workspace in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **West US 3** region.
1. Choose the model you want to deploy from the [model catalog](https://ml.azure.com/model/catalog).
1. You can also take note of the **Target** URL and the **Secret Key** to call the deployment and generate completions.
1. You can always find the endpoint's details, URL, and access keys by navigating to **Workspace** > **Endpoints** > **Serverless endpoints**.
---
To learn about billing for Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama models deployed as a service](#cost-and-quota-considerations-for-llama-models-deployed-as-a-service).
### Consume Llama models as a service
Models deployed as a service can be consumed using either the chat or the completions API, depending on the type of model you deployed.
# [Llama 3](#tab/llama-three)
1. In the **workspace**, select **Endpoints** > **Serverless endpoints**.
1. Find and select the deployment you created.
1. Copy the **Target** URL and the **Key** token values.
1. Make an API request based on the type of model you deployed.
    - For completions models, such as `Llama-3-8b`, use the [`<target_url>/v1/completions`](#completions-api) API.
    - For chat models, such as `Llama-3-8b-chat`, use the [`<target_url>/v1/chat/completions`](#chat-api) API.
For more information on using the APIs, see the [reference](#reference-for-llama-models-deployed-as-a-service) section.
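As a concrete illustration of these steps, the following Python sketch builds such a request using only the standard library. The endpoint URL and key are hypothetical placeholders, and the OpenAI-style message payload is an assumption based on the `/v1/chat/completions` route named above; consult the reference section for the exact schema.

```python
import json
import urllib.request

def build_chat_request(target_url, key, messages, max_tokens=256):
    """Build (but don't send) a POST request for a serverless chat deployment.

    target_url and key stand in for the Target URL and Key copied from the
    endpoint's details page. The payload shape is an assumption based on
    the OpenAI-style /v1/chat/completions route.
    """
    body = json.dumps({"messages": messages, "max_tokens": max_tokens})
    return urllib.request.Request(
        url=f"{target_url}/v1/chat/completions",
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {key}",  # auth header scheme may vary
        },
        method="POST",
    )

# Hypothetical values; replace with your endpoint's Target URL and Key.
req = build_chat_request(
    "https://my-endpoint.eastus2.inference.ml.azure.com",
    "<your-secret-key>",
    [{"role": "user", "content": "What is the capital of France?"}],
)
# with urllib.request.urlopen(req) as resp:  # uncomment to send the request
#     print(json.load(resp))
```

The same pattern applies to the completions route; only the path and the payload fields change.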
# [Llama 2](#tab/llama-two)
1. In the **workspace**, select **Endpoints** > **Serverless endpoints**.
1. Find and select the deployment you created.
1. Copy the **Target** URL and the **Key** token values.
    - For completions models, such as `Llama-2-7b`, use the [`<target_url>/v1/completions`](#completions-api) API.
    - For chat models, such as `Llama-2-7b-chat`, use the [`<target_url>/v1/chat/completions`](#chat-api) API.
For more information on using the APIs, see the [reference](#reference-for-llama-models-deployed-as-a-service) section.
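For the completions route, the call differs only in path and payload. A minimal Python sketch follows; the URL and key are hypothetical, and the field names are assumed from the OpenAI-style `/v1/completions` schema:

```python
import json
import urllib.request

# Hypothetical values; copy the real Target URL and Key from the
# endpoint's details page.
TARGET_URL = "https://my-endpoint.eastus2.inference.ml.azure.com"
KEY = "<your-key>"

# Field names are an assumption based on the OpenAI-style completions schema.
payload = {"prompt": "The capital of France is", "max_tokens": 32, "temperature": 0.7}

req = urllib.request.Request(
    url=f"{TARGET_URL}/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {KEY}"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:  # uncomment to call the deployment
#     print(json.load(resp))
```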
---
### Reference for Llama models deployed as a service
#### Completions API
## Deploy Llama models to real-time endpoints
Apart from deploying with the pay-as-you-go managed service, you can also deploy Llama 3 models to real-time endpoints in Azure Machine Learning studio. When deployed to real-time endpoints, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to real-time endpoints consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints.
### Create a new deployment
# [Llama 3](#tab/llama-three)
Follow these steps to deploy a model such as `Llama-3-8b-chat` to a real-time endpoint in [Azure Machine Learning studio](https://ml.azure.com).
1. Select the workspace in which you want to deploy the model.
1. Choose the model that you want to deploy from the studio's [model catalog](https://ml.azure.com/model/catalog).
    Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Real-time endpoints** > **Create**.
1. On the model's overview page, select **Deploy** and then **Real-time endpoint**.
1. On the **Deploy with Azure AI Content Safety (preview)** page, select **Skip Azure AI Content Safety** so that you can continue to deploy the model using the UI.
    > [!TIP]
    > In general, we recommend that you select **Enable Azure AI Content Safety (Recommended)** for deployment of the Llama model. This deployment option is currently only supported using the Python SDK and it happens in a notebook.
1. Select **Proceed**.
    > [!TIP]
    > If you don't have enough quota available in the selected project, you can use the option **I want to use shared quota and I acknowledge that this endpoint will be deleted in 168 hours**.
1. Select the **Virtual machine** and the **Instance count** that you want to assign to the deployment.
1. Select if you want to create this deployment as part of a new endpoint or an existing one. Endpoints can host multiple deployments while keeping resource configuration exclusive for each of them. Deployments under the same endpoint share the endpoint URI and its access keys.
1. Indicate if you want to enable **Inferencing data collection (preview)**.
1. Indicate if you want to enable **Package Model (preview)**.
1. Select **Deploy**. After a few moments, the endpoint's **Details** page opens up.
1. Wait for the endpoint creation and deployment to finish. This step can take a few minutes.
1. Select the endpoint's **Consume** page to obtain code samples that you can use to consume the deployed model in your application.
For more information on how to deploy models to real-time endpoints using the studio, see [Deploying foundation models to endpoints for inferencing](how-to-use-foundation-models.md#deploying-foundation-models-to-endpoints-for-inferencing).
# [Llama 2](#tab/llama-two)
Follow these steps to deploy a model such as `Llama-2-7b-chat` to a real-time endpoint in [Azure Machine Learning studio](https://ml.azure.com).
1. Select the workspace in which you want to deploy the model.
1. Select the **Virtual machine** and the **Instance count** that you want to assign to the deployment.
1. Select if you want to create this deployment as part of a new endpoint or an existing one. Endpoints can host multiple deployments while keeping resource configuration exclusive for each of them. Deployments under the same endpoint share the endpoint URI and its access keys.
1. Indicate if you want to enable **Inferencing data collection (preview)**.
1. Indicate if you want to enable **Package Model (preview)**.
1. Select **Deploy**. After a few moments, the endpoint's **Details** page opens up.
1. Wait for the endpoint creation and deployment to finish. This step can take a few minutes.
1. Select the endpoint's **Consume** page to obtain code samples that you can use to consume the deployed model in your application.
For more information on how to deploy models to real-time endpoints using the studio, see [Deploying foundation models to endpoints for inferencing](how-to-use-foundation-models.md#deploying-foundation-models-to-endpoints-for-inferencing).
---
### Consume Llama models deployed to real-time endpoints
For reference about how to invoke Llama 3 models deployed to real-time endpoints, see the model's card in Azure Machine Learning studio [model catalog](concept-model-catalog.md). Each model's card has an overview page that includes a description of the model, samples for code-based inferencing, fine-tuning, and model evaluation.
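The model card's own samples are authoritative, but the general REST pattern looks like the following Python sketch. The scoring URI and key below are hypothetical placeholders (copy the real values from the endpoint's **Consume** page), and the payload schema is an assumption modeled on the studio's text-generation samples; check the model card for the exact format.

```python
import json
import urllib.request

# Hypothetical values; find the real ones on the endpoint's Consume page.
SCORING_URI = "https://my-endpoint.eastus2.inference.ml.azure.com/score"
KEY = "<your-endpoint-key>"

# Assumed payload shape for a text-generation model; the model card in the
# model catalog documents the exact schema for each model.
payload = {
    "input_data": {
        "input_string": ["What are the benefits of regular exercise?"],
        "parameters": {"max_new_tokens": 128, "temperature": 0.7},
    }
}

req = urllib.request.Request(
    url=SCORING_URI,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {KEY}"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:  # uncomment to score the endpoint
#     print(json.load(resp))
```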
## Cost and quotas
### Cost and quota considerations for Llama models deployed as a service
Llama models deployed as a service are offered by Meta through Azure Marketplace and integrated with Azure Machine Learning studio for use. You can find Azure Marketplace pricing when deploying or fine-tuning models.
Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
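If a workload might approach the 1,000-requests-per-minute limit, a client-side throttle can smooth traffic before the service starts rejecting calls. The following is an illustrative sketch only, not an SDK feature:

```python
import time
from collections import deque

class MinuteRateLimiter:
    """Block before sending once a rolling 60-second window fills up.

    Illustrative only: each deployment allows 1,000 API requests per
    minute, so we pause until the oldest request ages out of the window.
    """

    def __init__(self, max_per_minute=1000):
        self.max_per_minute = max_per_minute
        self.sent = deque()  # monotonic timestamps of recent requests

    def acquire(self):
        now = time.monotonic()
        # Drop timestamps older than the 60-second window.
        while self.sent and now - self.sent[0] >= 60:
            self.sent.popleft()
        if len(self.sent) >= self.max_per_minute:
            # Window is full: wait until the oldest request expires.
            time.sleep(max(0.0, 60 - (now - self.sent[0])))
            self.sent.popleft()
        self.sent.append(time.monotonic())

limiter = MinuteRateLimiter(max_per_minute=1000)
# limiter.acquire()  # call before each API request to the deployment
```

Staying under the 200,000-tokens-per-minute limit would need a similar budget keyed on token counts rather than request counts.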
### Cost and quota considerations for Llama models deployed as real-time endpoints
For deployment and inferencing of Llama models with real-time endpoints, you consume virtual machine (VM) core quota that is assigned to your subscription on a per-region basis. When you sign up for Azure Machine Learning studio, you receive a default VM quota for several VM families available in the region. You can continue to create deployments until you reach your quota limit. Once you reach this limit, you can request a quota increase.
0 commit comments