`articles/ai-studio/how-to/deploy-models-llama.md` (+44 −44 lines)
@@ -1,7 +1,7 @@
---
-title: How to deploy Llama family of large language models with Azure AI Studio
+title: How to deploy Meta Llama models with Azure AI Studio
titleSuffix: Azure AI Studio
-description: Learn how to deploy Llama family of large language models with Azure AI Studio.
+description: Learn how to deploy Meta Llama models with Azure AI Studio.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
@@ -12,35 +12,35 @@ author: msakande
ms.custom: [references_regions]
---

-# How to deploy Llama family of large language models with Azure AI Studio
+# How to deploy Meta Llama models with Azure AI Studio

[!INCLUDE [Azure AI Studio preview](../includes/preview-ai-studio.md)]

-In this article, you learn about the Llama family of large language models (LLMs). You also learn how to use Azure AI Studio to deploy models from this set either as a service with pay-as you go billing or with hosted infrastructure in real-time endpoints.
+In this article, you learn about the Meta Llama models. You also learn how to use Azure AI Studio to deploy models from this set either as a service with pay-as-you-go billing or with hosted infrastructure in real-time endpoints.

> [!IMPORTANT]
-> Read more about the Llama 3 available now on Azure AI Model Catalog announcement from [Microsoft](https://aka.ms/Llama3Announcement) and from [Meta](https://aka.ms/meta-llama3-announcement-blog).
+> Read more about the announcement of Meta Llama 3 models available now on the Azure AI Model Catalog: [Microsoft Tech Community Blog](https://aka.ms/Llama3Announcement) and [Meta Announcement Blog](https://aka.ms/meta-llama3-announcement-blog).

-The Llama family of LLMs is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF), called Llama-3-chat. See the following GitHub samples to explore integrations with [LangChain](https://aka.ms/meta-llama3-langchain-sample), [LiteLLM](https://aka.ms/meta-llama3-litellm-sample), [OpenAI](https://aka.ms/meta-llama3-openai-sample) and the [Azure API](https://aka.ms/meta-llama3-azure-api-sample).
+Meta Llama 3 models and tools are a collection of pretrained and fine-tuned generative text models ranging in scale from 8 billion to 70 billion parameters. The model family also includes fine-tuned versions optimized for dialogue use cases with reinforcement learning from human feedback (RLHF), called Meta-Llama-3-8B-Instruct and Meta-Llama-3-70B-Instruct. See the following GitHub samples to explore integrations with [LangChain](https://aka.ms/meta-llama3-langchain-sample), [LiteLLM](https://aka.ms/meta-llama3-litellm-sample), [OpenAI](https://aka.ms/meta-llama3-openai-sample), and the [Azure API](https://aka.ms/meta-llama3-azure-api-sample).

-## Deploy Llama models with pay-as-you-go
+## Deploy Meta Llama models with pay-as-you-go

Certain models in the model catalog can be deployed as a service with pay-as-you-go, providing a way to consume them as an API without hosting them on your subscription, while keeping the enterprise security and compliance organizations need. This deployment option doesn't require quota from your subscription.

-Llama 3 models deployed as a service with pay-as-you-go are offered by Meta AI through Microsoft Azure Marketplace, and they might add more terms of use and pricing.
+Meta Llama 3 models are deployed as a service with pay-as-you-go through Microsoft Azure Marketplace, which might add more terms of use and pricing.

### Azure Marketplace model offerings

-# [Llama 3](#tab/llama-three)
+# [Meta Llama 3](#tab/llama-three)

The following models are available in Azure Marketplace for Llama 3 when deployed as a service with pay-as-you-go:
@@ -53,15 +53,15 @@ The following models are available in Azure Marketplace for Llama 3 when deploye

---

-If you need to deploy a different model, [deploy it to real-time endpoints](#deploy-llama-models-to-real-time-endpoints) instead.
+If you need to deploy a different model, [deploy it to real-time endpoints](#deploy-meta-llama-models-to-real-time-endpoints) instead.

### Prerequisites

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
- An [Azure AI hub resource](../how-to/create-azure-ai-resource.md).

> [!IMPORTANT]
-> For Llama family models, the pay-as-you-go model deployment offering is only available with AI hubs created in **East US 2** and **West US 3** regions.
+> For Meta Llama models, the pay-as-you-go model deployment offering is only available with AI hubs created in the **East US 2** and **West US 3** regions.

- An [Azure AI project](../how-to/create-projects.md) in Azure AI Studio.
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __owner__ or __contributor__ role for the Azure subscription. Alternatively, your account can be assigned a custom role that has the following permissions:
@@ -86,7 +86,7 @@ If you need to deploy a different model, [deploy it to real-time endpoints](#dep

### Create a new deployment

-# [Llama 3](#tab/llama-three)
+# [Meta Llama 3](#tab/llama-three)

To create a deployment:
@@ -99,10 +99,10 @@ To create a deployment:

1. Select the project in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** region.
1. On the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
-1. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering (for example, Llama-3-70b) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**.
+1. If this is your first time deploying the model in the project, you have to subscribe your project to the particular offering (for example, Meta-Llama-3-70B) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**.

> [!NOTE]
-> Subscribing a project to a particular Azure Marketplace offering (in this case, Llama-3-70b) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
+> Subscribing a project to a particular Azure Marketplace offering (in this case, Meta-Llama-3-70B) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).

1. Once you sign up the project for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ project don't require subscribing again. Therefore, you don't need to have the subscription-level permissions for subsequent deployments. If this scenario applies to you, select **Continue to deploy**.
@@ -116,9 +116,9 @@ To create a deployment:

1. You can always find the endpoint's details, URL, and access keys by navigating to the **Build** tab and selecting **Deployments** from the Components section.

-To learn about billing for Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama 3 models deployed as a service](#cost-and-quota-considerations-for-llama-models-deployed-as-a-service).
+To learn about billing for Meta Llama models deployed with pay-as-you-go, see [Cost and quota considerations for Llama 3 models deployed as a service](#cost-and-quota-considerations-for-llama-models-deployed-as-a-service).

-# [Llama 2](#tab/llama-two)
+# [Meta Llama 2](#tab/llama-two)

To create a deployment:
@@ -133,10 +133,10 @@ To create a deployment:

1. Select the project in which you want to deploy your models. To use the pay-as-you-go model deployment offering, your workspace must belong to the **East US 2** or **West US 3** region.
1. On the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
-1. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering (for example, Llama-3-70b) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**.
+1. If this is your first time deploying the model in the project, you have to subscribe your project to the particular offering (for example, Meta-Llama-2-70B) from Azure Marketplace. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**.

> [!NOTE]
-> Subscribing a project to a particular Azure Marketplace offering (in this case, Llama-3-70b) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
+> Subscribing a project to a particular Azure Marketplace offering (in this case, Meta-Llama-2-70B) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).

:::image type="content" source="../media/deploy-monitor/llama/deploy-marketplace-terms.png" alt-text="A screenshot showing the terms and conditions of a given model." lightbox="../media/deploy-monitor/llama/deploy-marketplace-terms.png":::
@@ -157,9 +157,9 @@ To learn about billing for Llama models deployed with pay-as-you-go, see [Cost a

---

-### Consume Llama models as a service
+### Consume Meta Llama models as a service

-# [Llama 3](#tab/llama-three)
+# [Meta Llama 3](#tab/llama-three)

Models deployed as a service can be consumed using either the chat or the completions API, depending on the type of model you deployed.
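As a minimal sketch of consuming a chat model deployed as a service, the request below assumes a hypothetical endpoint URL and key; copy the real values from your deployment's **Details** page (**Build** tab > **Deployments**) in Azure AI Studio.

```python
import json
import urllib.request

# Hypothetical values -- replace with the endpoint URL and key of your
# own pay-as-you-go deployment.
ENDPOINT_URL = "https://example-endpoint.eastus2.inference.ai.azure.com/v1/chat/completions"
API_KEY = "example-api-key"

def build_chat_request(messages, max_tokens=256):
    """Assemble a POST request for the /v1/chat/completions API."""
    body = json.dumps({"messages": messages, "max_tokens": max_tokens}).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        # The key is commonly sent as a bearer token; confirm the exact
        # authentication scheme on your deployment's Consume tab.
        "Authorization": f"Bearer {API_KEY}",
    }
    return urllib.request.Request(ENDPOINT_URL, data=body, headers=headers, method="POST")

request = build_chat_request([{"role": "user", "content": "What is Azure AI Studio?"}])
# Sending the request requires a live deployment:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["message"]["content"])
```

Only the standard library is used here; any HTTP client works the same way against the endpoint.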
@@ -173,12 +173,12 @@ Models deployed as a service can be consumed using either the chat or the comple

1. Make an API request based on the type of model you deployed.

-   - For completions models, such as `Llama-3-8b`, use the [`/v1/completions`](#completions-api) API.
-   - For chat models, such as `Llama-3-8b-chat`, use the [`/v1/chat/completions`](#chat-api) API.
+   - For completions models, such as `Meta-Llama-3-8B`, use the [`/v1/completions`](#completions-api) API.
+   - For chat models, such as `Meta-Llama-3-8B-Instruct`, use the [`/v1/chat/completions`](#chat-api) API.

-For more information on using the APIs, see the [reference](#reference-for-llama-models-deployed-as-a-service) section.
+For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-as-a-service) section.

-# [Llama 2](#tab/llama-two)
+# [Meta Llama 2](#tab/llama-two)

Models deployed as a service can be consumed using either the chat or the completions API, depending on the type of model you deployed.
@@ -193,14 +193,14 @@ Models deployed as a service can be consumed using either the chat or the comple

1. Make an API request based on the type of model you deployed.

-   - For completions models, such as `Llama-2-7b`, use the [`/v1/completions`](#completions-api) API.
-   - For chat models, such as `Llama-2-7b-chat`, use the [`/v1/chat/completions`](#chat-api) API.
+   - For completions models, such as `Meta-Llama-2-7B`, use the [`/v1/completions`](#completions-api) API.
+   - For chat models, such as `Meta-Llama-2-7B-Chat`, use the [`/v1/chat/completions`](#chat-api) API.

-For more information on using the APIs, see the [reference](#reference-for-llama-models-deployed-as-a-service) section.
+For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-as-a-service) section.

---

-### Reference for Llama models deployed as a service
+### Reference for Meta Llama models deployed as a service

#### Completions API
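A hedged sketch of the request shape for the completions API follows; the endpoint URL and key are hypothetical, and the parameter names (`prompt`, `max_tokens`, `temperature`) follow the common completions-API convention, so verify them against the reference tables in this section.

```python
import json
import urllib.request

# Hypothetical values -- use the endpoint URL and key of your own deployment.
ENDPOINT_URL = "https://example-endpoint.eastus2.inference.ai.azure.com/v1/completions"
API_KEY = "example-api-key"

# Assumed parameter names; check the API reference for the full list.
completion_payload = {
    "prompt": "Azure AI Studio is",
    "max_tokens": 64,
    "temperature": 0.7,
}
request = urllib.request.Request(
    ENDPOINT_URL,
    data=json.dumps(completion_payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"},
    method="POST",
)
# Requires a live deployment:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["text"])
```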
@@ -449,17 +449,17 @@ The following is an example response:
}
```

-## Deploy Llama models to real-time endpoints
+## Deploy Meta Llama models to real-time endpoints

-Apart from deploying with the pay-as-you-go managed service, you can also deploy Llama models to real-time endpoints in AI Studio. When deployed to real-time endpoints, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to real-time endpoints consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints.
+Apart from deploying with the pay-as-you-go managed service, you can also deploy Meta Llama models to real-time endpoints in AI Studio. When deployed to real-time endpoints, you can select all the details about the infrastructure running the model, including the virtual machines to use and the number of instances to handle the load you're expecting. Models deployed to real-time endpoints consume quota from your subscription. All the models in the Llama family can be deployed to real-time endpoints.

Users can create a new deployment in [Azure Studio](#create-a-new-deployment-in-azure-studio) and in the [Python SDK.](#create-a-new-deployment-in-python-sdk)

### Create a new deployment in Azure Studio

-# [Llama 3](#tab/llama-three)
+# [Meta Llama 3](#tab/llama-three)

-Follow these steps to deploy a model such as `Llama-3-8b-chat` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com).
+Follow these steps to deploy a model such as `Meta-Llama-3-8B-Instruct` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com).

1. Choose the model you want to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
@@ -470,7 +470,7 @@ Follow these steps to deploy a model such as `Llama-3-8b-chat` to a real-time en
1. On the **Deploy with Azure AI Content Safety (preview)** page, select **Skip Azure AI Content Safety** so that you can continue to deploy the model using the UI.

> [!TIP]
-> In general, we recommend that you select **Enable Azure AI Content Safety (Recommended)** for deployment of the Llama model. This deployment option is currently only supported using the Python SDK and it happens in a notebook.
+> In general, we recommend that you select **Enable Azure AI Content Safety (Recommended)** for deployment of the Meta Llama model. This deployment option is currently supported only through the Python SDK, and it happens in a notebook.

1. Select **Proceed**.
1. Select the project where you want to create a deployment.
@@ -490,9 +490,9 @@ Follow these steps to deploy a model such as `Llama-3-8b-chat` to a real-time en

1. Select the **Consume** tab of the deployment to obtain code samples that can be used to consume the deployed model in your application.

-# [Llama 2](#tab/llama-two)
+# [Meta Llama 2](#tab/llama-two)

-Follow these steps to deploy a model such as `Llama-3-7b-chat` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com).
+Follow these steps to deploy a model such as `Meta-Llama-2-7B-Chat` to a real-time endpoint in [Azure AI Studio](https://ai.azure.com).

1. Choose the model you want to deploy from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
@@ -529,9 +529,9 @@ Follow these steps to deploy a model such as `Llama-3-7b-chat` to a real-time en

### Create a new deployment in Python SDK

-# [Llama 3](#tab/llama-three)
+# [Meta Llama 3](#tab/llama-three)

-Follow these steps to deploy an open model such as `Llama-3-7b-chat` to a real-time endpoint, using the Azure AI Generative SDK.
+Follow these steps to deploy an open model such as `Meta-Llama-3-8B-Instruct` to a real-time endpoint, using the Azure AI Generative SDK.

1. Import required libraries
@@ -573,9 +573,9 @@ Follow these steps to deploy an open model such as `Llama-3-7b-chat` to a real-t
   client.deployments.create_or_update(deployment)
   ```

-# [Llama 2](#tab/llama-two)
+# [Meta Llama 2](#tab/llama-two)

-Follow these steps to deploy an open model such as`Llama-2-7b-chat` to a real-time endpoint, using the Azure AI Generative SDK.
+Follow these steps to deploy an open model such as `Meta-Llama-2-7B-Chat` to a real-time endpoint, using the Azure AI Generative SDK.

1. Import required libraries
@@ -620,7 +620,7 @@ Follow these steps to deploy an open model such as `Llama-2-7b-chat` to a real-t

---

-### Consume Llama 3 models deployed to real-time endpoints
+### Consume Meta Llama 3 models deployed to real-time endpoints

For reference about how to invoke Llama models deployed to real-time endpoints, see the model's card in the Azure AI Studio [model catalog](../how-to/model-catalog.md). Each model's card has an overview page that includes a description of the model, samples for code-based inferencing, fine-tuning, and model evaluation.
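The inferencing samples on the model's card are authoritative. As a rough sketch only, invoking a real-time endpoint might look like the following, where the scoring URI, key, and `input_data` payload schema are all assumptions to verify against the model card and the deployment's **Consume** tab.

```python
import json
import urllib.request

# Hypothetical values -- copy the real scoring URI and key from the
# deployment's Consume tab in Azure AI Studio.
SCORING_URI = "https://example-endpoint.eastus2.inference.ml.azure.com/score"
API_KEY = "example-api-key"

# Assumed payload shape for a chat-tuned model; the exact schema is shown
# in the inferencing samples on the model's card.
payload = {
    "input_data": {
        "input_string": [{"role": "user", "content": "What is Azure AI Studio?"}],
        "parameters": {"max_new_tokens": 128, "temperature": 0.7},
    }
}
request = urllib.request.Request(
    SCORING_URI,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"},
    method="POST",
)
# Requires a live endpoint:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))
```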
@@ -649,5 +649,5 @@ Models deployed as a service with pay-as-you-go are protected by Azure AI Conten
## Next steps

- [What is Azure AI Studio?](../what-is-ai-studio.md)
-- [Fine-tune a Llama 2 model in Azure AI Studio](fine-tune-model-llama.md)
+- [Fine-tune a Meta Llama 2 model in Azure AI Studio](fine-tune-model-llama.md)