
Commit 3bded4a

Authored by Jill Grant
Merge pull request #963 from kvijaykannan/MaaPFTAIStudio
This is the 1st version of the MaaP FT on AI Studio doc.
2 parents 996d6b3 + 31c4fbc commit 3bded4a

File tree: 6 files changed (+97, -4 lines changed)


articles/ai-studio/concepts/fine-tuning-overview.md

Lines changed: 9 additions & 2 deletions
@@ -27,7 +27,7 @@ Consider fine-tuning GenAI models to:
 - Save time and resources with faster and more precise results
 - Get more relevant and context-aware outcomes as models are fine-tuned for specific use cases
 
-[Azure AI Foundry](https://ai.azure.com) offers several models across model providers enabling you to get access to the latest and greatest in the market. You can discover supported models for fine-tuning through our model catalog by using the **Fine-tuning tasks** filter and clicking into the model card to learn detailed information about each model. Specific models may be subjected to regional constraints, [view this list for more details](#supported-models-for-fine-tuning).
+[Azure AI Foundry](https://ai.azure.com) offers several models across model providers enabling you to get access to the latest and greatest in the market. You can discover supported models for fine-tuning through our model catalog by using the **Fine-tuning tasks** filter and selecting the model card to learn detailed information about each model. Specific models may be subjected to regional constraints, [view this list for more details](#supported-models-for-fine-tuning).
 
 :::image type="content" source="../media/concepts/model-catalog-fine-tuning.png" alt-text="Screenshot of Azure AI Foundry model catalog and filtering by Fine-tuning tasks." lightbox="../media/concepts/model-catalog-fine-tuning.png":::
 

@@ -78,7 +78,12 @@ It's important to call out that fine-tuning is heavily dependent on the quality
 
 ## Supported models for fine-tuning
 
-Now that you know when to use fine-tuning for your use case, you can go to Azure AI Foundry to find models available to fine-tune. Fine-tuning is available in specific Azure regions for some models. To fine-tune such models, a user must have a hub/project in the region where the model is available for fine-tuning. See [Region availability for models in serverless API endpoints | Azure AI Foundry](../how-to/deploy-models-serverless-availability.md) for detailed information.
+Now that you know when to use fine-tuning for your use case, you can go to Azure AI Foundry to find models available to fine-tune.
+For some models in the model catalog, fine-tuning is available by using a serverless API, or a managed compute (preview), or both.
+
+Fine-tuning is available in specific Azure regions for some models that are deployed via serverless APIs. To fine-tune such models, a user must have a hub/project in the region where the model is available for fine-tuning. See [Region availability for models in serverless API endpoints](../how-to/deploy-models-serverless-availability.md) for detailed information.
+
+For more information on fine-tuning using a managed compute (preview), see [Fine-tune models using managed compute (preview)](../how-to/fine-tune-managed-compute.md).
 
 For details about Azure OpenAI models that are available for fine-tuning, see the [Azure OpenAI Service models documentation](../../ai-services/openai/concepts/models.md#fine-tuning-models) or the [Azure OpenAI models table](#fine-tuning-azure-openai-models) later in this guide.
 

@@ -91,7 +96,9 @@ For the Azure OpenAI Service models that you can fine tune, supported regions f
 
 ## Related content
 
+- [Fine-tune models using managed compute (preview)](../how-to/fine-tune-managed-compute.md)
 - [Fine-tune an Azure OpenAI model in Azure AI Foundry portal](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)
 - [Fine-tune a Llama 2 model in Azure AI Foundry portal](../how-to/fine-tune-model-llama.md)
 - [Fine-tune a Phi-3 model in Azure AI Foundry portal](../how-to/fine-tune-phi-3.md)
 - [Deploy Phi-3 family of small language models with Azure AI Foundry](../how-to/deploy-models-phi-3.md)
+
articles/ai-studio/how-to/fine-tune-managed-compute.md (new file)

Lines changed: 80 additions & 0 deletions

---
title: Fine-tune models using a managed compute with Azure AI Foundry portal (preview)
titleSuffix: Azure AI Foundry
description: Learn how to fine-tune models using a managed compute with Azure AI Foundry.
manager: scottpolly
ms.service: azure-ai-studio
ms.topic: how-to
ms.date: 11/22/2024
ms.reviewer: vkann
reviewer: kvijaykannan
ms.author: mopeakande
author: msakande
ms.custom: references_regions

#customer intent: As a data scientist using a managed compute, I want to learn how to fine-tune models to improve model performance for specific tasks.
---

# Fine-tune models using managed compute (preview)

[!INCLUDE [feature-preview](../includes/feature-preview.md)]

This article explains how to use a managed compute to fine-tune a foundation model in the Azure AI Foundry portal. Fine-tuning adapts a pretrained model to a new, related task or domain. When you use a managed compute for fine-tuning, you use your own computational resources, and you can adjust training parameters such as learning rate, batch size, and number of training epochs to optimize the model's performance for a specific task.

Fine-tuning a pretrained model for a related task is more efficient than building a new model, because fine-tuning builds on the pretrained model's existing knowledge and reduces the time and data needed for training.

To improve model performance, consider fine-tuning a foundation model with your training data. You can easily fine-tune foundation models by using the fine-tune settings in the Azure AI Foundry portal.

## Prerequisites

- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.

- An [Azure AI Foundry project](create-projects.md).

- Azure role-based access control (Azure RBAC) is used to grant access to operations in the Azure AI Foundry portal. To perform the steps in this article, your user account must be assigned the __Owner__ or __Contributor__ role for the Azure subscription. For more information on permissions, see [Role-based access control in Azure AI Foundry portal](../concepts/rbac-ai-studio.md).

## Fine-tune a foundation model using managed compute

1. Sign in to [Azure AI Foundry](https://ai.azure.com).

1. If you're not already in your project, select it.

1. Select **Fine-tuning** from the left navigation pane.

1. Select **Fine-tune model** and add the model that you want to fine-tune. This article uses _Phi-3-mini-4k-instruct_ for illustration.

1. Select **Next** to see the available fine-tune options. Some foundation models support only the __Managed compute__ option.

1. Alternatively, you could select **Model catalog** from the left sidebar of your project, find the model card of the foundation model that you want to fine-tune, and then select __Fine-tune__ on the model card to see the available fine-tune options.

    :::image type="content" source="../media/how-to/fine-tune-managed-compute/fine-tune-options.png" alt-text="Screenshot showing fine-tuning options for a foundation model in Azure AI Foundry." lightbox="../media/how-to/fine-tune-managed-compute/fine-tune-options.png":::

1. Select __Managed compute__ to use your own compute resources. This action opens the "Basic settings" page of a window for specifying the fine-tuning settings.

### Configure fine-tune settings

In this section, you go through the steps to configure fine-tuning for your model by using a managed compute.

1. Provide a name for the fine-tuned model on the "Basic settings" page, and select **Next** to go to the "Compute" page.

1. Select the Azure Machine Learning compute cluster to use for fine-tuning the model. Fine-tuning runs on GPU compute. Ensure that you have sufficient compute quota for the compute SKUs you plan to use. (A cluster provisioning sketch follows this file's listing.)

    :::image type="content" source="../media/how-to/fine-tune-managed-compute/fine-tune-compute.png" alt-text="Screenshot showing settings for the compute to use for fine-tuning." lightbox="../media/how-to/fine-tune-managed-compute/fine-tune-compute.png":::

1. Select **Next** to go to the "Training data" page. On this page, the "Task type" is preselected as **Chat completion**.

1. Provide the training data to use to fine-tune your model. You can either upload a local file (in JSONL, CSV, or TSV format) or select an existing registered dataset from your project. (A minimal JSONL example follows this list.)

1. Select **Next** to go to the "Validation data" page. Keep the **Automatic split of training data** selection to reserve an automatic split of the training data for validation. Alternatively, you could provide a different validation dataset by uploading a local file (in JSONL, CSV, or TSV format) or selecting an existing registered dataset from your project.

1. Select **Next** to go to the "Task parameters" page. Tuning hyperparameters is essential for optimizing large language models (LLMs) in real-world applications, because it improves performance and resource efficiency. You can keep the default settings or customize parameters such as the number of epochs or the learning rate.

1. Select **Next** to go to the "Review" page and check that all the settings look good.

1. Select **Submit** to submit your fine-tuning job. When the job completes, you can view evaluation metrics for the fine-tuned model. You can then deploy this model to an endpoint for inferencing. (A sample endpoint call also follows this file's listing.)
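
The training data and validation data steps accept JSONL uploads for the chat-completion task type. As a minimal sketch (not taken from the article), the following Python snippet writes a training file in that shape; the file name, system prompt, and example conversations are placeholders, and the exact fields a given model expects should be verified against the data format guidance for the model you're fine-tuning.

```python
# Minimal sketch: write chat-completion training examples as JSONL.
# All content below is illustrative -- replace it with your own data, and
# confirm the expected schema for the model you're fine-tuning.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Contoso products."},
            {"role": "user", "content": "How do I reset my device?"},
            {"role": "assistant", "content": "Hold the power button for 10 seconds, then release it."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Contoso products."},
            {"role": "user", "content": "Where can I find my serial number?"},
            {"role": "assistant", "content": "It's printed on the label on the back of the device."},
        ]
    },
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        # JSONL: exactly one JSON object per line.
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```

The same structure can be reused for a separate validation file if you don't keep the automatic split.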

## Related content

- [Fine-tune models with Azure AI Foundry](../concepts/fine-tuning-overview.md)
- [How to use Phi-3 family chat models](deploy-models-phi-3.md)
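
The compute step above assumes an Azure Machine Learning compute cluster with GPU capacity already exists in the workspace that backs your project. As a rough sketch only, the following snippet provisions a small GPU cluster with the `azure-ai-ml` Python SDK; the subscription, resource group, workspace, cluster name, and VM size are placeholder assumptions to replace with values for which you have quota.

```python
# Rough sketch: create a small GPU compute cluster with the azure-ai-ml SDK.
# Every identifier below is a placeholder -- substitute your own values and a
# GPU SKU that your subscription has quota for in the target region.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_OR_PROJECT_NAME>",
)

gpu_cluster = AmlCompute(
    name="gpu-finetune-cluster",
    size="Standard_NC24ads_A100_v4",   # example GPU SKU; pick one you have quota for
    min_instances=0,                   # scale to zero when idle to limit cost
    max_instances=1,
    idle_time_before_scale_down=1800,  # seconds before idle nodes scale down
)

# Compute creation is a long-running operation; wait for it to finish.
ml_client.compute.begin_create_or_update(gpu_cluster).result()
```

Once created, the cluster should appear among the selectable compute clusters on the wizard's Compute page.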

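After the last step deploys the fine-tuned model to an endpoint, you can call it over HTTPS. The snippet below is only a starting point: it assumes a chat-completions style JSON payload and key-based authorization, but the actual scoring URL, headers, and request schema depend on the deployment, so copy them from the endpoint's consume details in the portal instead of relying on these placeholders.

```python
# Rough sketch of calling a deployed fine-tuned model endpoint.
# ENDPOINT_URL, API_KEY, and the payload shape are placeholders/assumptions --
# use the exact values and request format shown for your endpoint in the portal.
import requests

ENDPOINT_URL = "https://<your-endpoint>.<region>.inference.ml.azure.com/score"  # placeholder
API_KEY = "<your-endpoint-key>"  # placeholder

payload = {
    "messages": [
        {"role": "system", "content": "You are a support assistant for Contoso products."},
        {"role": "user", "content": "How do I reset my device?"},
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

response = requests.post(
    ENDPOINT_URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json())
```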
articles/ai-studio/how-to/model-catalog-overview.md

Lines changed: 1 addition & 1 deletion
@@ -149,7 +149,7 @@ Models from Microsoft are billed via Azure meters as First Party Consumption Ser
 
 ### Fine-tuning models
 
-Certain models support also serverless fine-tuning. For these models, you can take advantage of hosted fine-tuning with pay-as-you-go billing to tailor the models by using data that you provide. For more information, see the [fine-tuning overview](../concepts/fine-tuning-overview.md).
+Certain models also support fine-tuning. For these models, you can take advantage of managed compute (preview) or serverless API fine-tuning to tailor the models by using data that you provide. For more information, see the [fine-tuning overview](../concepts/fine-tuning-overview.md).
 
 ### RAG with models deployed as serverless APIs

2 binary image files added (106 KB and 48.7 KB); image content not shown.

articles/ai-studio/toc.yml

Lines changed: 7 additions & 1 deletion
@@ -98,7 +98,13 @@ items:
   - name: How to use model benchmarking
     href: how-to/benchmark-model-in-catalog.md
   - name: Fine-tune models
-    href: concepts/fine-tuning-overview.md
+    items:
+    - name: Fine-tuning overview
+      href: concepts/fine-tuning-overview.md
+    - name: Fine-tune with user-managed compute
+      href: how-to/fine-tune-managed-compute.md
+    - name: Fine-tune Azure OpenAI models
+      href: ../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context
   - name: Distillation
     href: concepts/concept-model-distillation.md
   - name: Azure OpenAI models
