articles/ai-services/openai/concepts/models.md (1 addition, 17 deletions)
@@ -494,23 +494,7 @@ These models can only be used with Embedding API requests.
 ## Fine-tuning models
 
-> [!NOTE]
-> The supported regions might vary if you use Azure OpenAI models in an AI Studio project versus outside a project.
->
-> `gpt-35-turbo` - fine-tuning of this model is limited to a subset of regions, and is not available in every region the base model is available.
-
-| Model ID | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
-| --- | --- | :---: | :---: |
-|`babbage-002`| North Central US <br> Sweden Central <br> Switzerland West | 16,384 | Sep 2021 |
-|`davinci-002`| North Central US <br> Sweden Central <br> Switzerland West | 16,384 | Sep 2021 |
-|`gpt-35-turbo` (0613) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 4,096 | Sep 2021 |
-|`gpt-35-turbo` (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | Input: 16,385<br> Output: 4,096 | Sep 2021|
-|`gpt-35-turbo` (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 16,385 | Sep 2021 |
-|`gpt-4` (0613) <sup>**1**</sup> | North Central US <br> Sweden Central | 8192 | Sep 2021 |
-|`gpt-4o-mini` <sup>**1**</sup> (2024-07-18) | North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
-|`gpt-4o` <sup>**1**</sup> (2024-08-06) | East US2 <br> North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
-
-**<sup>1</sup>** GPT-4 is currently in public preview.

+description: Describes the models that support fine-tuning and the regions where fine-tuning is available.
+author: mrbullwinkle
+ms.author: mbullwin
+ms.service: azure-ai-openai
+ms.topic: include
+ms.date: 10/31/2024
+manager: nitinme
+---
+
+> [!NOTE]
+> `gpt-35-turbo` - Fine-tuning of this model is limited to a subset of regions, and isn't available in every region the base model is available.
+>
+> The supported regions for fine-tuning might vary if you use Azure OpenAI models in an AI Studio project versus outside a project.
+
+| Model ID | Fine-tuning regions | Max request (tokens) | Training Data (up to) |
+| --- | --- | :---: | :---: |
+|`babbage-002`| North Central US <br> Sweden Central <br> Switzerland West | 16,384 | Sep 2021 |
+|`davinci-002`| North Central US <br> Sweden Central <br> Switzerland West | 16,384 | Sep 2021 |
+|`gpt-35-turbo` (0613) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 4,096 | Sep 2021 |
+|`gpt-35-turbo` (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | Input: 16,385<br> Output: 4,096 | Sep 2021|
+|`gpt-35-turbo` (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 16,385 | Sep 2021 |
+|`gpt-4` (0613) <sup>**1**</sup> | North Central US <br> Sweden Central | 8192 | Sep 2021 |
+|`gpt-4o-mini` <sup>**1**</sup> (2024-07-18) | North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
+|`gpt-4o` <sup>**1**</sup> (2024-08-06) | East US2 <br> North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
+
+**<sup>1</sup>** GPT-4 is currently in public preview.
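
As a rough illustration of how the models in the table above are used, the sketch below starts a fine-tuning job with the `openai` Python package (v1.x). The environment variable names, API version string, training file name, and the exact model identifier are assumptions to adapt to your own resource, which must sit in one of the listed fine-tuning regions.

```python
import os

from openai import AzureOpenAI  # openai Python package, v1.x

# The Azure OpenAI resource must be in one of the fine-tuning regions listed above.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed environment variable names
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-08-01-preview",  # assumed; use a version supported by your resource
)

# Upload training data in JSONL chat format.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),  # hypothetical file name
    purpose="fine-tune",
)

# Start a fine-tuning job against one of the models in the table, for example gpt-35-turbo (0125).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo-0125",  # assumed identifier format; confirm in the models documentation
)

print(job.id, job.status)
```

After the job is created, it can be polled with `client.fine_tuning.jobs.retrieve(job.id)` until its status reaches a terminal state.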
articles/ai-studio/concepts/fine-tuning-overview.md (11 additions, 5 deletions)
@@ -21,7 +21,7 @@ Fine-tuning retrains an existing large language model (LLM) by using example dat
 This article can help you decide whether or not fine-tuning is the right solution for your use case. This article also describes how [Azure AI Studio](https://ai.azure.com) can support your fine-tuning needs.
 
-In this article, fine-tuning refers to *supervised fine-tuning*, not continuous pretraining or reinforcement learning through human feedback (RLHF). Supervised fine-tuning is the process of retraining pretrained models on specific datasets. The purpose is typically to improve model performance on specific tasks or to introduce information that wasn't well represented when you originally trained the base model.
+In this article, fine-tuning refers to *supervised fine-tuning*, not to continuous pretraining or reinforcement learning through human feedback (RLHF). Supervised fine-tuning is the process of retraining pretrained models on specific datasets. The purpose is typically to improve model performance on specific tasks or to introduce information that wasn't well represented when you originally trained the base model.
 
 ## Getting started with fine-tuning
@@ -80,7 +80,7 @@ You might not be ready for fine-tuning if:
 Even with a great use case, fine-tuning is only as good as the quality of the data that you can provide. You need to be willing to invest the time and effort to make fine-tuning work. Different models require different data volumes, but you often need to be able to provide fairly large quantities of high-quality curated data.
 
-Another important point is that even with high-quality data, if your data isn't in the necessary format for fine-tuning, you'll need to commit engineering resources for the formatting.
+Another important point is that even with high-quality data, if your data isn't in the necessary format for fine-tuning, you need to commit engineering resources for the formatting.
 
 You might be ready for fine-tuning if:
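
To make the data-format point in the hunk above concrete, here is a minimal, non-authoritative sketch of converting raw question-and-answer pairs into the chat-style JSONL layout commonly used for supervised fine-tuning. The raw pairs, system message, and output file name are all hypothetical.

```python
import json

# Hypothetical raw data: (question, ideal answer) pairs collected for the target task.
raw_pairs = [
    ("Which regions support fine-tuning gpt-4o-mini?", "North Central US and Sweden Central."),
    ("What is supervised fine-tuning?", "Retraining a pretrained model on a task-specific dataset."),
]

# Write one JSON object per line, each holding a list of role/content chat messages.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for question, answer in raw_pairs:
        example = {
            "messages": [
                {"role": "system", "content": "You are a concise assistant."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(example) + "\n")
```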
@@ -90,10 +90,10 @@ You might be ready for fine-tuning if:
 You might not be ready for fine-tuning if:
 
-- You haven't identified a dataset yet.
+- An appropriate dataset hasn't been identified.
 - The dataset format doesn't match the model that you want to fine-tune.
 
-### How will you measure the quality of your fine-tuned model?
+### How can you measure the quality of your fine-tuned model?
 
 There isn't a single right answer to this question, but you should have clearly defined goals for what success with fine-tuning looks like. Ideally, this effort shouldn't just be qualitative. It should include quantitative measures of success, like using a holdout set of data for validation, in addition to user acceptance testing or A/B testing the fine-tuned model against a base model.
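
For the quantitative side of measuring fine-tuned model quality, the sketch below scores a base deployment and a fine-tuned deployment against a holdout JSONL file with a simple exact-match rule. The deployment names, file name, environment variable names, and scoring rule are assumptions that stand in for whatever evaluation criteria fit your task.

```python
import json
import os

from openai import AzureOpenAI  # openai Python package, v1.x

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed environment variable names
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-08-01-preview",  # assumed; use a version supported by your resource
)

def score_deployment(deployment_name: str, holdout_path: str) -> float:
    """Return the fraction of holdout examples the deployment answers with an exact match."""
    examples = [json.loads(line) for line in open(holdout_path, encoding="utf-8")]
    correct = 0
    for example in examples:
        # Treat the final assistant message as the reference answer and the rest as the prompt.
        *prompt, reference = example["messages"]
        response = client.chat.completions.create(model=deployment_name, messages=prompt)
        answer = (response.choices[0].message.content or "").strip()
        if answer == reference["content"].strip():
            correct += 1
    return correct / len(examples)

# Hypothetical deployment names for the base model and the fine-tuned model.
print("base:      ", score_deployment("gpt-35-turbo-base", "holdout.jsonl"))
print("fine-tuned:", score_deployment("gpt-35-turbo-ft", "holdout.jsonl"))
```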
@@ -103,11 +103,17 @@ Now that you know when to use fine-tuning for your use case, you can go to Azure
 | Model family | Model ID | Fine-tuning regions |
 | --- | --- | --- |
-|[Azure OpenAI models](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)| Azure OpenAI Service models that you can fine-tune include among others `gpt-4` and `gpt-4o-mini`.<br/><br/>For details about Azure OpenAI models that are available for fine-tuning, see the [Azure OpenAI Service models documentation](../../ai-services/openai/concepts/models.md#fine-tuning-models). | Azure OpenAI Service models that you can fine-tune include among others North Central US and Sweden Central.<br/><br/>The supported regions might vary if you use Azure OpenAI models in an AI Studio project versus outside a project.<br/><br/>For details about fine-tuning regions, see the [Azure OpenAI Service models documentation](../../ai-services/openai/concepts/models.md#fine-tuning-models). |
+|[Azure OpenAI models](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)| Azure OpenAI Service models that you can fine-tune include, among others, `gpt-4` and `gpt-4o-mini`.<br/><br/>For details about Azure OpenAI models that are available for fine-tuning, see the [Azure OpenAI Service models documentation](../../ai-services/openai/concepts/models.md#fine-tuning-models) or the [Azure OpenAI models table](#fine-tuning-azure-openai-models) later in this guide. | Fine-tuning regions for Azure OpenAI Service models include, among others, North Central US and Sweden Central.<br/><br/>The supported regions might vary if you use Azure OpenAI models in an AI Studio project versus outside a project.<br/><br/>For details about fine-tuning regions, see the [Azure OpenAI Service models documentation](../../ai-services/openai/concepts/models.md#fine-tuning-models). |
 |[Phi-3 family models](../how-to/fine-tune-phi-3.md)|`Phi-3-mini-4k-instruct`<br/>`Phi-3-mini-128k-instruct`<br/>`Phi-3-medium-4k-instruct`<br/>`Phi-3-medium-128k-instruct`| East US2 |
 |[Meta Llama 2 family models](../how-to/fine-tune-model-llama.md)|`Meta-Llama-2-70b`<br/>`Meta-Llama-2-7b`<br/>`Meta-Llama-2-13b`| West US3 |
 |[Meta Llama 3.1 family models](../how-to/fine-tune-model-llama.md)|`Meta-Llama-3.1-70b-Instruct`<br/>`Meta-Llama-3.1-8b-Instruct`| West US3 |
 
+This table provides more details about the Azure OpenAI Service models that support fine-tuning and the regions where fine-tuning is available.