
Commit f94c860

Merge pull request #3548 from voutilad/ft-remove-gpt4
Remove turbo-0613 and gpt4-0613.
2 parents: 2c4f430 + 3e6b3df

File tree

6 files changed (+22 lines, −42 lines)


articles/ai-services/openai/how-to/fine-tuning-deploy.md

Lines changed: 4 additions & 4 deletions
@@ -56,7 +56,7 @@ deploy_data = {
     "properties": {
         "model": {
             "format": "OpenAI",
-            "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83
+            "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83
             "version": "1"
         }
     }
@@ -82,7 +82,7 @@ print(r.json())
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |

 ### Cross region deployment

@@ -122,7 +122,7 @@ deploy_data = {
     "properties": {
         "model": {
             "format": "OpenAI",
-            "name": <"FINE_TUNED_MODEL_NAME">, # This value will look like gpt-35-turbo-0613.ft-0ab3f80e4f2242929258fff45b56a9ce
+            "name": <"FINE_TUNED_MODEL_NAME">, # This value will look like gpt-35-turbo-0125.ft-0ab3f80e4f2242929258fff45b56a9ce
             "version": "1",
             "source": source
         }
@@ -220,7 +220,7 @@ curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resource
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |


 ### Cross region deployment
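For context on the lines touched above: the `deploy_data` body is sent to the Azure control plane (management.azure.com) to create the deployment for the fine-tuned model. The following is only a minimal sketch of that request using Python's `requests` library, not code from this commit; it assumes you already hold an Azure AD bearer token, that the deployment is created with a PUT against the deployments endpoint, and that the SKU block and all placeholder values are substituted with your own.

```python
import json
import os
import requests

# Placeholders you must supply yourself (illustrative, not from this commit).
token = os.getenv("AZURE_MGMT_TOKEN")            # Azure AD bearer token for management.azure.com
subscription = "<SUBSCRIPTION_ID>"
resource_group = "<RESOURCE_GROUP_NAME>"
resource_name = "<AZURE_OPENAI_RESOURCE_NAME>"
model_deployment_name = "gpt-35-turbo-ft"        # custom deployment name, illustrative

deploy_params = {"api-version": "2024-10-21"}
deploy_headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

deploy_data = {
    "sku": {"name": "standard", "capacity": 1},  # assumed SKU block, adjust as needed
    "properties": {
        "model": {
            "format": "OpenAI",
            # Use the fine_tuned_model value returned by your fine-tuning job,
            # e.g. gpt-35-turbo-0125.ft-<job-specific-id>
            "name": "<FINE_TUNED_MODEL>",
            "version": "1",
        }
    },
}

request_url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}"
    f"/providers/Microsoft.CognitiveServices/accounts/{resource_name}"
    f"/deployments/{model_deployment_name}"
)

# Create (or update) the deployment and print the control-plane response.
r = requests.put(request_url, params=deploy_params, headers=deploy_headers, data=json.dumps(deploy_data))
print(r.status_code, r.json())
```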

articles/ai-services/openai/includes/fine-tune-models.md

Lines changed: 0 additions & 4 deletions
@@ -17,11 +17,7 @@ manager: nitinme

 | Model ID | Fine-tuning regions | Max request (tokens) | Training Data (up to) |
 | --- | --- | :---: | :---: |
-| `gpt-35-turbo` (0613) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 4,096 | Sep 2021 |
 | `gpt-35-turbo` (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | Input: 16,385<br> Output: 4,096 | Sep 2021|
 | `gpt-35-turbo` (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 16,385 | Sep 2021 |
-| `gpt-4` (0613) <sup>**1**</sup> | North Central US <br> Sweden Central | 8192 | Sep 2021 |
 | `gpt-4o-mini` (2024-07-18) | North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
 | `gpt-4o` (2024-08-06) | East US2 <br> North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
-
-**<sup>1</sup>** GPT-4 is currently in public preview.

articles/ai-services/openai/includes/fine-tuning-openai-in-ai-studio.md

Lines changed: 2 additions & 6 deletions
@@ -28,15 +28,11 @@ ms.custom: include, build-2024

 The following models support fine-tuning:

-- `gpt-35-turbo` (0613)
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
-- `gpt-4` (0613)**<sup>*</sup>**
 - `gpt-4o` (2024-08-06)
 - `gpt-4o-mini` (2024-07-18)

-**<sup>*</sup>** Fine-tuning for this model is currently in public preview.
-
 Or you can fine tune a previously fine-tuned model, formatted as base-model.ft-{jobid}.

 Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.
@@ -62,7 +58,7 @@ Take a moment to review the fine-tuning workflow for using Azure AI Foundry:

 Your training data and validation data sets consist of input and output examples for how you would like the model to perform.

-The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `gpt-35-turbo-0613` the fine-tuning dataset must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.
+The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document and must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.

 It's generally recommended to use the instructions and prompts that you found worked best in every training example. This will help you get the best results, especially if you have fewer than a hundred examples.

@@ -117,7 +113,7 @@ To fine-tune an Azure OpenAI model in an existing Azure AI Foundry project, foll

 1. Select a base model to fine-tune. Your choice influences both the performance and the cost of your model. In this example, we are choosing the `gpt-35-turbo` model. Then select **Confirm**.

-1. For `gpt-35-turbo` we have different versions available for fine-tuning, so please choose which version you'd like to fine-tune. We will choose (0301).
+1. For `gpt-35-turbo` we have different versions available for fine-tuning, so please choose which version you'd like to fine-tune. We will choose (0125).

 1. We also recommend including the `suffix` parameter to make it easier to distinguish between different iterations of your fine-tuned model. `suffix` takes a string, and is set to identify the fine-tuned model. With the OpenAI Python API a string of up to 18 characters is supported that will be added to your fine-tuned model name.

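The hunks above reference the conversational (Chat completions) JSONL format that training and validation files must use. As a rough illustration only, the example content below is made up and not taken from this commit: each line of the file is a standalone JSON object with a `messages` array, and the sketch simply writes two such lines to a local file.

```python
import json

# Illustrative examples only; real training data needs many more lines.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "Paris."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is 2 + 2?"},
            {"role": "assistant", "content": "4."},
        ]
    },
]

# Write one JSON object per line (JSONL), as required for fine-tuning uploads.
with open("training_set.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```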

articles/ai-services/openai/includes/fine-tuning-python.md

Lines changed: 8 additions & 12 deletions
@@ -26,15 +26,11 @@ ms.author: mbullwin

 The following models support fine-tuning:

-- `gpt-35-turbo` (0613)
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
-- `gpt-4` (0613)**<sup>*</sup>**
 - `gpt-4o` (2024-08-06)
 - `gpt-4o-mini` (2024-07-18)

-**<sup>*</sup>** Fine-tuning for this model is currently in public preview.
-
 Or you can fine tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.


@@ -57,9 +53,9 @@ Take a moment to review the fine-tuning workflow for using the Python SDK with A

 Your training data and validation data sets consist of input and output examples for how you would like the model to perform.

-The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `gpt-35-turbo-0613` the fine-tuning dataset must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.
+The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document and must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.

-If you would like a step-by-step walk-through of fine-tuning a `gpt-35-turbo-0613` please refer to the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md)
+If you would like a step-by-step walk-through of fine-tuning a `gpt-4o-mini-2024-07-18` please refer to the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md)

 ### Example file format

@@ -196,7 +192,7 @@ In this example we are also passing the seed parameter. The seed controls the re
 response = client.fine_tuning.jobs.create(
     training_file=training_file_id,
     validation_file=validation_file_id,
-    model="gpt-35-turbo-0613", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
+    model="gpt-35-turbo-0125", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
     seed = 105 # seed parameter controls reproducibility of the fine-tuning job. If no seed is specified one will be generated automatically.
 )

@@ -235,7 +231,7 @@ client = AzureOpenAI(

 client.fine_tuning.jobs.create(
     training_file="file-abc123",
-    model="gpt-35-turbo-0613", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
+    model="gpt-35-turbo-0125", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
     hyperparameters={
         "n_epochs":2
     }
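Putting the `jobs.create` snippets above in context, a minimal end-to-end sketch with the openai Python SDK might look like the following. It is not part of this commit; it assumes the standard `AZURE_OPENAI_ENDPOINT` / `AZURE_OPENAI_API_KEY` environment variables are set and that `training_set.jsonl` / `validation_set.jsonl` exist locally, and the file names and api-version are illustrative.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",  # illustrative; use a version your resource supports
)

# Upload the JSONL files first; the returned IDs are passed to the job.
training_file_id = client.files.create(
    file=open("training_set.jsonl", "rb"), purpose="fine-tune"
).id
validation_file_id = client.files.create(
    file=open("validation_set.jsonl", "rb"), purpose="fine-tune"
).id

# Create the fine-tuning job against the 0125 base model; the fixed seed
# makes the run reproducible, mirroring the snippet in the diff above.
response = client.fine_tuning.jobs.create(
    training_file=training_file_id,
    validation_file=validation_file_id,
    model="gpt-35-turbo-0125",
    seed=105,
)
job_id = response.id
print("Created fine-tuning job:", job_id)
```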
@@ -327,7 +323,7 @@ Unlike the previous SDK commands, deployment must be done using the control plan
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |

 ```python
 import json
@@ -348,7 +344,7 @@ deploy_data = {
     "properties": {
         "model": {
             "format": "OpenAI",
-            "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83
+            "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83
             "version": "1"
         }
     }
@@ -374,7 +370,7 @@ Learn more about cross region deployment and use the deployed model [here](../ho

 Once you have created a fine-tuned model you might want to continue to refine the model over time through further fine-tuning. Continuous fine-tuning is the iterative process of selecting an already fine-tuned model as a base model and fine-tuning it further on new sets of training examples.

-To perform fine-tuning on a model that you have previously fine-tuned you would use the same process as described in [create a customized model](#create-a-customized-model) but instead of specifying the name of a generic base model you would specify your already fine-tuned model's ID. The fine-tuned model ID looks like `gpt-35-turbo-0613.ft-5fd1918ee65d4cd38a5dcf6835066ed7`
+To perform fine-tuning on a model that you have previously fine-tuned you would use the same process as described in [create a customized model](#create-a-customized-model) but instead of specifying the name of a generic base model you would specify your already fine-tuned model's ID. The fine-tuned model ID looks like `gpt-35-turbo-0125.ft-5fd1918ee65d4cd38a5dcf6835066ed7`

 ```python
 from openai import AzureOpenAI
@@ -388,7 +384,7 @@ client = AzureOpenAI(
 response = client.fine_tuning.jobs.create(
     training_file=training_file_id,
     validation_file=validation_file_id,
-    model="gpt-35-turbo-0613.ft-5fd1918ee65d4cd38a5dcf6835066ed7" # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
+    model="gpt-35-turbo-0125.ft-5fd1918ee65d4cd38a5dcf6835066ed7" # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
 )

 job_id = response.id
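The deployment table rows changed above say to pull `fine_tuned_model` from the job results before deploying. As a small sketch under stated assumptions (the client setup, polling interval, and terminal-status check below are illustrative, and `job_id` comes from the `jobs.create` call), you could poll the job and read that value like this:

```python
import os
import time

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-10-21",  # illustrative
)

job_id = "<FINE_TUNING_JOB_ID>"  # returned by the jobs.create call shown above

# Poll until the job reaches a terminal state.
job = client.fine_tuning.jobs.retrieve(job_id)
while job.status not in ("succeeded", "failed", "cancelled"):
    time.sleep(60)
    job = client.fine_tuning.jobs.retrieve(job_id)

print("Status:", job.status)
# On success this is the value (e.g. gpt-35-turbo-0125.ft-<id>) that goes
# into deploy_data["properties"]["model"]["name"].
print("Fine-tuned model:", job.fine_tuned_model)
```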

articles/ai-services/openai/includes/fine-tuning-rest.md

Lines changed: 6 additions & 10 deletions
@@ -25,15 +25,11 @@ ms.author: mbullwin

 The following models support fine-tuning:

-- `gpt-35-turbo` (0613)
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
-- `gpt-4` (0613)**<sup>*</sup>**
 - `gpt-4o` (2024-08-06)
 - `gpt-4o-mini` (2024-07-18)

-**<sup>*</sup>** Fine-tuning for this model is currently in public preview.
-
 Or you can fine tune a previously fine-tuned model, formatted as base-model.ft-{jobid}.

 Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.
@@ -57,9 +53,9 @@ Take a moment to review the fine-tuning workflow for using the REST APIS and Pyt

 Your training data and validation data sets consist of input and output examples for how you would like the model to perform.

-The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `gpt-35-turbo-0613` and other related models, the fine-tuning dataset must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.
+The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document and must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.

-If you would like a step-by-step walk-through of fine-tuning a `gpt-35-turbo-0613` please refer to the [Azure OpenAI fine-tuning tutorial.](../tutorials/fine-tune.md)
+If you would like a step-by-step walk-through of fine-tuning a `gpt-4o-mini-2024-07-18` please refer to the [Azure OpenAI fine-tuning tutorial.](../tutorials/fine-tune.md)

 ### Example file format

@@ -141,7 +137,7 @@ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2024-10-
   -H "Content-Type: application/json" \
   -H "api-key: $AZURE_OPENAI_API_KEY" \
   -d '{
-    "model": "gpt-35-turbo-0613",
+    "model": "gpt-35-turbo-0125",
     "training_file": "<TRAINING_FILE_ID>",
     "validation_file": "<VALIDATION_FILE_ID>",
     "seed": 105
@@ -237,7 +233,7 @@ The following example shows how to use the REST API to create a model deployment
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |

 ```bash
 curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>api-version=2024-10-21" \
@@ -262,14 +258,14 @@ Learn more about cross region deployment and use the deployed model [here](../ho

 Once you have created a fine-tuned model, you might want to continue to refine the model over time through further fine-tuning. Continuous fine-tuning is the iterative process of selecting an already fine-tuned model as a base model and fine-tuning it further on new sets of training examples.

-To perform fine-tuning on a model that you have previously fine-tuned, you would use the same process as described in [create a customized model](#create-a-customized-model) but instead of specifying the name of a generic base model you would specify your already fine-tuned model's ID. The fine-tuned model ID looks like `gpt-35-turbo-0613.ft-5fd1918ee65d4cd38a5dcf6835066ed7`
+To perform fine-tuning on a model that you have previously fine-tuned, you would use the same process as described in [create a customized model](#create-a-customized-model) but instead of specifying the name of a generic base model you would specify your already fine-tuned model's ID. The fine-tuned model ID looks like `gpt-35-turbo-0125.ft-5fd1918ee65d4cd38a5dcf6835066ed7`

 ```bash
 curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2023-12-01-preview \
   -H "Content-Type: application/json" \
   -H "api-key: $AZURE_OPENAI_API_KEY" \
   -d '{
-    "model": "gpt-35-turbo-0613.ft-5fd1918ee65d4cd38a5dcf6835066ed7",
+    "model": "gpt-35-turbo-0125.ft-5fd1918ee65d4cd38a5dcf6835066ed7",
     "training_file": "<TRAINING_FILE_ID>",
     "validation_file": "<VALIDATION_FILE_ID>",
     "suffix": "<additional text used to help identify fine-tuned models>"
