articles/ai-services/openai/how-to/fine-tuning-deploy.md (4 additions, 4 deletions)
@@ -56,7 +56,7 @@ deploy_data = {
     "properties": {
         "model": {
             "format": "OpenAI",
-            "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83
+            "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83
             "version": "1"
         }
     }
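For orientation, here is a minimal sketch of the full deployment request this hunk sits inside, assuming the Azure control-plane (management.azure.com) REST API called via the `requests` package; every `<PLACEHOLDER>` value, and the API version, are assumptions rather than values taken from this diff:

```python
# Sketch of the surrounding deployment call (assumptions noted in comments).
import json
import requests

token = "<TEMP_AUTH_TOKEN>"            # an ARM bearer token; see the token sketch further down
subscription = "<SUBSCRIPTION_ID>"
resource_group = "<RESOURCE_GROUP>"
resource_name = "<AZURE_OPENAI_RESOURCE_NAME>"
model_deployment_name = "<CUSTOM_DEPLOYMENT_NAME>"

deploy_params = {"api-version": "2023-05-01"}  # assumption: a control-plane API version
deploy_headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

deploy_data = json.dumps({
    "sku": {"name": "standard", "capacity": 1},  # assumption: standard SKU
    "properties": {
        "model": {
            "format": "OpenAI",
            "name": "<fine_tuned_model>",  # e.g. gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83
            "version": "1",
        }
    },
})

request_url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices"
    f"/accounts/{resource_name}/deployments/{model_deployment_name}"
)

# ARM create-or-update semantics suggest PUT; treat this as a sketch, not the doc's exact command.
r = requests.put(request_url, params=deploy_params, headers=deploy_headers, data=deploy_data)
print(r.json())
```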
@@ -82,7 +82,7 @@ print(r.json())
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`|
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively, you can deploy a checkpoint by passing the checkpoint ID, which appears in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`. |
 
 ### Cross region deployment
@@ -122,7 +122,7 @@ deploy_data = {
     "properties": {
         "model": {
             "format": "OpenAI",
-            "name": <"FINE_TUNED_MODEL_NAME">, # This value will look like gpt-35-turbo-0613.ft-0ab3f80e4f2242929258fff45b56a9ce
+            "name": <"FINE_TUNED_MODEL_NAME">, # This value will look like gpt-35-turbo-0125.ft-0ab3f80e4f2242929258fff45b56a9ce
             "version": "1",
             "source": source
         }
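The `source` variable referenced in this hunk isn't shown in the diff; presumably it is the ARM resource ID of the Azure OpenAI resource where the model was originally fine-tuned. A hedged sketch of how it might be constructed, with all placeholder values hypothetical:

```python
# Sketch: `source` as the ARM resource ID of the resource that owns the
# fine-tuned model. Placeholders are illustrative, not from this diff.
source_subscription = "<SOURCE_SUBSCRIPTION_ID>"
source_resource_group = "<SOURCE_RESOURCE_GROUP>"
source_resource_name = "<SOURCE_AZURE_OPENAI_RESOURCE_NAME>"

source = (
    f"/subscriptions/{source_subscription}"
    f"/resourceGroups/{source_resource_group}"
    f"/providers/Microsoft.CognitiveServices/accounts/{source_resource_name}"
)
```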
@@ -220,7 +220,7 @@ curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resource
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`|
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively, you can deploy a checkpoint by passing the checkpoint ID, which appears in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`. |
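The management calls in this file need an Azure Resource Manager bearer token in the `Authorization` header. One way to obtain one (a sketch, assuming the `azure-identity` package is installed and a signed-in credential such as `az login` is available):

```python
# Sketch: acquire an ARM token with azure-identity.
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token
```

Alternatively, `az account get-access-token` from the Azure CLI returns an equivalent token.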
articles/ai-services/openai/how-to/monitor-openai.md (6 additions, 1 deletion)
@@ -15,7 +15,12 @@ ms.service: azure-ai-openai
 
 ## Dashboards
 
-Azure OpenAI provides out-of-box dashboards for each of your Azure OpenAI resources. To access the monitoring dashboards sign-in to [https://portal.azure.com](https://portal.azure.com) and select the overview pane for one of your Azure OpenAI resources.
+Azure OpenAI provides out-of-box dashboards for each of your Azure OpenAI resources. There are two key dashboards for monitoring your resource:
+
+- The metrics dashboard in the AI Foundry Azure OpenAI resource view
+- The dashboard in the overview pane within the Azure portal
+
+To access the monitoring dashboards, sign in to the [Azure portal](https://portal.azure.com) and then select the overview pane for one of your Azure OpenAI resources. To see the AI Foundry metrics dashboard from the Azure portal, select the overview pane and then **Go to Azure AI Foundry portal**. Under **Tools**, select the metrics dashboard.
 
 :::image type="content" source="../media/monitoring/dashboard.png" alt-text="Screenshot that shows out-of-box dashboards for an Azure OpenAI resource in the Azure portal." lightbox="../media/monitoring/dashboard.png" border="false":::
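These dashboards surface Azure Monitor platform metrics, so the same numbers can be pulled programmatically. A sketch using the `azure-monitor-query` package; the resource ID is a placeholder and the metric name is illustrative (take real names from the portal's metric picker):

```python
# Sketch: query a platform metric for an Azure OpenAI resource.
# Assumes azure-monitor-query and azure-identity are installed.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())
resource_id = (
    "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>"
    "/providers/Microsoft.CognitiveServices/accounts/<AZURE_OPENAI_RESOURCE_NAME>"
)

response = client.query_resource(
    resource_id,
    metric_names=["TotalCalls"],  # illustrative metric name, not from this diff
    timespan=timedelta(days=1),
)
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)
```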
articles/ai-services/openai/includes/fine-tune-models.md (0 additions, 4 deletions)
@@ -17,11 +17,7 @@ manager: nitinme
 
 | Model ID | Fine-tuning regions | Max request (tokens) | Training Data (up to) |
 | --- | --- | :---: | :---: |
-|`gpt-35-turbo` (0613) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 4,096 | Sep 2021 |
 |`gpt-35-turbo` (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | Input: 16,385<br> Output: 4,096 | Sep 2021|
 |`gpt-35-turbo` (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | 16,385 | Sep 2021 |
-|`gpt-4` (0613) <sup>**1**</sup> | North Central US <br> Sweden Central | 8192 | Sep 2021 |
 |`gpt-4o-mini` (2024-07-18) | North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
 |`gpt-4o` (2024-08-06) | East US2 <br> North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
-
-**<sup>1</sup>** GPT-4 is currently in public preview.

@@ -39,4 +36,3 @@
-**<sup>*</sup>** Fine-tuning for this model is currently in public preview.
 Or you can fine tune a previously fine-tuned model, formatted as base-model.ft-{jobid}.
 
 Consult the [models page](../concepts/models.md#fine-tuning-models) to check which regions currently support fine-tuning.
@@ -62,7 +58,7 @@ Take a moment to review the fine-tuning workflow for using Azure AI Foundry:
 
 Your training data and validation data sets consist of input and output examples for how you would like the model to perform.
 
-The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `gpt-35-turbo-0613` the fine-tuning dataset must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.
+The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document, in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.
 
 It's generally recommended to use the instructions and prompts that you found worked best in every training example. This will help you get the best results, especially if you have fewer than a hundred examples.
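For reference, a single training example in that conversational JSONL format looks roughly like the following, one JSON object per line; the content itself is illustrative:

```json
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "The capital of France is Paris."}]}
```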
@@ -117,7 +113,7 @@ To fine-tune an Azure OpenAI model in an existing Azure AI Foundry project, foll
 
 1. Select a base model to fine-tune. Your choice influences both the performance and the cost of your model. In this example, we are choosing the `gpt-35-turbo` model. Then select **Confirm**.
 
-1. For `gpt-35-turbo` we have different versions available for fine-tuning, so please choose which version you'd like to fine-tune. We will choose (0301).
+1. For `gpt-35-turbo`, different versions are available for fine-tuning, so choose which version you'd like to fine-tune. We will choose (0125).
 
 1. We also recommend including the `suffix` parameter to make it easier to distinguish between different iterations of your fine-tuned model. `suffix` takes a string, and is set to identify the fine-tuned model. With the OpenAI Python API a string of up to 18 characters is supported that will be added to your fine-tuned model name.
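A sketch of what the `suffix` parameter from the last step looks like when set through the OpenAI Python SDK; the variable names and suffix value are illustrative, not part of this change:

```python
# Sketch: passing `suffix` so the resulting model name is easy to recognize.
response = client.fine_tuning.jobs.create(
    training_file=training_file_id,
    validation_file=validation_file_id,
    model="gpt-35-turbo-0125",
    suffix="support-bot",  # up to 18 characters with the OpenAI Python API
)
```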
articles/ai-services/openai/includes/fine-tuning-python.md (8 additions, 12 deletions)
@@ -26,15 +26,11 @@ ms.author: mbullwin
 
 The following models support fine-tuning:
 
-- `gpt-35-turbo` (0613)
 - `gpt-35-turbo` (1106)
 - `gpt-35-turbo` (0125)
-- `gpt-4` (0613)**<sup>*</sup>**
 - `gpt-4o` (2024-08-06)
 - `gpt-4o-mini` (2024-07-18)
 
-**<sup>*</sup>** Fine-tuning for this model is currently in public preview.
-
 Or you can fine tune a previously fine-tuned model, formatted as `base-model.ft-{jobid}`.
 
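If you want to check what your particular resource exposes, one hedged option is to list models and inspect the Azure-specific `capabilities` field; that field name is an assumption based on the Azure OpenAI models API and may differ by API version:

```python
# Sketch: list models on the resource and keep those flagged as fine-tunable.
# Assumes `client` is an AzureOpenAI instance; `capabilities.fine_tune` is an
# assumption here, not something shown in this diff.
for m in client.models.list():
    caps = m.model_dump().get("capabilities") or {}
    if caps.get("fine_tune"):
        print(m.id)
```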
@@ -57,9 +53,9 @@ Take a moment to review the fine-tuning workflow for using the Python SDK with A
 
 Your training data and validation data sets consist of input and output examples for how you would like the model to perform.
 
-The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `gpt-35-turbo-0613` the fine-tuning dataset must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.
+The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document, in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.
 
-If you would like a step-by-step walk-through of fine-tuning a `gpt-35-turbo-0613` please refer to the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md)
+If you would like a step-by-step walk-through of fine-tuning a `gpt-4o-mini-2024-07-18` model, refer to the [Azure OpenAI fine-tuning tutorial](../tutorials/fine-tune.md).
 
 ### Example file format
@@ -196,7 +192,7 @@ In this example we are also passing the seed parameter. The seed controls the re
 response = client.fine_tuning.jobs.create(
     training_file=training_file_id,
     validation_file=validation_file_id,
-    model="gpt-35-turbo-0613", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
+    model="gpt-35-turbo-0125", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
     seed=105 # seed parameter controls reproducibility of the fine-tuning job. If no seed is specified one will be generated automatically.
 )
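Once the job above has been created, it can be polled by ID. A short sketch, assuming `client` and `response` from the snippet:

```python
# Sketch: check on the fine-tuning job created above.
job = client.fine_tuning.jobs.retrieve(response.id)
print(job.status)            # e.g. "running", then "succeeded"
print(job.fine_tuned_model)  # set once the job succeeds, e.g. gpt-35-turbo-0125.ft-...
```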
@@ -235,7 +231,7 @@ client = AzureOpenAI(
 
 client.fine_tuning.jobs.create(
     training_file="file-abc123",
-    model="gpt-35-turbo-0613", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
+    model="gpt-35-turbo-0125", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
     hyperparameters={
         "n_epochs":2
     }
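Beyond `n_epochs`, the OpenAI fine-tuning API's `hyperparameters` dictionary also accepts `batch_size` and `learning_rate_multiplier`; whether a given value is honored can depend on the model, so treat the following as a sketch with illustrative values:

```python
# Sketch: additional tunable hyperparameters (values are illustrative).
client.fine_tuning.jobs.create(
    training_file="file-abc123",
    model="gpt-35-turbo-0125",
    hyperparameters={
        "n_epochs": 2,
        "batch_size": 1,
        "learning_rate_multiplier": 1.0,
    },
)
```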
@@ -327,7 +323,7 @@ Unlike the previous SDK commands, deployment must be done using the control plan
 | resource_group | The resource group name for your Azure OpenAI resource. |
 | resource_name | The Azure OpenAI resource name. |
 | model_deployment_name | The custom name for your new fine-tuned model deployment. This is the name that will be referenced in your code when making chat completion calls. |
-| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`|
+| fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83`. You will need to add that value to the deploy_data json. Alternatively, you can deploy a checkpoint by passing the checkpoint ID, which appears in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d`. |
 
 ```python
 import json
@@ -348,7 +344,7 @@ deploy_data = {
     "properties": {
         "model": {
             "format": "OpenAI",
-            "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83
+            "name": <"fine_tuned_model">, #retrieve this value from the previous call, it will look like gpt-35-turbo-0125.ft-b044a9d3cf9c4228b5d393567f693b83
             "version": "1"
         }
     }
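After the deployment succeeds, inference calls go to the data plane using the deployment name, not the fine-tuned model ID. A sketch of such a call; the endpoint, key, and API version are placeholders rather than values from this diff:

```python
# Sketch: call the deployed fine-tuned model. The model argument is the
# `model_deployment_name` chosen above; other values are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<AZURE_OPENAI_RESOURCE_NAME>.openai.azure.com/",
    api_key="<API_KEY>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<model_deployment_name>",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```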
@@ -374,7 +370,7 @@ Learn more about cross region deployment and use the deployed model [here](../ho
 
 Once you have created a fine-tuned model you might want to continue to refine the model over time through further fine-tuning. Continuous fine-tuning is the iterative process of selecting an already fine-tuned model as a base model and fine-tuning it further on new sets of training examples.
 
-To perform fine-tuning on a model that you have previously fine-tuned you would use the same process as described in [create a customized model](#create-a-customized-model) but instead of specifying the name of a generic base model you would specify your already fine-tuned model's ID. The fine-tuned model ID looks like `gpt-35-turbo-0613.ft-5fd1918ee65d4cd38a5dcf6835066ed7`
+To perform fine-tuning on a model that you have previously fine-tuned, use the same process as described in [create a customized model](#create-a-customized-model), but instead of specifying the name of a generic base model, specify your already fine-tuned model's ID. The fine-tuned model ID looks like `gpt-35-turbo-0125.ft-5fd1918ee65d4cd38a5dcf6835066ed7`.
 
 ```python
 from openai import AzureOpenAI
@@ -388,7 +384,7 @@ client = AzureOpenAI(
 response = client.fine_tuning.jobs.create(
     training_file=training_file_id,
     validation_file=validation_file_id,
-    model="gpt-35-turbo-0613.ft-5fd1918ee65d4cd38a5dcf6835066ed7" # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
+    model="gpt-35-turbo-0125.ft-5fd1918ee65d4cd38a5dcf6835066ed7" # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.