
Commit 0d89fdb

Merge pull request #7190 from MicrosoftDocs/main
Auto Publish – main to live - 2025-09-22 17:08 UTC
2 parents 53c1b8c + 3372d94 commit 0d89fdb

File tree

8 files changed: +65 −22 lines changed


articles/ai-foundry/concepts/resource-types.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ titleSuffix: Azure AI Foundry
 description: Learn about the supported Azure resource types in Azure AI Foundry portal.
 reviewer: deeikele
 ms.reviewer: deeikele
-author: sgilley
+author: sdgilley
 ms.author: sgilley
 ms.date: 07/22/2025
 ms.service: azure-ai-foundry

articles/ai-foundry/how-to/develop/sdk-overview.md

Lines changed: 2 additions & 2 deletions
@@ -10,8 +10,8 @@ ai-usage: ai-assisted
 ms.topic: how-to
 ms.date: 09/15/2025
 ms.reviewer: dantaylo
-ms.author: sgilley
-author: sdgilley
+ms.author: johalexander
+author: ms-johnalex
 zone_pivot_groups: foundry-sdk-overview-languages
 # customer intent: I want to learn how to use the Azure AI Foundry SDK to build AI applications on Azure.
 ---

articles/ai-foundry/openai/includes/fine-tune-models.md

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@ ms.custom:
 > The supported regions for fine-tuning might vary if you use Azure OpenAI models in an Azure AI Foundry project versus outside a project.
 >

-| Model ID | Standard training regions | Global training (preview) | Max request (tokens) | Training data (up to) | Modality |
+| Model ID | Standard training regions | Global training | Max request (tokens) | Training data (up to) | Modality |
 | --- | --- | :---: | :---: | :---: | --- |
 | `gpt-35-turbo` <br> (1106) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | - | Input: 16,385<br> Output: 4,096 | Sep 2021 | Text to text |
 | `gpt-35-turbo` <br> (0125) | East US2 <br> North Central US <br> Sweden Central <br> Switzerland West | - | 16,385 | Sep 2021 | Text to text |
@@ -30,7 +30,7 @@ ms.custom:
 | `o4-mini` <br> (2025-04-16) | East US2 <br> Sweden Central | - | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 65,536 | May 2024 | Text to text |

 > [!NOTE]
-> Global training (in preview) provides [more affordable](https://aka.ms/aoai-pricing) training per token, but doesn't offer [data residency](https://aka.ms/data-residency). It's currently available to Azure OpenAI resources in the following regions, with more regions coming soon:
+> Global training provides [more affordable](https://aka.ms/aoai-pricing) training per token, but doesn't offer [data residency](https://aka.ms/data-residency). It's currently available to Azure OpenAI resources in the following regions:
 >
 >- Australia East
 >- Brazil South

articles/ai-foundry/openai/includes/fine-tuning-python.md

Lines changed: 34 additions & 9 deletions
@@ -139,14 +139,13 @@ After you upload your training and validation files, you're ready to start the f

 The following Python code shows an example of how to create a new fine-tune job with the Python SDK:

-In this example we are also passing the seed parameter. The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you.
-
 ```python
 response = client.fine_tuning.jobs.create(
     training_file=training_file_id,
     validation_file=validation_file_id,
-    model="gpt-4.1-2025-04-14", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
-    seed = 105 # seed parameter controls reproducibility of the fine-tuning job. If no seed is specified one will be generated automatically.
+    model="gpt-4.1-2025-04-14", # Enter base model name.
+    suffix="my-model", # Custom suffix for naming the resulting model. Note that in Azure OpenAI the model cannot contain dot/period characters.
+    seed=105, # seed parameter controls reproducibility of the fine-tuning job. If no seed is specified one will be generated automatically.
 )

 job_id = response.id
@@ -159,9 +158,24 @@ print("Status:", response.id)
 print(response.model_dump_json(indent=2))
 ```

+If you are fine tuning a model that supports [Global Training](../concepts/models.md#fine-tuning-models), you can specify the training type by using the `extra_body` named argument:
+
+```python
+response = client.fine_tuning.jobs.create(
+    training_file=training_file_id,
+    validation_file=validation_file_id,
+    model="gpt-4.1-2025-04-14",
+    suffix="my-model",
+    seed=105,
+    extra_body={ "trainingType": "globalstandard" }
+)
+
+job_id = response.id
+```
+
 You can also pass additional optional parameters like hyperparameters to take greater control of the fine-tuning process. For initial training we recommend using the automatic defaults that are present without specifying these parameters.

-The current supported hyperparameters for fine-tuning are:
+The current supported hyperparameters for Supervised Fine-Tuning are:

 |**Name**| **Type**| **Description**|
 |---|---|---|
@@ -170,7 +184,7 @@ The current supported hyperparameters for fine-tuning are:
 |`n_epochs` | integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
 |`seed` | integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |

-To set custom hyperparameters with the 1.x version of the OpenAI Python API:
+To set custom hyperparameters with the 1.x version of the OpenAI Python API, provide them as part of the `method`:

 ```python
 from openai import OpenAI
@@ -182,13 +196,24 @@ client = OpenAI(

 client.fine_tuning.jobs.create(
     training_file="file-abc123",
-    model="gpt-4.1-2025-04-14", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
-    hyperparameters={
-        "n_epochs":2
+    model="gpt-4.1-2025-04-14",
+    suffix="my-model",
+    seed=105,
+    method={
+        "type": "supervised", # In this case, the job will be using Supervised Fine Tuning.
+        "supervised": {
+            "hyperparameters": {
+                "n_epochs": 2
+            }
+        }
     }
 )
 ```

+> [!NOTE]
+> See the guides for [Direct Preference Optimization](../how-to/fine-tuning-direct-preference-optimization.md) and [Reinforcement Fine-Tuning](../how-to/reinforcement-fine-tuning.md) to learn more about their supported hyperparameters.
+
+
 ## Check fine-tuning job status

 ```python
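The status-check snippet above is cut off at the hunk boundary. As a rough sketch (not part of this commit), polling until completion with the same `client` and `job_id` from the earlier examples might look like the following; the terminal state names `succeeded`, `failed`, and `cancelled` are the ones the fine-tuning API reports, and `wait_for_job` is a hypothetical helper name:

```python
import time

# States in which an Azure OpenAI fine-tuning job has finished, in any outcome.
TERMINAL_STATES = {"succeeded", "failed", "cancelled"}

def is_terminal(status: str) -> bool:
    """Return True once a fine-tuning job has reached a final state."""
    return status in TERMINAL_STATES

def wait_for_job(client, job_id: str, interval_s: int = 60):
    """Poll the fine-tuning job until it finishes, then return the final job object."""
    while True:
        job = client.fine_tuning.jobs.retrieve(job_id)
        print("Status:", job.status)
        if is_terminal(job.status):
            return job
        time.sleep(interval_s)  # Jobs can queue behind others; avoid hammering the API.
```

A call like `wait_for_job(client, job_id)` would then block until the job succeeds, fails, or is cancelled.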

articles/ai-foundry/openai/includes/fine-tuning-rest.md

Lines changed: 22 additions & 4 deletions
@@ -119,7 +119,7 @@ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/v1/files \

 ## Create a customized model

-After you uploaded your training and validation files, you're ready to start the fine-tuning job. The following code shows an example of how to [create a new fine-tuning job](/rest/api/azureopenai/fine-tuning/create?view=rest-azureopenai-2023-12-01-preview&tabs=HTTP&preserve-view=true) with the REST API.
+After you uploaded your training and validation files, you're ready to start the fine-tuning job. The following code shows an example of how to [create a new fine-tuning job](/rest/api/azureopenai/fine-tuning/create?view=rest-azureopenai-2024-10-21&tabs=HTTP&preserve-view=true) with the REST API.

 In this example we are also passing the seed parameter. The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but can differ in rare cases. If a seed is not specified, one will be generated for you.

@@ -129,15 +129,30 @@ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/v1/fine_tuning/jobs \
   -H "api-key: $AZURE_OPENAI_API_KEY" \
   -d '{
     "model": "gpt-4.1-2025-04-14",
-    "training_file": "<TRAINING_FILE_ID>",
+    "training_file": "<TRAINING_FILE_ID>",
     "validation_file": "<VALIDATION_FILE_ID>",
     "seed": 105
 }'
 ```

-You can also pass additional optional parameters like [hyperparameters](/rest/api/azureopenai/fine-tuning/create?view=rest-azureopenai-2023-12-01-preview&tabs=HTTP#finetuninghyperparameters&preserve-view=true) to take greater control of the fine-tuning process. For initial training we recommend using the automatic defaults that are present without specifying these parameters.
+If you are fine tuning a model that supports [Global Training](../concepts/models.md#fine-tuning-models), you can specify the training type by using the `extra_body` named argument and using api-version `2025-04-01-preview`:
+
+```bash
+curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2025-04-01-preview \
+  -H "Content-Type: application/json" \
+  -H "api-key: $AZURE_OPENAI_API_KEY" \
+  -d '{
+    "model": "gpt-4.1-2025-04-14",
+    "training_file": "<TRAINING_FILE_ID>",
+    "validation_file": "<VALIDATION_FILE_ID>",
+    "seed": 105,
+    "trainingType": "globalstandard"
+}'
+```
+
+You can also pass additional optional parameters like [hyperparameters](/rest/api/azureopenai/fine-tuning/create?view=rest-azureopenai-2024-10-21&tabs=HTTP#finetuninghyperparameters&preserve-view=true) to take greater control of the fine-tuning process. For initial training we recommend using the automatic defaults that are present without specifying these parameters.

-The current supported hyperparameters for fine-tuning are:
+The current supported hyperparameters for Supervised Fine-Tuning are:

 |**Name**| **Type**| **Description**|
 |---|---|---|
@@ -146,6 +161,9 @@ The current supported hyperparameters for fine-tuning are:
 |`n_epochs` | integer | The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset. |
 |`seed` | integer | The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases. If a seed isn't specified, one will be generated for you. |

+> [!NOTE]
+> See the guides for [Direct Preference Optimization](../how-to/fine-tuning-direct-preference-optimization.md) and [Reinforcement Fine-Tuning](../how-to/reinforcement-fine-tuning.md) to learn more about their supported hyperparameters.
+
 ## Check the status of your customized model

 After you start a fine-tune job, it can take some time to complete. Your job might be queued behind other jobs in the system. Training your model can take minutes or hours depending on the model and dataset size. The following example uses the REST API to check the status of your fine-tuning job. The example retrieves information about your job by using the job ID returned from the previous example:
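The status-check request itself is cut off at the hunk boundary. As a hedged sketch (not part of this commit), retrieving the job over the same v1 surface used in the create call could look like the following; `ftjob-abc123` is a hypothetical job ID standing in for the one returned by the create call:

```shell
# Hypothetical job ID for illustration; substitute the ID returned when you created the job.
JOB_ID="ftjob-abc123"

# The retrieve URL appends the job ID to the same v1 endpoint used to create the job.
STATUS_URL="$AZURE_OPENAI_ENDPOINT/openai/v1/fine_tuning/jobs/$JOB_ID"

# Only issue the request when the endpoint and key are actually configured.
if [ -n "$AZURE_OPENAI_ENDPOINT" ] && [ -n "$AZURE_OPENAI_API_KEY" ]; then
  curl -X GET "$STATUS_URL" -H "api-key: $AZURE_OPENAI_API_KEY"
fi
```

The response includes a `status` field you can poll until the job reaches a final state.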

articles/ai-foundry/openai/includes/models-azure-direct-openai.md

Lines changed: 2 additions & 2 deletions
@@ -353,7 +353,7 @@ Azure OpenAI provides customers with choices on the hosting structure that fits

 All deployments can perform the exact same inference operations, but the billing, scale, and performance are substantially different. To learn more about Azure OpenAI deployment types, see our [Deployment types guide](../how-to/deployment-types.md).

-# [Global Standard](#tab/global-standard)
+# [Global Standard](#tab/global-standard-aoai)


 ### Global Standard model availability
@@ -363,7 +363,7 @@ All deployments can perform the exact same inference operations, but the billing
 > [!NOTE]
 > `o3-deep-research` is currently only available with Azure AI Foundry Agent Service. To learn more, see the [Deep Research tool guidance](/azure/ai-foundry/agents/how-to/tools/deep-research).

-# [Global Provisioned managed](#tab/global-ptum)
+# [Global Provisioned managed](#tab/global-ptum-aoai)

 ### Global Provisioned managed model availability

articles/machine-learning/v1/how-to-save-write-experiment-files.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ titleSuffix: Azure Machine Learning
 description: Learn where to save your input and output files to prevent storage limitation errors and experiment latency.
 services: machine-learning
 author: rastala
-ms.author: roastala
+ms.author: sooryar
 manager: danielsc
 ms.service: azure-machine-learning
 ms.subservice: core

articles/machine-learning/v1/how-to-use-private-python-packages.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ titleSuffix: Azure Machine Learning
 description: Learn how to securely work with private Python packages from your Azure Machine Learning environments.
 services: machine-learning
 author: rastala
-ms.author: roastala
+ms.author: jturuk
 ms.reviewer: laobri
 ms.service: azure-machine-learning
 ms.subservice: core
