
Commit 51874f6

committed: modify according to feedback
1 parent f031016 · commit 51874f6

File tree

6 files changed: +25 −27 lines

articles/ai-services/openai/how-to/fine-tuning-deploy.md

Lines changed: 10 additions & 10 deletions
@@ -32,7 +32,7 @@ The **Deploy model** dialog box opens. In the dialog box, enter your **Deploymen
  You can monitor the progress of your deployment on the **Deployments** pane in Azure AI Foundry portal.

- The UI does not support corss region deployment, while Python SDK or REST supports.
+ The UI doesn't support cross-region deployment, but the Python SDK and REST API do.

  ## [Python](#tab/python)

@@ -42,13 +42,13 @@ import json
  import os
  import requests

- token= os.getenv("<TOKEN>")
+ token = os.getenv("<TOKEN>")
  subscription = "<YOUR_SUBSCRIPTION_ID>"
  resource_group = "<YOUR_RESOURCE_GROUP_NAME>"
  resource_name = "<YOUR_AZURE_OPENAI_RESOURCE_NAME>"
- model_deployment_name ="gpt-35-turbo-ft" # custom deployment name that you will use to reference the model when making inference calls.
+ model_deployment_name = "gpt-35-turbo-ft" # custom deployment name that you will use to reference the model when making inference calls.

- deploy_params = {'api-version': "2023-05-01"}
+ deploy_params = {'api-version': "2024-10-21"}
  deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}

  deploy_data = {
@@ -110,9 +110,9 @@ source_resource = "<SOURCE_RESOURCE>"
  source = f'/subscriptions/{source_subscription}/resourceGroups/{source_resource_group}/providers/Microsoft.CognitiveServices/accounts/{source_resource}'

- model_deployment_name ="gpt-35-turbo-ft" # custom deployment name that you will use to reference the model when making inference calls.
+ model_deployment_name = "gpt-35-turbo-ft" # custom deployment name that you will use to reference the model when making inference calls.

- deploy_params = {'api-version': "2023-05-01"}
+ deploy_params = {'api-version': "2024-10-21"}
  deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}

@@ -198,7 +198,7 @@ The following example shows how to use the REST API to create a model deployment
  ```bash
- curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>api-version=2023-05-01" \
+ curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>?api-version=2024-10-21" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
@@ -232,7 +232,7 @@ The only limitations are that the new region must also support fine-tuning and w
  Below is an example of deploying a model that was fine-tuned in one subscription/region to another.

  ```bash
- curl -X PUT "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>api-version=2023-05-01" \
+ curl -X PUT "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>?api-version=2024-10-21" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{
@@ -339,7 +339,7 @@ print(response.choices[0].message.content)
  ## [REST](#tab/rest)

  ```bash
- curl $AZURE_OPENAI_ENDPOINT/openai/deployments/<deployment_name>/chat/completions?api-version=2023-05-15 \
+ curl $AZURE_OPENAI_ENDPOINT/openai/deployments/<deployment_name>/chat/completions?api-version=2024-10-21 \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"messages":[{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},{"role": "user", "content": "Do other Azure AI services support this too?"}]}'
@@ -457,7 +457,7 @@ To delete a deployment, use the [Deployments - Delete REST API](/rest/api/aiserv
  Below is the REST API example to delete a deployment:

  ```bash
- curl -X DELETE "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>api-version=2024-10-01" \
+ curl -X DELETE "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>?api-version=2024-10-21" \
  -H "Authorization: Bearer <TOKEN>"
  ```
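All of the management-plane deployment calls in this file share one URL shape, and the query string needs a `?` before `api-version`. A minimal sketch of how that URL is assembled (the helper name is made up for illustration; it is not part of the documented samples):

```python
# Illustrative helper: builds the Azure management-plane URL used by the
# deployment curl examples above.
def build_deployment_url(subscription, resource_group, resource_name,
                         deployment_name, api_version="2024-10-21"):
    # The "?" before api-version is required; without it the query string
    # is glued onto the deployment name and the request fails.
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.CognitiveServices"
        f"/accounts/{resource_name}"
        f"/deployments/{deployment_name}"
        f"?api-version={api_version}"
    )

print(build_deployment_url("<SUB>", "<RG>", "<RES>", "gpt-35-turbo-ft"))
```

The same URL works for the POST/PUT (create), and DELETE examples; only the HTTP verb changes.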

articles/ai-services/openai/how-to/fine-tuning-troubleshoot.md

Lines changed: 0 additions & 2 deletions
@@ -64,7 +64,6 @@ If you set the detail parameter for an image to low, the image is resized
  ```json
  {
-
  "type": "image_url",

  "image_url": {
@@ -74,7 +73,6 @@ If you set the detail parameter for an image to low, the image is resized
  "detail": "low"

  }
-
  }
  ```

articles/ai-services/openai/includes/fine-tuning-openai-in-ai-studio.md

Lines changed: 2 additions & 2 deletions
@@ -64,7 +64,7 @@ Your training data and validation data sets consist of input and output examples
  The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `gpt-35-turbo-0613` the fine-tuning dataset must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.

- It's generally recommened to use the instructions and prompts that you found worked best in every training example. This will help you get the best results, especially if you have fewer than a hundred examples.
+ It's generally recommended to use the instructions and prompts that you found worked best in every training example. This will help you get the best results, especially if you have fewer than a hundred examples.

  ### Example file format
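For reference, a single training line in that conversational format looks like the following (a minimal sketch; the message contents are made up):

```python
import json

# One training example in the chat-completions format: a "messages"
# array with system/user/assistant turns, serialized onto a single line.
example = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the capital of France?"},
        {"role": "assistant", "content": "Paris."},
    ]
}

jsonl_line = json.dumps(example)  # one line per example in the .jsonl file
print(jsonl_line)
```

A complete training file is simply many such lines, one JSON object per line with no surrounding array.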

@@ -184,7 +184,7 @@ Your job might be queued behind other jobs on the system. Training your model ca
  ## Checkpoints

- When each training epoch completes a checkpoint is generated. A checkpoint is a fully functional version of a model which can both be deployed and used as the target model for subsequent fine-tuning jobs. Checkpoints can be particularly useful, as they can provide a snapshot of your model prior to overfitting having occurred. When a fine-tuning job completes you will have the three most recent versions of the model available to deploy.
+ When each training epoch completes, a checkpoint is generated. A checkpoint is a fully functional version of a model that can both be deployed and used as the target model for subsequent fine-tuning jobs. Checkpoints can be particularly useful, as they may provide snapshots of your model prior to overfitting. When a fine-tuning job completes, you will have the three most recent versions of the model available to deploy.

  :::image type="content" source="../media/fine-tuning/checkpoints.png" alt-text="Screenshot of checkpoints UI." lightbox="../media/fine-tuning/checkpoints.png":::

articles/ai-services/openai/includes/fine-tuning-python.md

Lines changed: 3 additions & 3 deletions
@@ -125,7 +125,7 @@ from openai import AzureOpenAI
  client = AzureOpenAI(
  azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
  api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2024-05-01-preview" # This API version or later is required to access seed/events/checkpoint capabilities
+ api_version="2024-10-21" # This API version or later is required to access seed/events/checkpoint capabilities
  )

  training_file_name = 'training_set.jsonl'
@@ -264,7 +264,7 @@ print(response.model_dump_json(indent=2))
  ## Checkpoints

- When each training epoch completes a checkpoint is generated. A checkpoint is a fully functional version of a model which can both be deployed and used as the target model for subsequent fine-tuning jobs. Checkpoints can be particularly useful, as they can provide a snapshot of your model prior to overfitting having occurred. When a fine-tuning job completes you will have the three most recent versions of the model available to deploy. The final epoch will be represented by your fine-tuned model, the previous two epochs will be available as checkpoints.
+ When each training epoch completes, a checkpoint is generated. A checkpoint is a fully functional version of a model that can both be deployed and used as the target model for subsequent fine-tuning jobs. Checkpoints can be particularly useful, as they may provide snapshots of your model prior to overfitting. When a fine-tuning job completes, you will have the three most recent versions of the model available to deploy. The final epoch will be represented by your fine-tuned model; the previous two epochs will be available as checkpoints.

  You can run the list checkpoints command to retrieve the list of checkpoints associated with an individual fine-tuning job. You might need to upgrade your OpenAI client library to the latest version with `pip install openai --upgrade` to run this command.
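The "three most recent versions" behavior can be sketched client-side. The records below are hypothetical, mirroring only the `id` and `created_at` fields of checkpoint objects; they are not real API output:

```python
def newest_checkpoints(checkpoints, keep=3):
    """Return the `keep` most recent checkpoints, newest first.

    `checkpoints` is a list of dicts with a `created_at` Unix timestamp,
    loosely mirroring the shape of fine-tuning checkpoint objects.
    """
    return sorted(checkpoints, key=lambda c: c["created_at"], reverse=True)[:keep]

# Hypothetical records: one checkpoint per completed epoch.
epochs = [
    {"id": "ftchkpt-epoch1", "created_at": 100},
    {"id": "ftchkpt-epoch2", "created_at": 200},
    {"id": "ftchkpt-epoch3", "created_at": 300},
    {"id": "ftchkpt-epoch4", "created_at": 400},
]
print([c["id"] for c in newest_checkpoints(epochs)])
# ['ftchkpt-epoch4', 'ftchkpt-epoch3', 'ftchkpt-epoch2']
```

Here the newest entry corresponds to the fine-tuned model itself and the two before it to the deployable checkpoints.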

@@ -340,7 +340,7 @@ resource_group = "<YOUR_RESOURCE_GROUP_NAME>"
  resource_name = "<YOUR_AZURE_OPENAI_RESOURCE_NAME>"
  model_deployment_name ="gpt-35-turbo-ft" # custom deployment name that you will use to reference the model when making inference calls.

- deploy_params = {'api-version': "2023-05-01"}
+ deploy_params = {'api-version': "2024-10-21"}
  deploy_headers = {'Authorization': 'Bearer {}'.format(token), 'Content-Type': 'application/json'}

  deploy_data = {

articles/ai-services/openai/includes/fine-tuning-rest.md

Lines changed: 6 additions & 6 deletions
@@ -137,7 +137,7 @@ After you uploaded your training and validation files, you're ready to start the
  In this example we are also passing the seed parameter. The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but can differ in rare cases. If a seed is not specified, one will be generated for you.

  ```bash
- curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2024-05-01-preview \
+ curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs?api-version=2024-10-21 \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
@@ -164,7 +164,7 @@ The current supported hyperparameters for fine-tuning are:
  After you start a fine-tune job, it can take some time to complete. Your job might be queued behind other jobs in the system. Training your model can take minutes or hours depending on the model and dataset size. The following example uses the REST API to check the status of your fine-tuning job. The example retrieves information about your job by using the job ID returned from the previous example:

  ```bash
- curl -X GET $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/<YOUR-JOB-ID>?api-version=2024-05-01-preview \
+ curl -X GET $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/<YOUR-JOB-ID>?api-version=2024-10-21 \
  -H "api-key: $AZURE_OPENAI_API_KEY"
  ```
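Clients typically poll this status endpoint in a loop until the job reaches a terminal state. A small sketch of that check, assuming `succeeded`, `failed`, and `cancelled` are the terminal status strings (hypothetical job dicts, not real API responses):

```python
# Assumed terminal statuses for a fine-tuning job.
TERMINAL_STATES = {"succeeded", "failed", "cancelled"}

def is_done(job):
    """Return True when a fine-tuning job dict has reached a terminal state."""
    return job.get("status") in TERMINAL_STATES

print(is_done({"id": "ftjob-abc", "status": "running"}))    # False
print(is_done({"id": "ftjob-abc", "status": "succeeded"}))  # True
```

In a real loop you would re-issue the GET request above, parse the JSON response, and sleep between checks.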

@@ -173,19 +173,19 @@ curl -X GET $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/<YOUR-JOB-ID>?api-ver
  To examine the individual fine-tuning events that were generated during training:

  ```bash
- curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/{fine_tuning_job_id}/events?api-version=2024-05-01-preview \
+ curl -X GET $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/{fine_tuning_job_id}/events?api-version=2024-10-21 \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY"
  ```

  ## Checkpoints

- When each training epoch completes a checkpoint is generated. A checkpoint is a fully functional version of a model which can both be deployed and used as the target model for subsequent fine-tuning jobs. Checkpoints can be particularly useful, as they can provide a snapshot of your model prior to overfitting having occurred. When a fine-tuning job completes you will have the three most recent versions of the model available to deploy. The final epoch will be represented by your fine-tuned model, the previous two epochs will be available as checkpoints.
+ When each training epoch completes, a checkpoint is generated. A checkpoint is a fully functional version of a model that can both be deployed and used as the target model for subsequent fine-tuning jobs. Checkpoints can be particularly useful, as they may provide snapshots of your model prior to overfitting. When a fine-tuning job completes, you will have the three most recent versions of the model available to deploy. The final epoch will be represented by your fine-tuned model; the previous two epochs will be available as checkpoints.

  You can run the list checkpoints command to retrieve the list of checkpoints associated with an individual fine-tuning job:

  ```bash
- curl -X POST $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints?api-version=2024-05-01-preview \
+ curl -X GET $AZURE_OPENAI_ENDPOINT/openai/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints?api-version=2024-10-21 \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY"
  ```
@@ -240,7 +240,7 @@ The following example shows how to use the REST API to create a model deployment
  | fine_tuned_model | Retrieve this value from your fine-tuning job results in the previous step. It will look like `gpt-35-turbo-0613.ft-b044a9d3cf9c4228b5d393567f693b83`. You'll need to add that value to the deploy_data json. Alternatively you can also deploy a checkpoint, by passing the checkpoint ID which will appear in the format `ftchkpt-e559c011ecc04fc68eaa339d8227d02d` |

  ```bash
- curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>api-version=2023-05-01" \
+ curl -X POST "https://management.azure.com/subscriptions/<SUBSCRIPTION>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.CognitiveServices/accounts/<RESOURCE_NAME>/deployments/<MODEL_DEPLOYMENT_NAME>?api-version=2024-10-21" \
  -H "Authorization: Bearer <TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{

articles/ai-services/openai/includes/fine-tuning-studio.md

Lines changed: 4 additions & 4 deletions
@@ -59,7 +59,7 @@ Your training data and validation data sets consist of input and output examples
  The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `gpt-35-turbo` (all versions), `gpt-4`, `gpt-4o`, and `gpt-4o-mini`, the fine-tuning dataset must be formatted in the conversational format that is used by the [Chat completions](../how-to/chatgpt.md) API.

- It's generally recommened to use the instructions and prompts that you found worked best in every training example. This will help you get the best results, especially if you have fewer than a hundred examples.
+ It's generally recommended to use the instructions and prompts that you found worked best in every training example. This will help you get the best results, especially if you have fewer than a hundred examples.

  ### Example file format
@@ -97,7 +97,7 @@ In addition to the JSONL format, training and validation data files must be enco
  ### Datasets size consideration

- The more training examples you have, the better. Fine tuning jobs will not proceed without at least 10 training examples, but such a small number isn't enough to noticeably influence model responses. It is best practice to provide hundreds, if not thousands, of training examples to be successful. It's recommened to start with 50 weel-crafted training data.
+ The more training examples you have, the better. Fine-tuning jobs will not proceed without at least 10 training examples, but such a small number isn't enough to noticeably influence model responses. It is best practice to provide hundreds, if not thousands, of training examples to be successful. It's recommended to start with 50 well-crafted training examples.

  In general, doubling the dataset size can lead to a linear increase in model quality. But keep in mind, low quality examples can negatively impact performance. If you train the model on a large amount of internal data, without first pruning the dataset for only the highest quality examples you could end up with a model that performs much worse than expected.
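The size thresholds above are easy to check before uploading a file. A small sketch (the thresholds mirror the paragraph; the helper and its report format are illustrative, not a documented tool):

```python
import json

MIN_EXAMPLES = 10          # jobs won't proceed below this
RECOMMENDED_EXAMPLES = 50  # suggested starting point

def check_training_file(lines):
    """Validate a JSONL training set: every non-empty line must parse and
    carry a `messages` array; report how the count compares to the
    minimum and recommended thresholds."""
    examples = [json.loads(line) for line in lines if line.strip()]
    assert all("messages" in ex for ex in examples), "chat format required"
    return {
        "count": len(examples),
        "meets_minimum": len(examples) >= MIN_EXAMPLES,
        "meets_recommended": len(examples) >= RECOMMENDED_EXAMPLES,
    }

# Tiny illustrative dataset: 12 copies of one example.
line = json.dumps({"messages": [{"role": "user", "content": "hi"},
                                {"role": "assistant", "content": "hello"}]})
print(check_training_file([line] * 12))
```

Twelve examples clears the hard minimum but not the recommended starting point, which matches the guidance above.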

@@ -184,13 +184,13 @@ Review your choices and select **Submit** to start training your new fine-tuned
  ## Check the status of your custom model

- After you submit your fine-tuning job, you see a page with details about your fine-tuned model. You can find the status and more information about your fine-tuned model on the **Fine-tuning** page in Azure AI Foundry portal.
+ After you submit your fine-tuning job, you will see a page with details about your fine-tuned model. You can find the status and more information about your fine-tuned model on the **Fine-tuning** page in Azure AI Foundry portal.

  Your job might be queued behind other jobs on the system. Training your model can take minutes or hours depending on the model and dataset size.

  ## Checkpoints

- When each training epoch completes a checkpoint is generated. A checkpoint is a fully functional version of a model which can both be deployed and used as the target model for subsequent fine-tuning jobs. Checkpoints can be particularly useful, as they can provide a snapshot of your model prior to overfitting having occurred. When a fine-tuning job completes you will have the three most recent versions of the model available to deploy.
+ When each training epoch completes, a checkpoint is generated. A checkpoint is a fully functional version of a model that can both be deployed and used as the target model for subsequent fine-tuning jobs. Checkpoints can be particularly useful, as they may provide snapshots of your model prior to overfitting. When a fine-tuning job completes, you will have the three most recent versions of the model available to deploy.

  ## Analyze your custom model
