
Commit bb1e378

update

1 parent a544c61

File tree

2 files changed: +153 additions, −17 deletions


articles/ai-services/openai/includes/fine-tuning-python.md

Lines changed: 1 addition & 1 deletion
@@ -241,7 +241,7 @@ print(response)
 response = client.fine_tuning.jobs.create(
     training_file=training_file_id,
     validation_file=validation_file_id,
-    model="gpt-35-turbo", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
+    model="gpt-35-turbo-0613", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
 )
 
 job_id = response.id
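The fix above pins the base model to a specific version. The comment on that line is worth a concrete illustration: Azure OpenAI model names drop the dot/period from the corresponding OpenAI name. A hypothetical one-line helper (my naming, not part of any SDK) makes the convention explicit:

```python
def to_azure_model_name(name: str) -> str:
    """Drop dot/period characters, e.g. 'gpt-3.5-turbo' -> 'gpt-35-turbo'."""
    return name.replace(".", "")

print(to_azure_model_name("gpt-3.5-turbo"))  # gpt-35-turbo
```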

articles/ai-services/openai/tutorials/fine-tune.md

Lines changed: 152 additions & 16 deletions
@@ -47,12 +47,22 @@ In this tutorial you learn how to:
 
 ### Python libraries
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 If you haven't already, you need to install the following libraries:
 
 ```cmd
 pip install "openai==0.28.1" requests tiktoken
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```cmd
+pip install openai requests tiktoken
+```
+
+---
+
 [!INCLUDE [get-key-endpoint](../includes/get-key-endpoint.md)]
 
 ### Environment variables
@@ -273,6 +283,8 @@ p5 / p95: 11.6, 20.9
 
 ## Upload fine-tuning files
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```Python
 # Upload fine-tuning files
 import openai
@@ -302,6 +314,41 @@ print("Training file ID:", training_file_id)
 print("Validation file ID:", validation_file_id)
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+# Upload fine-tuning files
+
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2023-10-01-preview"  # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
+)
+
+training_file_name = 'training_set.jsonl'
+validation_file_name = 'validation_set.jsonl'
+
+# Upload the training and validation dataset files to Azure OpenAI with the SDK.
+
+training_response = client.files.create(
+    file=open(training_file_name, "rb"), purpose="fine-tune"
+)
+training_file_id = training_response.id
+
+validation_response = client.files.create(
+    file=open(validation_file_name, "rb"), purpose="fine-tune"
+)
+validation_file_id = validation_response.id
+
+print("Training file ID:", training_file_id)
+print("Validation file ID:", validation_file_id)
+```
+
+---
+
 **Output:**
 
 ```output
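Before uploading, it can help to sanity-check that each file really is valid JSONL in the chat format the fine-tuning API expects. This is a minimal sketch of my own (not part of the tutorial); the helper name is invented, and it would be called on `training_set.jsonl` and `validation_set.jsonl` just before the upload step above:

```python
import json

def validate_jsonl(path: str) -> int:
    """Check that every non-blank line parses as JSON and carries a
    'messages' list whose entries have 'role' and 'content' keys.
    Returns the number of examples found."""
    count = 0
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            if not line.strip():
                continue
            record = json.loads(line)  # raises ValueError on malformed JSON
            messages = record.get("messages")
            assert isinstance(messages, list), f"line {i}: missing 'messages' list"
            for m in messages:
                assert "role" in m and "content" in m, f"line {i}: bad message"
            count += 1
    return count
```

A bad line then fails loudly with its line number, rather than surfacing later as a failed fine-tuning job.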
@@ -313,6 +360,8 @@ Validation file ID: file-70a3f525ed774e78a77994d7a1698c4b
 
 Now that the fine-tuning files have been successfully uploaded you can submit your fine-tuning training job:
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
 response = openai.FineTuningJob.create(
     training_file=training_file_id,
@@ -330,6 +379,27 @@ print("Status:", response["status"])
 print(response)
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+response = client.fine_tuning.jobs.create(
+    training_file=training_file_id,
+    validation_file=validation_file_id,
+    model="gpt-35-turbo-0613", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
+)
+
+job_id = response.id
+
+# You can use the job ID to monitor the status of the fine-tuning job.
+# The fine-tuning job will take some time to start and complete.
+
+print("Job ID:", response.id)
+print("Status:", response.status)
+print(response.model_dump_json(indent=2))
+```
+
+---
+
 **Output:**
 
 ```output
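Note the access-pattern change between the two SDK versions shown here: 0.28.1 returns dict-like objects indexed as `response["status"]`, while 1.x returns typed objects with attribute access (`response.status`) and a `model_dump_json()` serializer. A tiny stand-in object (not the real SDK, which uses pydantic models) illustrates the 1.x style:

```python
import json
from dataclasses import dataclass, asdict

# Stand-in for a 1.x SDK response object, for illustration only.
@dataclass
class FineTuningJob:
    id: str
    status: str

    def model_dump_json(self, indent: int = 0) -> str:
        return json.dumps(asdict(self), indent=indent)

job = FineTuningJob(id="ftjob-abc123", status="pending")

# 1.x style: attribute access, not dict indexing.
print("Job ID:", job.id)
print("Status:", job.status)
print(job.model_dump_json(indent=2))
```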
@@ -350,26 +420,12 @@ Status: pending
 }
 ```
 
-To retrieve the training job ID, you can run:
-
-```python
-response = openai.FineTuningJob.retrieve(job_id)
-
-print("Job ID:", response["id"])
-print("Status:", response["status"])
-print(response)
-```
-
-**Output:**
-
-```output
-Fine-tuning model with job ID: ftjob-0f4191f0c59a4256b7a797a3d9eed219.
-```
-
 ## Track training job status
 
 If you would like to poll the training job status until it's complete, you can run:
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
 # Track training status
 
@@ -402,6 +458,42 @@ response = openai.FineTuningJob.list()
 print(f'Found {len(response["data"])} fine-tune jobs.')
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+# Track training status
+
+from IPython.display import clear_output
+import time
+
+start_time = time.time()
+
+# Get the status of our fine-tuning job.
+response = client.fine_tuning.jobs.retrieve(job_id)
+
+status = response.status
+
+# If the job isn't done yet, poll it every 10 seconds.
+while status not in ["succeeded", "failed"]:
+    time.sleep(10)
+
+    response = client.fine_tuning.jobs.retrieve(job_id)
+    print(response.model_dump_json(indent=2))
+    print("Elapsed time: {} minutes {} seconds".format(int((time.time() - start_time) // 60), int((time.time() - start_time) % 60)))
+    status = response.status
+    print(f'Status: {status}')
+    clear_output(wait=True)
+
+print(f'Fine-tuning job {job_id} finished with status: {status}')
+
+# List all fine-tuning jobs for this resource.
+print('Checking other fine-tune jobs for this resource.')
+response = client.fine_tuning.jobs.list()
+print(f'Found {len(response.data)} fine-tune jobs.')
+```
+
+---
+
 **Output:**
 
 ```output
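The succeed-or-fail polling pattern in the code above can be exercised without a live job by stubbing out the retrieve call. A self-contained sketch (all names invented for illustration; a real loop would also sleep between polls):

```python
import itertools

# Stub that reports "pending" twice, then "succeeded", standing in for
# retrieving the fine-tuning job's status from the service.
statuses = itertools.chain(["pending", "pending"], itertools.repeat("succeeded"))

def retrieve_status() -> str:
    return next(statuses)

polls = 0
status = retrieve_status()
while status not in ["succeeded", "failed"]:
    # A real loop would time.sleep(10) here before polling again.
    status = retrieve_status()
    polls += 1

print(f"finished with status: {status} after {polls} extra polls")
```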
@@ -432,6 +524,8 @@ Found 2 fine-tune jobs.
 
 To get the full results, run the following:
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
 #Retrieve fine_tuned_model name
 
@@ -441,6 +535,19 @@ print(response)
 fine_tuned_model = response["fine_tuned_model"]
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+#Retrieve fine_tuned_model name
+
+response = client.fine_tuning.jobs.retrieve(job_id)
+
+print(response.model_dump_json(indent=2))
+fine_tuned_model = response.fine_tuned_model
+```
+
+---
+
 ## Deploy fine-tuned model
 
 Unlike the previous Python SDK commands in this tutorial, since the introduction of the quota feature, model deployment must be done using the [REST API](/rest/api/cognitiveservices/accountmanagement/deployments/create-or-update?tabs=HTTP), which requires separate authorization, a different API path, and a different API version.
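Assembling that REST call looks roughly like the following. This is a sketch only: the subscription, resource group, resource name, token, and fine-tuned model ID are placeholders, and the actual HTTP request is left commented out. The body shape follows the Cognitive Services deployments Create Or Update API linked above:

```python
import json

# Placeholder identifiers; substitute your own values.
subscription = "<subscription-id>"
resource_group = "<resource-group>"
resource_name = "<azure-openai-resource>"
model_deployment_name = "gpt-35-turbo-ft"  # custom deployment name you choose
fine_tuned_model = "<fine_tuned_model from the retrieve step>"

deploy_params = {"api-version": "2023-05-01"}
deploy_headers = {
    "Authorization": "Bearer <azure-ad-token>",  # e.g. from `az account get-access-token`
    "Content-Type": "application/json",
}
deploy_data = json.dumps({
    "sku": {"name": "standard", "capacity": 1},
    "properties": {
        "model": {"format": "OpenAI", "name": fine_tuned_model, "version": "1"}
    },
})

# Management-plane path, distinct from the data-plane endpoint used so far.
request_url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices"
    f"/accounts/{resource_name}/deployments/{model_deployment_name}"
)

# import requests
# r = requests.put(request_url, params=deploy_params, headers=deploy_headers, data=deploy_data)
# print(r.json())
print(request_url)
```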
@@ -504,6 +611,8 @@ It isn't uncommon for this process to take some time to complete when dealing wi
 
 After your fine-tuned model is deployed, you can use it like any other deployed model in either the [Chat Playground of Azure OpenAI Studio](https://oai.azure.com), or via the chat completion API. For example, you can send a chat completion call to your deployed model, as shown in the following Python example. You can continue to use the same parameters with your customized model, such as temperature and max_tokens, as you can with other deployed models.
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
 #Note: The openai-python library support for Azure OpenAI is in preview.
 import os
@@ -527,6 +636,33 @@ print(response)
 print(response['choices'][0]['message']['content'])
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2023-05-15"
+)
+
+response = client.chat.completions.create(
+    model="gpt-35-turbo-ft", # model = "Custom deployment name you chose for your fine-tuning model"
+    messages=[
+        {"role": "system", "content": "You are a helpful assistant."},
+        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+        {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
+        {"role": "user", "content": "Do other Azure AI services support this too?"}
+    ]
+)
+
+print(response.choices[0].message.content)
+```
+
+---
+
 ## Delete deployment
 
 Unlike other types of Azure OpenAI models, fine-tuned/customized models have [an hourly hosting cost](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/#pricing) associated with them once they are deployed. It is **strongly recommended** that once you're done with this tutorial and have tested a few chat completion calls against your fine-tuned model, you **delete the model deployment**.
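Deleting the deployment goes through the same management REST API as the deploy step, with an HTTP DELETE against the deployment path. A hedged sketch with placeholder values (the request itself is commented out so nothing is actually deleted):

```python
# Placeholder identifiers; substitute your own values.
subscription = "<subscription-id>"
resource_group = "<resource-group>"
resource_name = "<azure-openai-resource>"
model_deployment_name = "gpt-35-turbo-ft"  # the deployment to remove

delete_url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices"
    f"/accounts/{resource_name}/deployments/{model_deployment_name}"
    "?api-version=2023-05-01"
)

# import requests
# r = requests.delete(delete_url, headers={"Authorization": "Bearer <azure-ad-token>"})
# print(r.status_code)
print(delete_url)
```

Deleting the deployment stops the hourly hosting charge; the fine-tuned model itself remains available to redeploy later.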
