Commit c57a6dc

Merge pull request #208267 from DenKenMSFT/UserStory1979626
Updated Python samples to fix minor bugs
2 parents 8af9bbe + e6cc7df

2 files changed: 5 additions (+), 5 deletions (−)

articles/cognitive-services/openai/how-to/fine-tuning.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -112,7 +112,7 @@ training_file_name = 'training.jsonl'
 validation_file_name = 'validation.jsonl'
 
 sample_data = [{"prompt": "When I go to the store, I want an", "completion": "apple"},
-    {"prompt": "When I go to work, I want a", "completion": "coffe"},
+    {"prompt": "When I go to work, I want a", "completion": "coffee"},
     {"prompt": "When I go home, I want a", "completion": "soda"}]
 
 print(f'Generating the training file: {training_file_name}')
```
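The diff only shows the corrected sample data and the `print` line that announces the training file; the write step itself is outside the hunk. A minimal sketch of how such prompt/completion pairs are typically serialized to JSONL (the `open`/write loop is an assumption, not shown in this commit):

```python
import json

training_file_name = 'training.jsonl'

# Sample data as corrected in this commit ("coffe" -> "coffee").
sample_data = [{"prompt": "When I go to the store, I want an", "completion": "apple"},
               {"prompt": "When I go to work, I want a", "completion": "coffee"},
               {"prompt": "When I go home, I want a", "completion": "soda"}]

print(f'Generating the training file: {training_file_name}')
# JSONL: one JSON object per line, which is the format fine-tuning expects.
with open(training_file_name, 'w') as f:
    for entry in sample_data:
        f.write(json.dumps(entry) + '\n')
```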

articles/cognitive-services/openai/includes/python.md

Lines changed: 4 additions & 4 deletions

````diff
@@ -64,9 +64,9 @@ Go to your resource in the Azure portal. The **Endpoint and Keys** can be found
 
 # Send a completion call to generate an answer
 print('Sending a test completion job')
-start_phrase = 'When I go to the store, I want a'
+start_phrase = 'Write a tagline for an ice cream shop. '
 response = openai.Completion.create(engine=deployment_id, prompt=start_phrase, max_tokens=10)
-text = response['choices'][0]['text'].split('\n')[0]
+text = response['choices'][0]['text'].replace('\n', '').replace(' .', '.').strip()
 print(start_phrase+text)
 ```
 
@@ -83,14 +83,14 @@ Go to your resource in the Azure portal. The **Endpoint and Keys** can be found
 
 ```console
 Sending a test completion job
-"When I go to the store, I want a can of black beans"
+Write a tagline for an ice cream shop. The coldest ice cream in town!
 ```
 
 Run the code a few more times to see what other types of responses you get as the response won't always be the same.
 
 ### Understanding your results
 
-Since our example of `When I go to the store, I want a` provides very little context, it's normal for the model to not always return expected results. We're also intentionally limiting the response up to the first newline `\n` character, so occasional truncated responses with only our prompt text may occur as the model's response in that instance was split over multiple lines. If you wish to see the larger responses, you can remove `.split('\n')[0]` from your code and adjust the max number of tokens.
+Since our example of `Write a tagline for an ice cream shop.` provides very little context, it's normal for the model to not always return expected results. You can adjust the maximum number of tokens if the response seems unexpected or truncated.
 
 ## Clean up resources
````
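The substantive change in this file is swapping `.split('\n')[0]` for a `replace`/`strip` chain. A small sketch with a hypothetical raw completion (models sometimes lead with a newline and leave a space before final punctuation) shows why the old post-processing could look like a truncated response:

```python
# Hypothetical raw completion text, assumed for illustration only.
raw = '\nThe coldest ice cream in town .'

# Old post-processing: keep only the text before the first newline.
# When the completion itself starts with '\n', this yields an empty string.
old = raw.split('\n')[0]

# New post-processing from this commit: drop newlines, tidy ' .' and whitespace.
new = raw.replace('\n', '').replace(' .', '.').strip()

print(repr(old))  # ''
print(new)        # The coldest ice cream in town.
```

With the old code, `print(start_phrase+text)` would echo only the prompt in this case, which is the "truncated responses with only our prompt text" behavior the original article warned about.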