
Commit 1a801ec

Merge pull request #265408 from mrbullwinkle/mrb_02_06_2024_fine_tuning
[Azure OpenAI] [Release branch] Remove section
2 parents 395867a + e146ab0 (commit 1a801ec)

1 file changed (+1, -76 lines)


articles/ai-services/openai/how-to/fine-tuning.md

Lines changed: 1 addition & 76 deletions
@@ -7,7 +7,7 @@ manager: nitinme
ms.service: azure-ai-openai
ms.custom: build-2023, build-2023-dataai, devx-track-python
ms.topic: how-to
-ms.date: 01/30/2024
+ms.date: 02/06/2024
author: mrbullwinkle
ms.author: mbullwin
zone_pivot_groups: openai-fine-tuning
@@ -41,81 +41,6 @@ A fine-tuned model improves on the few-shot learning approach by training the mo

::: zone-end

## Function calling

> [!IMPORTANT]
> The `functions` and `function_call` parameters have been deprecated with the release of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) version of the API. However, the fine-tuning API currently requires use of the legacy parameters.

Models that use the chat completions API support [function calling](../how-to/function-calling.md). Unfortunately, functions defined in your chat completion calls don't always perform as expected. Fine-tuning your model with function calling examples can improve model output by enabling you to:

* Get similarly formatted responses even when the full function definition isn't present. (This can potentially save you money on prompt tokens.)
* Get more accurate and consistent outputs.

When constructing a training file of function calling examples, you would take a function definition like this:

```json
{
    "messages": [
        {"role": "user", "content": "What is the weather in San Francisco?"},
        {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}
    ],
    "functions": [{
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city and country, e.g. San Francisco, USA"},
                "format": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "format"]
        }
    }]
}
```

And express the information as a single line within your `.jsonl` training file, as shown below:

```jsonl
{"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}, {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}], "functions": [{"name": "get_current_weather", "description": "Get the current weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and country, e.g. San Francisco, USA"}, "format": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "format"]}}]}
```

As with all fine-tuning training, your example file requires at least 10 examples.
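
Purely as an illustration, additional lines in the same `.jsonl` file would follow the same pattern, one JSON object per line (the extra queries below are hypothetical):

```jsonl
{"messages": [{"role": "user", "content": "What is the weather in Paris?"}, {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"Paris, France\", \"format\": \"celsius\"}"}}], "functions": [{"name": "get_current_weather", "description": "Get the current weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and country, e.g. San Francisco, USA"}, "format": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "format"]}}]}
{"messages": [{"role": "user", "content": "What is the weather in Tokyo?"}, {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"Tokyo, Japan\", \"format\": \"celsius\"}"}}], "functions": [{"name": "get_current_weather", "description": "Get the current weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and country, e.g. San Francisco, USA"}, "format": {"type": "string", "enum": ["celsius", "fahrenheit"]}}, "required": ["location", "format"]}}]}
```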

### Optimize for cost

If you're trying to optimize for fewer prompt tokens after fine-tuning your model on the full function definitions, OpenAI recommends experimenting with the following (see the sketch after this list):

* Omit function and parameter descriptions: remove the `description` field from the function and its parameters.
* Omit parameters: remove the entire `properties` field from the `parameters` object.
* Omit the function entirely: remove the entire function object from the `functions` array.
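
As a rough sketch of the first two options (the trimmed definition below is hypothetical, derived from the earlier example), the `get_current_weather` training entry could drop the `description` fields and the `properties` schema while keeping the same conversation:

```json
{
    "messages": [
        {"role": "user", "content": "What is the weather in San Francisco?"},
        {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}
    ],
    "functions": [{
        "name": "get_current_weather",
        "parameters": {"type": "object"}
    }]
}
```

If you omit the function entirely (the third option), the `functions` array for that training line would simply be empty: `"functions": []`.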

### Optimize for quality

Alternatively, if you're trying to improve the quality of the function calling output, it's recommended that the function definitions remain identical between the fine-tuning training dataset and subsequent chat completion calls.
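
As an illustrative sketch (the request-body shape is assumed here, the endpoint, deployment, and `api-version` are omitted, and the user question is hypothetical), a subsequent chat completion call would then carry the same `functions` definition that was used in the training data:

```json
{
    "messages": [
        {"role": "user", "content": "What is the weather in Berlin?"}
    ],
    "functions": [{
        "name": "get_current_weather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city and country, e.g. San Francisco, USA"},
                "format": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "format"]
        }
    }]
}
```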

### Customize model responses to function outputs

Fine-tuning based on function calling examples can also be used to improve the model's response to function outputs. To accomplish this, you include examples consisting of function response messages and assistant response messages where the function response is interpreted and put into context by the assistant.

```json
{
    "messages": [
        {"role": "user", "content": "What is the weather in San Francisco?"},
        {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}},
        {"role": "function", "name": "get_current_weather", "content": "21.0"},
        {"role": "assistant", "content": "It is 21 degrees celsius in San Francisco, CA"}
    ],
    "functions": [...] // same as before
}
```

As with the example before, this example is artificially expanded for readability. The actual entry in the `.jsonl` training file would be a single line:

```jsonl
{"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}, {"role": "assistant", "function_call": {"name": "get_current_weather", "arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"}}, {"role": "function", "name": "get_current_weather", "content": "21.0"}, {"role": "assistant", "content": "It is 21 degrees celsius in San Francisco, CA"}], "functions": []}
```

## Troubleshooting

### How do I enable fine-tuning? Create a custom model is greyed out in Azure OpenAI Studio?
