articles/ai-services/openai/concepts/fine-tuning-considerations.md (6 additions, 6 deletions)
@@ -66,9 +66,9 @@ Here’s an example: A customer wanted to use GPT-3.5-Turbo to turn natural lang
  **If you are ready for fine-tuning you:**

  - Have clear examples on how you have approached the challenges in alternate approaches and what’s been tested as possible resolutions to improve performance.
- - You've identified shortcomings using a base model, such as inconsistent performance on edge cases, inability to fit enough few shot prompts in the context window to steer the model, high latency, etc.
+ - Identified shortcomings using a base model, such as inconsistent performance on edge cases, inability to fit enough few shot prompts in the context window to steer the model, high latency, etc.

- **Common signs you might not be ready for fine-tuning yet:**
+ **Common signs you might not be ready for fine-tuning include:**

  - Insufficient knowledge from the model or data source.
  - Inability to find the right data to serve the model.
@@ -86,9 +86,9 @@ Another important point is even with high quality data if your data isn't in the

  **If you are ready for fine-tuning you:**

- - Have identified a dataset for fine-tuning.
- - The dataset is in the appropriate format for training.
- - Some level of curation has been employed to ensure dataset quality.
+ - Identified a dataset for fine-tuning.
+ - Formatted the dataset appropriately for training.
+ - Curated the dataset to ensure quality.

  **Common signs you might not be ready for fine-tuning yet:**

@@ -103,4 +103,4 @@ There isn’t a single right answer to this question, but you should have clearl

  - Watch the [Azure AI Show episode: "To fine-tune or not to fine-tune, that is the question"](https://www.youtube.com/watch?v=0Jo-z-MFxJs)
  - Learn more about [Azure OpenAI fine-tuning](../how-to/fine-tuning.md)
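
For context on the dataset bullets above: Azure OpenAI fine-tuning of chat models expects JSONL training data, one JSON object per line with a `messages` array of role/content pairs. The following is a minimal sketch of preparing such a file; the file name and the example conversations are illustrative and not part of this change.

```python
import json

# Illustrative training examples in the chat format Azure OpenAI fine-tuning expects:
# one JSON object per line, each with a "messages" list of role/content pairs.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are an assistant that writes SQL for an inventory database."},
            {"role": "user", "content": "How many units of SKU 1234 are in stock?"},
            {"role": "assistant", "content": "SELECT quantity FROM inventory WHERE sku = '1234';"},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are an assistant that writes SQL for an inventory database."},
            {"role": "user", "content": "List all SKUs with fewer than 10 units."},
            {"role": "assistant", "content": "SELECT sku FROM inventory WHERE quantity < 10;"},
        ]
    },
]

# Write one JSON object per line (JSONL). "training.jsonl" is a placeholder name.
with open("training.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```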
articles/ai-services/openai/includes/fine-tuning-unified.md (6 additions, 1 deletion)
@@ -11,7 +11,12 @@ author: mrbullwinkle
  ms.author: mbullwin
  ---

- There are two unique fine-tuning experiences in Azure AI Foundry portal. Both allow you to fine-tune Azure OpenAI models, but only the Hub/Project view supports fine-tuning non Azure OpenAI models. If you are only using the Azure OpenAI fine-tuning experience which is available anytime you select a resource in a region where fine-tuning is supported.
+ There are two unique fine-tuning experiences in the Azure AI Foundry portal:
+
+ * [Hub/Project view](https://ai.azure.com) - supports fine-tuning models from multiple providers including Azure OpenAI, Meta Llama, Microsoft Phi, etc.
+ * [Azure OpenAI centric view](https://oai.azure.com) - only supports fine-tuning Azure OpenAI models, but has support for additional features like the [Weights & Biases (W&B) preview integration](../how-to/weights-and-biases-integration.md).
+
+ If you are only fine-tuning Azure OpenAI models, we recommend the Azure OpenAI centric fine-tuning experience which is available by navigating to [https://oai.azure.com](https://oai.azure.com).
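
Whichever portal experience is used, the same fine-tuning job can also be submitted programmatically. Below is a minimal sketch with the `openai` Python package's `AzureOpenAI` client; the endpoint, API version, model name, and file name are placeholder assumptions rather than values from this change.

```python
import os

from openai import AzureOpenAI

# Placeholder endpoint, key source, and API version; substitute your own resource values.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",
)

# Upload the JSONL training file prepared earlier.
training_file = client.files.create(
    file=open("training.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job; the base model name is an example and must be one
# that supports fine-tuning in your region.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo-0125",
)
print(job.id, job.status)
```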