We use LoRA, or Low-Rank Adaptation, to fine-tune models in a way that reduces the number of trainable parameters without significantly affecting performance. This method works by freezing the original high-rank weight matrices and learning a low-rank update alongside them, so only a small set of additional parameters is trained during the supervised training phase, making the model more manageable and efficient. For users, this makes training faster and more affordable than other techniques.
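The low-rank idea above can be sketched numerically. The snippet below is an illustrative NumPy sketch, not Azure OpenAI's actual implementation: a frozen `d x k` weight matrix `W` is adapted by adding the product of two small learned matrices `B` (`d x r`) and `A` (`r x k`) with rank `r` much smaller than `d` and `k`, so the number of trainable parameters drops from `d * k` to `r * (d + k)`.

```python
import numpy as np

# Illustrative sketch of the low-rank update behind LoRA.
# d, k: dimensions of the frozen pretrained weight matrix; r: adapter rank.
d, k, r = 1024, 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weights
A = rng.standard_normal((r, k)) * 0.01   # small random init
B = np.zeros((d, r))                     # zero init, so W' == W before training

# Effective weight during fine-tuning: W' = W + B @ A.
# Only A and B receive gradient updates; W stays frozen.
W_adapted = W + B @ A

full_params = d * k          # parameters updated by full fine-tuning
lora_params = r * (d + k)    # parameters updated by LoRA
print(f"trainable: {lora_params} vs {full_params} "
      f"({lora_params / full_params:.2%} of full fine-tuning)")
```

With `r = 8` the adapter trains roughly 1.5% of the parameters a full fine-tune of this matrix would touch, which is where the speed and cost savings come from.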
> [!NOTE]
> Azure OpenAI currently supports only text-to-text fine-tuning for all supported models, including GPT-4o mini.
::: zone pivot="programming-language-studio"
[!INCLUDE [Azure OpenAI Studio fine-tuning](../includes/fine-tuning-studio.md)]