Commit 6965600

Update fine-tuning-overview.md
Added a use cases section, updated the availability section
1 parent f9d8624 commit 6965600

articles/ai-foundry/concepts/fine-tuning-overview.md

Lines changed: 28 additions & 12 deletions
@@ -17,37 +17,49 @@ author: sdgilley

# Fine-tune models with Azure AI Foundry

-Fine-tuning customizes a pretrained AI model with additional training on a specific task or dataset to improve performance, add new skills, or enhance accuracy. The result is a new, optimized GenAI model based on the provided examples. This article walks you through use-cases for fine-tuning and how it helps you in your GenAI journey.
+Fine-tuning customizes a pretrained AI model with additional training on a specific task or dataset to improve performance, add new skills, or enhance accuracy. The result is a new, optimized GenAI model based on the provided examples. This article walks you through the key concepts and decisions you'll need to make before you fine-tune, including the type of fine-tuning that's right for your use case and model selection criteria based on the training techniques each model supports.

If you're just getting started with fine-tuning, we recommend **GPT-4.1** for complex skills like language translation, domain adaptation, or advanced code generation. For more focused tasks (such as classification, sentiment analysis, or content moderation) or when distilling knowledge from a more sophisticated model, start with **GPT-4.1-mini** for faster iteration and lower costs.

:::image type="content" source="../media/concepts/model-catalog-fine-tuning.png" alt-text="Screenshot of Azure AI Foundry model catalog and filtering by Fine-tuning tasks." lightbox="../media/concepts/model-catalog-fine-tuning.png":::

+## Top use cases for fine-tuning
+Fine-tuning excels at customizing language models for specific applications and domains. Some key use cases include:
+- **Domain Specialization:** Adapt a language model for a specialized field like medicine, finance, or law, where domain-specific knowledge and terminology are important. Teach the model to understand technical jargon and provide more accurate responses.
+- **Task Performance:** Optimize a model for a specific task like sentiment analysis, code generation, translation, or summarization. You can significantly improve the performance of a smaller model on a specific application, compared to a general-purpose model.
+- **Style and Tone:** Teach the model to match your preferred communication style; for example, adapt the model for formal business writing, brand-specific voice, or technical writing.
+- **Instruction Following:** Improve the model's ability to follow specific formatting requirements, multi-step instructions, or structured outputs. In multi-agent frameworks, teach the model to call the right agent for the right task.
+- **Compliance and Safety:** Train a fine-tuned model to adhere to organizational policies, regulatory requirements, or other guidelines unique to your application.
+- **Language or Cultural Adaptation:** Tailor a language model for a specific language, dialect, or cultural context that may not be well represented in the training data.
+Fine-tuning is especially valuable when a general-purpose model doesn't meet your specific requirements, but you want to avoid the cost and complexity of training a model from scratch.
+
## Serverless or Managed Compute?
+Before picking a model, it's important to select the fine-tuning product that matches your needs. Azure AI Foundry offers two primary modalities for fine-tuning: serverless and managed compute.

- **Serverless** lets you customize models using our capacity with consumption-based pricing starting at $1.70 per million input tokens. We optimize training for speed and scalability while handling all infrastructure management. This approach requires no GPU quotas and provides exclusive access to OpenAI models, though with fewer hyperparameter options than managed compute.
- **Managed compute** offers a wider range of models and advanced customization through AzureML, but requires you to provide your own VMs for training and hosting. While this gives full control over resources, it demands high quotas that many customers lack, doesn't include OpenAI models, and can't leverage our multi-tenancy optimizations.

For most customers, serverless provides the best balance of ease-of-use, cost efficiency, and access to premium models. This document focuses on serverless options.
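
As an illustration of the serverless flow, here's a minimal sketch that uploads a training file and creates a fine-tuning job with the `openai` Python SDK against an Azure OpenAI resource. The endpoint, key, API version, file name, and model choice are placeholders, not values from this article.

```python
# Minimal sketch of serverless fine-tuning with the `openai` Python SDK.
# Endpoint, API key, API version, file name, and model are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
    api_key="YOUR-API-KEY",
    api_version="2024-10-21",
)

# Upload training data (JSONL, one chat-format example per line).
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# Create the job; the service manages all training infrastructure,
# so no GPU quota is needed on your side.
job = client.fine_tuning.jobs.create(
    model="gpt-4.1-mini",
    training_file=training_file.id,
)
print(job.id, job.status)
```

Because training runs on service-managed capacity, billing is consumption-based on the tokens processed during training, in line with the pricing noted above.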

-To find steps to fine-tuning a model in AI Foundry, see [Fine-tune Models in AI Foundry](../how-to/fine-tune-serverless.md) or [Fine-tune models using managed compute](how-to/fine-tune-managed-compute.md).
+For steps to fine-tune a model in AI Foundry, see [Fine-tune Models in AI Foundry](../how-to/fine-tune-serverless.md) or [Fine-tune models using managed compute](../how-to/fine-tune-managed-compute.md). For detailed guidance on fine-tuning OpenAI models, see [Fine-tune Azure OpenAI Models](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?context=%2Fazure%2Fai-foundry%2Fcontext%2Fcontext&tabs=azure-openai&pivots=programming-language-studio).

## Training Techniques

-We offer three training techniques to optimize your models:
+Once you've identified a use case, you need to select the appropriate training technique, which will in turn guide the model you select for training. We offer three training techniques to optimize your models (a data-format sketch for SFT and DPO follows this list):
+
- **Supervised Fine Tuning (SFT):** Foundational technique that trains your model on input-output pairs, teaching it to produce desired responses for specific inputs.
-  - *Best for:* Most use cases including classification, generation, and task-specific adaptation.
+  - *Best for:* Most use cases including domain specialization, task performance, style and tone, instruction following, and language adaptation.
  - *When to use:* Start here for most projects. SFT addresses the broadest number of fine-tuning scenarios and provides reliable results with clear input-output training data.
  - *Supported Models:* GPT 4o, 4o-mini, 4.1, 4.1-mini, 4.1-nano; Llama 2 and Llama 3.1; Phi 4, Phi-4-mini-instruct; Mistral Nemo, Ministral-3B, Mistral Large (2411); NTT Tsuzumi-7b

- **Direct Preference Optimization (DPO):** Trains models to prefer certain types of responses over others by learning from comparative feedback, without requiring a separate reward model.
  - *Best for:* Improving response quality, safety, and alignment with human preferences.
-  - *When to use:* When you have examples of preferred vs. non-preferred outputs, or when you need to optimize for subjective qualities like helpfulness, harmlessness, or style.
+  - *When to use:* When you have examples of preferred vs. non-preferred outputs, or when you need to optimize for subjective qualities like helpfulness, harmlessness, or style. Use cases include adapting models to a specific style and tone, or adapting a model to cultural preferences.
  - *Supported Models:* GPT 4o, 4.1, 4.1-mini, 4.1-nano

- **Reinforcement Fine Tuning (RFT):** Uses reinforcement learning to optimize models based on reward signals, allowing for more complex optimization objectives.
  - *Best for:* Complex optimization scenarios where simple input-output pairs aren't sufficient.
-  - *When to use:* Advanced use cases requiring optimization for metrics like user engagement, task completion rates, or other measurable outcomes. Requires more ML expertise to implement effectively.
+  - *When to use:* RFT is ideal for objective domains like mathematics, chemistry, and physics, where there are clear right and wrong answers and the model already shows some competency. It works best when lucky guessing is difficult and expert evaluators would consistently agree on an unambiguous, correct answer. Requires more ML expertise to implement effectively.
  - *Supported Models:* o4-mini

> Most customers should start with SFT, as it addresses the broadest number of fine-tuning use cases.
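
To make the first two techniques concrete, the sketch below writes one SFT example and one DPO example in the chat-style JSONL layouts used for Azure OpenAI fine-tuning. The file names and example content are invented for illustration.

```python
# Minimal sketch of SFT and DPO training-data formats, assuming the
# chat-style JSONL layouts documented for Azure OpenAI fine-tuning.
# File names and example content are placeholders.
import json

# SFT: each line is a complete conversation ending with the desired
# assistant response (an input-output pair).
sft_example = {
    "messages": [
        {"role": "system", "content": "You are a concise legal-terminology assistant."},
        {"role": "user", "content": "What does 'estoppel' mean?"},
        {"role": "assistant", "content": "Estoppel prevents a party from asserting something contrary to what they previously implied or stated."},
    ]
}

# DPO: each line pairs one prompt with a preferred and a non-preferred
# assistant response, so the model learns from comparative feedback.
dpo_example = {
    "input": {"messages": [{"role": "user", "content": "Summarize our refund policy."}]},
    "preferred_output": [{"role": "assistant", "content": "Refunds are issued within 14 days of purchase with proof of receipt."}],
    "non_preferred_output": [{"role": "assistant", "content": "Maybe ask support, they probably know."}],
}

with open("train_sft.jsonl", "w") as f:
    f.write(json.dumps(sft_example) + "\n")

with open("train_dpo.jsonl", "w") as f:
    f.write(json.dumps(dpo_example) + "\n")
```
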
@@ -60,8 +72,9 @@ Follow this link to view and download [example datasets](https://github.com/Azur
- **Vision + Text (GPT 4o, 4.1):** Some models support vision fine-tuning, accepting both image and text inputs while producing text outputs. Use cases for vision fine-tuning include interpreting charts, graphs, and visual data; content moderation; visual quality assessment; document processing with mixed text and image; and product cataloging from photographs.
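
As a hedged sketch of what a vision fine-tuning example can look like, the snippet below builds one image-plus-text training line, assuming the chat JSONL format with `image_url` content parts; the URL and texts are placeholders.

```python
# Hedged sketch of one vision fine-tuning example (image + text in,
# text out), assuming the chat JSONL format with image_url content parts.
import json

vision_example = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What trend does this chart show?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/q3-revenue.png"}},
            ],
        },
        # The target output is plain text, as with text-only SFT.
        {"role": "assistant", "content": "Revenue grows steadily quarter over quarter, with a spike in Q3."},
    ]
}

with open("train_vision.jsonl", "w") as f:
    f.write(json.dumps(vision_example) + "\n")
```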

## Model Comparison Table
+This table provides an overview of the models available for fine-tuning:

-| Model | Modalities | Techniques | Strengths |
+| Model                | Modalities    | Techniques   | Strengths                            |
|----------------------|---------------|--------------|--------------------------------------|
| GPT 4.1 | Text, Vision | SFT, DPO | Superior performance on sophisticated tasks, nuanced understanding |
| GPT 4.1-mini | Text | SFT, DPO | Fast iteration, cost-effective, good for simple tasks |
@@ -72,22 +85,25 @@ Follow this link to view and download [example datasets](https://github.com/Azur
| Mistral Nemo | Text | SFT | Balance between size and capability |
| Mistral Large (2411) | Text | SFT | Most capable Mistral model, better for complex tasks |

-## Model selection
+## Get started with fine-tuning

1. **Define your use case:** Identify whether you need a highly capable general-purpose model (e.g. GPT 4.1), a smaller cost-effective model for a specific task (GPT 4.1-mini or nano), or a complex reasoning model (o4-mini).
2. **Prepare your data:** Start with 50-100 high-quality examples for initial testing, scaling to 500+ examples for production models.
3. **Choose your technique:** Begin with Supervised Fine Tuning (SFT) unless you have specific requirements for reasoning models / RFT.
4. **Iterate and evaluate:** Fine-tuning is an iterative process: start with a baseline, measure performance, and refine your approach based on results. (A minimal job-monitoring sketch follows this list.)
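
To support step 4, here's a minimal monitoring sketch using the `openai` Python SDK; the job ID, endpoint, key, and API version are placeholders.

```python
# Minimal sketch: polling a fine-tuning job and reading its events with
# the `openai` Python SDK. Endpoint, key, API version, and job ID are
# placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
    api_key="YOUR-API-KEY",
    api_version="2024-10-21",
)

job = client.fine_tuning.jobs.retrieve("ftjob-REPLACE-WITH-YOUR-ID")
print(job.status)  # e.g. running, succeeded, failed

# Inspect recent training events (loss reports, checkpoints, warnings).
for event in client.fine_tuning.jobs.list_events(
    fine_tuning_job_id="ftjob-REPLACE-WITH-YOUR-ID", limit=10
):
    print(event.created_at, event.message)

# Once the job succeeds, this holds the customized model's name,
# which you can then deploy and evaluate against your baseline.
print(job.fine_tuned_model)
```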

-To find steps to fine-tuning a model in AI Foundry, see [Fine-tune Models in AI Foundry](../how-to/fine-tune-serverless.md) or [Fine-tune models using managed compute](how-to/fine-tune-managed-compute.md).
+For steps to fine-tune a model in AI Foundry, see [Fine-tune Models in AI Foundry](../how-to/fine-tune-serverless.md), [Fine-tune Azure OpenAI Models](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?context=%2Fazure%2Fai-foundry%2Fcontext%2Fcontext&tabs=azure-openai&pivots=programming-language-studio), or [Fine-tune models using managed compute](../how-to/fine-tune-managed-compute.md).

-## Supported models for fine-tuning
+## Fine-tuning availability

Now that you know when to use fine-tuning for your use case, you can go to Azure AI Foundry to find models available to fine-tune.

-Fine-tuning is available in specific Azure regions for some models that are deployed via standard deployments. To fine-tune such models, a user must have a hub/project in the region where the model is available for fine-tuning. See [Region availability for models in standard deployment](../how-to/deploy-models-serverless-availability.md) for detailed information.
+**If you are fine-tuning an OpenAI model**, you can use an Azure OpenAI resource, a Foundry resource or default project, or a hub/project. GPT 4.1, 4.1-mini, and 4.1-nano are available in all regions with Global Training. For regional availability, see [Regional Availability and Limits for Azure OpenAI Fine Tuning](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models?tabs=global-standard%2Cstandard-chat-completions#fine-tuning-models).
+
+**If you are fine-tuning a non-OpenAI model using serverless**, you must have a hub/project in the region where the model is available for fine-tuning. See [Region availability for models in standard deployment](../how-to/deploy-models-serverless-availability.md) for detailed information.
+
+**If you are fine-tuning a model using managed compute**, you must have a hub/project and available VM quota for training and inferencing. See [Fine-tune models using managed compute (preview)](../how-to/fine-tune-managed-compute.md) for more details.

-For details about Azure OpenAI in Azure AI Foundry Models that are available for fine-tuning, see the [Azure OpenAI in Foundry Models documentation.](../../ai-services/openai/concepts/models.md#fine-tuning-models)

## Related content
