Commit d251730

refresh fine-tuning-overview
1 parent 2cdd3c6 commit d251730

articles/ai-studio/concepts/fine-tuning-overview.md

Lines changed: 11 additions & 9 deletions
@@ -8,7 +8,7 @@ ms.custom:
 - build-2024
 - code01
 ms.topic: conceptual
-ms.date: 10/31/2024
+ms.date: 02/21/2025
 ms.reviewer: sgilley
 ms.author: sgilley
 author: sdgilley
@@ -18,7 +18,7 @@ author: sdgilley

 [!INCLUDE [feature-preview](../includes/feature-preview.md)]

-Fine-tuning refers to customizing a pre-trained generative AI model with additional training on a specific task or new dataset for enhanced performance, new skills, or improved accuracy. The result is a new, custom GenAI model that's optimized based on the provided examples.
+Fine-tuning customizes a pretrained AI model with additional training on a specific task or dataset to improve performance, add new skills, or enhance accuracy. The result is a new, optimized GenAI model based on the provided examples.

 Consider fine-tuning GenAI models to:
 - Scale and adapt to specific enterprise needs
@@ -27,19 +27,20 @@ Consider fine-tuning GenAI models to:
 - Save time and resources with faster and more precise results
 - Get more relevant and context-aware outcomes as models are fine-tuned for specific use cases

-[Azure AI Foundry](https://ai.azure.com) offers several models across model providers enabling you to get access to the latest and greatest in the market. You can discover supported models for fine-tuning through our model catalog by using the **Fine-tuning tasks** filter and selecting the model card to learn detailed information about each model. Specific models may be subjected to regional constraints, [view this list for more details](#supported-models-for-fine-tuning).
+[Azure AI Foundry](https://ai.azure.com) offers several models across model providers enabling you to get access to the latest and greatest in the market. You can discover supported models for fine-tuning through our model catalog by using the **Fine-tuning tasks** filter and selecting the model card to learn detailed information about each model. Specific models might be subjected to regional constraints. [View this list for more details](#supported-models-for-fine-tuning).

 :::image type="content" source="../media/concepts/model-catalog-fine-tuning.png" alt-text="Screenshot of Azure AI Foundry model catalog and filtering by Fine-tuning tasks." lightbox="../media/concepts/model-catalog-fine-tuning.png":::

-This article will walk you through use-cases for fine-tuning and how this can help you in your GenAI journey.
+This article walks you through use-cases for fine-tuning and how it helps you in your GenAI journey.

 ## Getting started with fine-tuning

 When starting out on your generative AI journey, we recommend you begin with prompt engineering and RAG to familiarize yourself with base models and its capabilities.
 - [Prompt engineering](../../ai-services/openai/concepts/prompt-engineering.md) is a technique that involves designing prompts using tone and style details, example responses, and intent mapping for natural language processing models. This process improves accuracy and relevancy in responses, to optimize the performance of the model.
 - [Retrieval-augmented generation (RAG)](../concepts/retrieval-augmented-generation.md) improves LLM performance by retrieving data from external sources and incorporating it into a prompt. RAG can help businesses achieve customized solutions while maintaining data relevance and optimizing costs.

-As you get comfortable and begin building your solution, it's important to understand where prompt engineering falls short and that will help you realize if you should try fine-tuning.
+As you get comfortable and begin building your solution, it's important to understand where prompt engineering falls short and when you should try fine-tuning.
+
 - Is the base model failing on edge cases or exceptions?
 - Is the base model not consistently providing output in the right format?
 - Is it difficult to fit enough examples in the context window to steer the model?
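
To make the hunk above concrete: few-shot prompting means packing instructions and worked examples into every request, which is what strains the context window once the examples multiply. A minimal sketch with the openai Python SDK (v1.x) against Azure OpenAI follows; the endpoint, deployment name, and example rows are illustrative assumptions, not part of this commit.

```python
# Few-shot prompting sketch (illustrative only; names and examples are placeholders).
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

# The worked examples ride along with every call, consuming prompt tokens each time.
messages = [
    {"role": "system", "content": "Translate natural language questions into SQL for the sales database."},
    {"role": "user", "content": "How many orders shipped in January?"},
    {"role": "assistant", "content": "SELECT COUNT(*) FROM orders WHERE ship_month = 1;"},
    {"role": "user", "content": "Which customers spent more than 500 dollars last quarter?"},
]

response = client.chat.completions.create(
    model="gpt-35-turbo",  # placeholder deployment name
    messages=messages,
)
print(response.choices[0].message.content)
```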
@@ -53,16 +54,17 @@ _A customer wants to use GPT-3.5 Turbo to turn natural language questions into q

 ### Use cases

-Base models are already pre-trained on vast amounts of data and most times you'll add instructions and examples to the prompt to get the quality responses that you're looking for - this process is called "few-shot learning". Fine-tuning allows you to train a model with many more examples that you can tailor to meet your specific use-case, thus improving on few-shot learning. This can reduce the number of tokens in the prompt leading to potential cost savings and requests with lower latency.
+Base models are already pretrained on vast amounts of data. Most times you add instructions and examples to the prompt to get the quality responses that you're looking for - this process is called "few-shot learning." Fine-tuning allows you to train a model with many more examples that you can tailor to meet your specific use-case, thus improving on few-shot learning. Fine-tuning can reduce the number of tokens in the prompt leading to potential cost savings and requests with lower latency.
+
+Turning natural language into a query language is just one use case where you can "_show not tell_" the model how to behave. Here are some other use cases:

-Turning natural language into a query language is just one use case where you can _show not tell_ the model how to behave. Here are some additional use cases:
 - Improve the model's handling of retrieved data
 - Steer model to output content in a specific style, tone, or format
 - Improve the accuracy when you look up information
 - Reduce the length of your prompt
-- Teach new skills (i.e. natural language to code)
+- Teach new skills (that is, natural language to code)

-If you identify cost as your primary motivator, proceed with caution. Fine-tuning might reduce costs for certain use cases by shortening prompts or allowing you to use a smaller model. But there may be a higher upfront cost to training, and you have to pay for hosting your own custom model.
+If you identify cost as your primary motivator, proceed with caution. Fine-tuning might reduce costs for certain use cases by shortening prompts or allowing you to use a smaller model. But there might be a higher upfront cost to training, and you have to pay for hosting your own custom model.

 ### Steps to fine-tune a model
 Here are the general steps to fine-tune a model:
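
The list of general steps continues beyond this hunk. As a rough, hedged sketch of the usual prepare-data-then-submit-a-job flow with the openai Python SDK (v1.x) against Azure OpenAI, see below; the file name, base model name, and training rows are assumptions and are not taken from this commit.

```python
# Illustrative fine-tuning sketch (not part of this commit; names and data are placeholders).
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_key="<your-api-key>",                                    # placeholder
    api_version="2024-06-01",
)

# Each JSONL row is one complete conversation demonstrating the target behavior,
# here the natural-language-to-query scenario mentioned in the article.
examples = [
    {"messages": [
        {"role": "system", "content": "Translate natural language questions into SQL for the sales database."},
        {"role": "user", "content": "How many orders shipped in January?"},
        {"role": "assistant", "content": "SELECT COUNT(*) FROM orders WHERE ship_month = 1;"},
    ]},
    # ...many more rows; fine-tuning is meant for far more examples than a prompt can hold.
]
with open("train.jsonl", "w") as f:
    for row in examples:
        f.write(json.dumps(row) + "\n")

# Upload the training file, then create the fine-tuning job against a base model.
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-35-turbo-0613",  # placeholder base model name
)
print(job.id, job.status)
```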
