Commit b3a618d
Merge pull request #51318 from theresa-i/fine-tune
Content edits to fine tuning module
2 parents 15ec01a + 5905d18 commit b3a618d

14 files changed: +203 -277 lines changed
Lines changed: 16 additions & 16 deletions

@@ -1,16 +1,16 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.fine-tune-azure-databricks.introduction
-title: Introduction
-metadata:
-  title: Introduction
-  description: "Introduction"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 2
-content: |
-  [!include[](includes/1-introduction.md)]
-
+### YamlMime:ModuleUnit
+uid: learn.wwl.fine-tune-azure-databricks.introduction
+title: Introduction
+metadata:
+  title: Introduction
+  description: "Introduction"
+  ms.date: 07/09/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 2
+content: |
+  [!include[](includes/1-introduction.md)]
+
Lines changed: 16 additions & 16 deletions

@@ -1,16 +1,16 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.fine-tune-azure-databricks.fine-tune-concept
-title: What is fine-tuning?
-metadata:
-  title: What is fine-tuning?
-  description: "What is fine-tuning?"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 6
-content: |
-  [!include[](includes/2-fine-tune-concept.md)]
-
+### YamlMime:ModuleUnit
+uid: learn.wwl.fine-tune-azure-databricks.fine-tune-concept
+title: What is fine-tuning?
+metadata:
+  title: What is fine-tuning?
+  description: "What is fine-tuning?"
+  ms.date: 07/09/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 6
+content: |
+  [!include[](includes/2-fine-tune-concept.md)]
+
Lines changed: 16 additions & 16 deletions

@@ -1,16 +1,16 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.fine-tune-azure-databricks.prepare-data
-title: Prepare your data for fine-tuning
-metadata:
-  title: Prepare your data for fine-tuning
-  description: "Prepare your data for fine-tuning"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 5
-content: |
-  [!include[](includes/3-prepare-data.md)]
-
+### YamlMime:ModuleUnit
+uid: learn.wwl.fine-tune-azure-databricks.prepare-data
+title: Prepare your data for fine-tuning
+metadata:
+  title: Prepare your data for fine-tuning
+  description: "Prepare your data for fine-tuning"
+  ms.date: 07/09/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 5
+content: |
+  [!include[](includes/3-prepare-data.md)]
+
Lines changed: 16 additions & 16 deletions

@@ -1,16 +1,16 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.fine-tune-azure-databricks.how-to-fine-tune
-title: Fine-tune an Azure OpenAI model
-metadata:
-  title: Fine-tune an Azure OpenAI model
-  description: "Fine-tune an Azure OpenAI model"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 8
-content: |
-  [!include[](includes/4-how-to-fine-tune.md)]
-
+### YamlMime:ModuleUnit
+uid: learn.wwl.fine-tune-azure-databricks.how-to-fine-tune
+title: Fine-tune an Azure OpenAI model
+metadata:
+  title: Fine-tune an Azure OpenAI model
+  description: "Fine-tune an Azure OpenAI model"
+  ms.date: 07/09/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 8
+content: |
+  [!include[](includes/4-how-to-fine-tune.md)]
+

learn-pr/wwl-data-ai/fine-tune-azure-databricks/5-exercise.yml

Lines changed: 2 additions & 2 deletions

@@ -4,8 +4,8 @@ title: Exercise - Fine-tune an Azure OpenAI model
 metadata:
   title: Exercise - Fine-tune an Azure OpenAI model
   description: "Exercise - Fine-tune an Azure OpenAI model"
-  ms.date: 03/20/2025
-  author: wwlpublish
+  ms.date: 07/09/2025
+  author: theresa-i
   ms.author: theresai
   ms.topic: unit
 azureSandbox: false
Lines changed: 50 additions & 50 deletions

@@ -1,50 +1,50 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.fine-tune-azure-databricks.knowledge-check
-title: Module assessment
-metadata:
-  title: Module assessment
-  description: "Knowledge check"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-  module_assessment: true
-azureSandbox: false
-labModal: false
-durationInMinutes: 3
-quiz:
-  questions:
-  - content: "What is the primary benefit of using Azure Databricks for fine-tuning large language models?"
-    choices:
-    - content: "Simplified data storage management."
-      isCorrect: false
-      explanation: "Incorrect. Simplified data storage management isn't the primary benefit of using Azure Databricks for fine-tuning large language models."
-    - content: "Seamless integration with GitHub."
-      isCorrect: false
-      explanation: "Incorrect. Seamless integration with GitHub isn't the primary benefit of using Azure Databricks for fine-tuning large language models."
-    - content: "Distributed computing capabilities for handling large-scale data."
-      isCorrect: true
-      explanation: "Correct. Azure Databricks is designed to handle large-scale data processing by using distributed computing capabilities. Fine-tuning large language models often require processing vast amounts of data."
-  - content: "Which of the following options is a key step in fine-tuning a large language model using Azure OpenAI within Azure Databricks?"
-    choices:
-    - content: "Deploying the model directly without adjustments."
-      isCorrect: false
-      explanation: "Incorrect. Deploying the model directly isn't the key step in fine-tuning a large language model using Azure OpenAI within Azure Databricks"
-    - content: "Collecting and preparing domain-specific datasets."
-      isCorrect: true
-      explanation: "Correct. Fine-tuning a large language model requires the collection and preparation of domain-specific datasets to tailor the model's predictions to a particular task or industry. This step ensures that the model can generalize well to the specific use case it's being fine-tuned for. The other options don't directly contribute to the fine-tuning process."
-    - content: "Using SQL queries to modify the model architecture."
-      isCorrect: false
-      explanation: "Incorrect. Using SQL queries to modify the model architecture isn't the key step in fine-tuning a large language model using Azure OpenAI within Azure Databricks."
-  - content: "What role does the 'learning rate' parameter play during the fine-tuning process in Azure Databricks?"
-    choices:
-    - content: "It determines the size of the training dataset."
-      isCorrect: false
-      explanation: "Incorrect. The 'learning rate' doesn't determine the size of the training dataset."
-    - content: "It controls the step size for weight updates during training"
-      isCorrect: true
-      explanation: "Correct. The learning rate is a critical hyperparameter that controls how much to change the model's weights with respect to the loss gradient. A learning rate that is too high might cause the model to converge too quickly to a suboptimal solution, while a learning rate that is too low could result in a prolonged training process."
-    - content: "It decides the number of layers to freeze in the model"
-      isCorrect: false
-      explanation: "Incorrect. The 'learning rate' doesn't decide the number of layers to freeze in the model."
-
+### YamlMime:ModuleUnit
+uid: learn.wwl.fine-tune-azure-databricks.knowledge-check
+title: Module assessment
+metadata:
+  title: Module assessment
+  description: "Knowledge check"
+  ms.date: 07/09/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+  module_assessment: true
+azureSandbox: false
+labModal: false
+durationInMinutes: 3
+quiz:
+  questions:
+  - content: "What is the primary benefit of using Azure Databricks for fine-tuning large language models?"
+    choices:
+    - content: "Simplified data storage management."
+      isCorrect: false
+      explanation: "Incorrect. Simplified data storage management isn't the primary benefit of using Azure Databricks for fine-tuning large language models."
+    - content: "Seamless integration with GitHub."
+      isCorrect: false
+      explanation: "Incorrect. Seamless integration with GitHub isn't the primary benefit of using Azure Databricks for fine-tuning large language models."
+    - content: "Distributed computing capabilities for handling large-scale data."
+      isCorrect: true
+      explanation: "Correct. Azure Databricks is designed to handle large-scale data processing by using distributed computing capabilities. Fine-tuning large language models often require processing vast amounts of data."
+  - content: "Which of the following options is a key step in fine-tuning a large language model using Azure OpenAI within Azure Databricks?"
+    choices:
+    - content: "Deploying the model directly without adjustments."
+      isCorrect: false
+      explanation: "Incorrect. Deploying the model directly isn't the key step in fine-tuning a large language model using Azure OpenAI within Azure Databricks"
+    - content: "Collecting and preparing domain-specific datasets."
+      isCorrect: true
+      explanation: "Correct. Fine-tuning a large language model requires the collection and preparation of domain-specific datasets to tailor the model's predictions to a particular task or industry. This step ensures that the model can generalize well to the specific use case it's being fine-tuned for. The other options don't directly contribute to the fine-tuning process."
+    - content: "Using SQL queries to modify the model architecture."
+      isCorrect: false
+      explanation: "Incorrect. Using SQL queries to modify the model architecture isn't the key step in fine-tuning a large language model using Azure OpenAI within Azure Databricks."
+  - content: "What role does the 'learning rate' parameter play during the fine-tuning process in Azure Databricks?"
+    choices:
+    - content: "It determines the size of the training dataset."
+      isCorrect: false
+      explanation: "Incorrect. The 'learning rate' doesn't determine the size of the training dataset."
+    - content: "It controls the step size for weight updates during training"
+      isCorrect: true
+      explanation: "Correct. The learning rate is a critical hyperparameter that controls how much to change the model's weights with respect to the loss gradient. A learning rate that is too high might cause the model to converge too quickly to a suboptimal solution, while a learning rate that is too low could result in a prolonged training process."
+    - content: "It decides the number of layers to freeze in the model"
+      isCorrect: false
+      explanation: "Incorrect. The 'learning rate' doesn't decide the number of layers to freeze in the model."
+
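The quiz explanation above describes the learning rate as the step size applied to weight updates during training. As a minimal, self-contained sketch (plain Python on a hypothetical 1-D quadratic loss, not code from the module), the same too-small / well-chosen / too-large behavior can be shown directly:

```python
# Gradient descent on the 1-D loss L(w) = (w - 3)^2, whose minimum is at w = 3.
# Each update scales the gradient by the learning rate, so the learning rate
# directly controls the step size of every weight update.
def train(learning_rate, steps=50, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)          # dL/dw at the current weight
        w -= learning_rate * grad   # weight update: step = learning_rate * gradient
    return w

small = train(0.01)  # too low: still far from w = 3 after 50 steps (slow convergence)
good = train(0.1)    # well-chosen: converges close to w = 3
large = train(1.1)   # too high: each step overshoots the minimum and diverges
```

With a rate of 0.1 the error shrinks each step; at 1.1 the update overshoots the minimum and the error grows, matching the quiz's point that a too-high rate can prevent convergence to a good solution.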
Lines changed: 16 additions & 16 deletions

@@ -1,16 +1,16 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.fine-tune-azure-databricks.summary
-title: Summary
-metadata:
-  title: Summary
-  description: "Summary"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 1
-content: |
-  [!include[](includes/7-summary.md)]
-
+### YamlMime:ModuleUnit
+uid: learn.wwl.fine-tune-azure-databricks.summary
+title: Summary
+metadata:
+  title: Summary
+  description: "Summary"
+  ms.date: 07/09/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 1
+content: |
+  [!include[](includes/7-summary.md)]
+
Lines changed: 3 additions & 3 deletions

@@ -1,5 +1,5 @@
-Fine-tuning **Large Language Models** (**LLMs**) involves the process of adapting pretrained models, such as GPT-4, to perform specific tasks or operate within a particular domain by training them on a smaller, task-specific dataset.
+Fine-tuning **Large Language Models** (**LLMs**) involves adapting pretrained models, such as GPT-4, to perform specific tasks or operate within particular domains by training them on smaller, task-specific datasets.
 
-You can use this approach to tap into the general knowledge and language skills of LLMs. Fine-tuning LLMs boost their performance in tasks like sentiment analysis, text generation, or understanding specific domain languages.
+You can use fine-tuning to tap into the general knowledge and language capabilities of LLMs while improving their performance for specialized tasks like customer support, technical documentation, or domain-specific question answering.
 
-Fine-tuning lets you create models that fit your specific needs. As a result, your model is more accurate and relevant and you save on computational resources and time compared to starting from scratch.
+By using fine-tuning, you create models that are more accurate and relevant to your specific use case, while saving computational resources and time compared to training from scratch.
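The edited prose above centers on training with smaller, task-specific datasets. For Azure OpenAI chat models, such training data is supplied as a JSON Lines file of chat-format examples (a `messages` array of role/content turns per line). The sketch below, with hypothetical example data, shows preparing and sanity-checking such a file; it is an illustration of the format, not the module's own exercise code:

```python
import json

# Hypothetical domain-specific training examples for a support assistant.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a product-support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Account > Reset password."},
    ]},
]

def to_jsonl(records):
    """Serialize chat-format records as JSON Lines (one JSON object per line),
    checking that every example ends with the assistant reply the model
    should learn to produce."""
    for rec in records:
        roles = [m["role"] for m in rec["messages"]]
        assert roles[-1] == "assistant", "each example must end with the target reply"
    return "\n".join(json.dumps(rec) for rec in records)

# Write the training file that would be uploaded for a fine-tuning job.
with open("training_data.jsonl", "w") as f:
    f.write(to_jsonl(examples))
```

Each line is an independent JSON object, which is what makes the format easy to validate, sample, and split before uploading it to a fine-tuning job.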
