
Commit 5905d18

Acrolinx fixes

1 parent 52b1547 commit 5905d18
2 files changed (+5, -5 lines)

learn-pr/wwl-data-ai/fine-tune-azure-databricks/includes/2-fine-tune-concept.md (4 additions, 4 deletions)

@@ -14,17 +14,17 @@ You'll also want to consider fine-tuning when you have sufficient high-quality t

  Fine-tuning uses the concept of transfer learning, which means taking knowledge learned from one task and applying it to a related task. This approach leverages a model that has already learned useful representations and adapts it to a new task. The process involves several key steps:

- **Start with a foundation model**: You begin with a pretrained LLM that has already learned general language patterns from diverse text data.
+ **Start with a foundation model**: Begin with a pre-trained LLM that has already learned general language patterns from diverse text data.

- **Prepare your training data**: You create a dataset that represents the specific task or domain you want the model to excel at. This data should include examples of the inputs and outputs you expect in your application.
+ **Prepare your training data**: Create a dataset that represents the specific task or domain you want the model to excel at. This data should include examples of the inputs and outputs you expect in your application.

  **Continue training**: The model continues learning by processing your specialized dataset. During this process, the model's parameters are fine-tuned to better capture the patterns in your data while retaining its general language capabilities.

  **Optimize for your task**: The model learns to generate responses that are more relevant, accurate, and consistent with your specific requirements.

  For example, if you're building a customer support chatbot for a software company, your training data might include historical customer questions and the appropriate responses. You'd also want to include product documentation and troubleshooting guides, along with examples of the tone and style you want the model to use.

- The fine-tuning process is more efficient than training from scratch because it leverages the language understanding that the model has already developed, requiring fewer computational resources and less training time.
+ The fine-tuning process is more efficient than training from scratch because it uses the language understanding that the model has already developed, requiring fewer computational resources and less training time.

  ## Explore key factors for fine-tuning

@@ -36,7 +36,7 @@ Successful fine-tuning involves balancing several factors:

  **Layer selection**: Neural networks are organized in layers, where each layer learns different aspects of language patterns. You can choose to fine-tune all layers of the model or freeze certain layers. Freezing means keeping some layers unchanged to preserve the model's general language understanding while adapting others for specific tasks.

- **Dataset quality**: The relevance and quality of your training data directly impact the model's performance. Your data should be representative of real-world scenarios and aligned with your intended use case.
+ **Dataset quality**: The relevance and quality of your training data affects the model's performance. Your data should be representative of real-world scenarios and aligned with your intended use case.

  The goal is to adapt the model to your specific use case while preserving its general language capabilities.
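The "Layer selection" paragraph in the hunk above describes freezing layers so that some parameters stay unchanged while others adapt. A minimal, hypothetical pure-Python sketch of that idea, with toy parameter names and values invented for illustration (a real setup would set `requires_grad = False` on the frozen parameters of an actual model):

```python
# Hypothetical sketch of layer freezing during fine-tuning: parameters in the
# "frozen" set skip the gradient update; names and values are toy assumptions.

def sgd_step(params, grads, frozen, lr=0.1):
    """Update only parameters whose names are not in the frozen set."""
    return {
        name: (value if name in frozen else value - lr * grads[name])
        for name, value in params.items()
    }

# Toy "model": two lower (general-purpose) layers and one task-specific head.
params = {"layer1.w": 0.5, "layer2.w": -0.2, "head.w": 1.0}
grads  = {"layer1.w": 0.3, "layer2.w": 0.1, "head.w": -0.4}

# Freeze the lower layers to preserve general language understanding;
# only the head adapts to the new task.
frozen = {"layer1.w", "layer2.w"}
updated = sgd_step(params, grads, frozen)

print(updated)  # layer1.w and layer2.w unchanged; head.w moves to 1.04
```

Freezing the lower layers is the usual trade-off: it cuts the number of trainable parameters (less compute, less risk of forgetting general capabilities) at the cost of some adaptability.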

learn-pr/wwl-data-ai/fine-tune-azure-databricks/includes/3-prepare-data.md (1 addition, 1 deletion)

@@ -23,4 +23,4 @@ Include the following key elements in your dataset:

  - **Diverse examples**: Include various questions and answers to cover different topics and scenarios. Diverse examples help the model generalize better and handle a wide range of queries.
  - **Human-generated responses**: Use human-generated responses to train the model. Human-generated responses ensure that the model learns to generate natural and accurate replies.

- Well-prepared data is the foundation of successful fine-tuning, so investing time in creating high-quality, representative training examples will directly impact your model's performance.
+ Well-prepared data is the foundation of successful fine-tuning, so investing time in creating high-quality, representative training examples will affect your model's performance.
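The dataset elements listed in this second file can be sketched as a small data-preparation step. This is an illustrative assumption, not a Databricks requirement: the prompt/response JSON Lines layout and the example records are invented for the sketch; real fine-tuning jobs specify their own expected schema.

```python
# Hypothetical sketch: assembling fine-tuning examples as JSON Lines
# (one JSON object per line). The prompt/response field names and the
# sample support questions are assumptions for illustration only.
import json

examples = [
    {"prompt": "How do I reset my password?",
     "response": "Open the account settings page and select 'Reset password'."},
    {"prompt": "Why is the app slow after the update?",
     "response": "Clear the local cache, then restart the app."},
]

def to_jsonl(records):
    """Serialize one example per line, a common fine-tuning file layout."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

jsonl = to_jsonl(examples)
print(jsonl.splitlines()[0])
```

Keeping diverse, human-written prompt/response pairs in a line-oriented format like this makes it easy to inspect, deduplicate, and split the data before training.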
