Commit b863046: "Acrolinx fixes"
1 parent eee321e
13 files changed: +163 −163 lines

Lines changed: 15 additions & 15 deletions
@@ -1,15 +1,15 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.introduction-language-models-databricks.introduction
-title: Introduction
-metadata:
-  title: Introduction
-  description: "Introduction"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 2
-content: |
-  [!include[](includes/01-introduction.md)]
+### YamlMime:ModuleUnit
+uid: learn.wwl.introduction-language-models-databricks.introduction
+title: Introduction
+metadata:
+  title: Introduction
+  description: "Introduction"
+  ms.date: 07/07/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 2
+content: |
+  [!include[](includes/01-introduction.md)]

Lines changed: 15 additions & 15 deletions
@@ -1,15 +1,15 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.introduction-language-models-databricks.what-is-generative-ai
-title: Understand Generative AI
-metadata:
-  title: Understand Generative AI
-  description: "Understand Generative AI"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 5
-content: |
-  [!include[](includes/02-what-is-generative-ai.md)]
+### YamlMime:ModuleUnit
+uid: learn.wwl.introduction-language-models-databricks.what-is-generative-ai
+title: Understand Generative AI
+metadata:
+  title: Understand Generative AI
+  description: "Understand Generative AI"
+  ms.date: 07/07/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 5
+content: |
+  [!include[](includes/02-what-is-generative-ai.md)]

Lines changed: 15 additions & 15 deletions
@@ -1,15 +1,15 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.introduction-language-models-databricks.what-are-large-language-models
-title: Understand Large Language Models (LLMs)
-metadata:
-  title: Understand Large Language Models (LLMs)
-  description: "Understand Large Language Models (LLMs)"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 5
-content: |
-  [!include[](includes/03-what-are-large-language-models.md)]
+### YamlMime:ModuleUnit
+uid: learn.wwl.introduction-language-models-databricks.what-are-large-language-models
+title: Understand Large Language Models (LLMs)
+metadata:
+  title: Understand Large Language Models (LLMs)
+  description: "Understand Large Language Models (LLMs)"
+  ms.date: 07/07/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 5
+content: |
+  [!include[](includes/03-what-are-large-language-models.md)]

Lines changed: 15 additions & 15 deletions
@@ -1,15 +1,15 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.introduction-language-models-databricks.key-components-llms
-title: Identify key components of LLM applications
-metadata:
-  title: Identify key components of LLM applications
-  description: "Identify key components of LLM applications"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 9
-content: |
-  [!include[](includes/04-key-components-llms.md)]
+### YamlMime:ModuleUnit
+uid: learn.wwl.introduction-language-models-databricks.key-components-llms
+title: Identify key components of LLM applications
+metadata:
+  title: Identify key components of LLM applications
+  description: "Identify key components of LLM applications"
+  ms.date: 07/07/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 9
+content: |
+  [!include[](includes/04-key-components-llms.md)]

Lines changed: 15 additions & 15 deletions
@@ -1,15 +1,15 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.introduction-language-models-databricks.use-llms
-title: Use LLMs for Natural Language Processing (NLP) tasks
-metadata:
-  title: Use LLMs for Natural Language Processing (NLP) tasks
-  description: "Use LLMs for Natural Language Processing (NLP) tasks"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 9
-content: |
-  [!include[](includes/05-use-llms.md)]
+### YamlMime:ModuleUnit
+uid: learn.wwl.introduction-language-models-databricks.use-llms
+title: Use LLMs for Natural Language Processing (NLP) tasks
+metadata:
+  title: Use LLMs for Natural Language Processing (NLP) tasks
+  description: "Use LLMs for Natural Language Processing (NLP) tasks"
+  ms.date: 07/07/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 9
+content: |
+  [!include[](includes/05-use-llms.md)]

Lines changed: 15 additions & 15 deletions
@@ -1,15 +1,15 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.introduction-language-models-databricks.exercise
-title: Exercise - Explore language models
-metadata:
-  title: Exercise - Explore language models
-  description: "Exercise - Explore language models"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 30
-content: |
-  [!include[](includes/06-exercise.md)]
+### YamlMime:ModuleUnit
+uid: learn.wwl.introduction-language-models-databricks.exercise
+title: Exercise - Explore language models
+metadata:
+  title: Exercise - Explore language models
+  description: "Exercise - Explore language models"
+  ms.date: 07/07/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 30
+content: |
+  [!include[](includes/06-exercise.md)]

Lines changed: 50 additions & 50 deletions
@@ -1,50 +1,50 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.introduction-language-models-databricks.knowledge-check
-title: Module assessment
-metadata:
-  title: Module assessment
-  description: "Knowledge check"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-  module_assessment: true
-azureSandbox: false
-labModal: false
-durationInMinutes: 3
-quiz:
-  questions:
-  - content: "What is the primary function of tokenization in Large Language Models (LLMs)?"
-    choices:
-    - content: "To generate responses for user queries."
-      isCorrect: false
-      explanation: "Incorrect. The primary function of tokenization isn't generating responses for user queries."
-    - content: "To convert text into smaller units for easier processing."
-      isCorrect: true
-      explanation: "Correct. Tokenization is a preprocessing step in LLMs where text is broken down into smaller units, such as words, subwords, or characters. Tokenization makes it easier for the model to process and understand the text."
-    - content: "To summarize long texts into shorter versions."
-      isCorrect: false
-      explanation: "Incorrect. Summarization condenses long texts into shorter versions; it isn't the primary function of tokenization."
-  - content: "Which of the following tasks involves determining the emotional tone of a piece of text?"
-    choices:
-    - content: "Summarization"
-      isCorrect: false
-      explanation: "Incorrect. Summarization condenses long texts into shorter versions; it doesn't determine the emotional tone of a piece of text."
-    - content: "Translation"
-      isCorrect: false
-      explanation: "Incorrect. Translation converts text from one language to another; it doesn't determine the emotional tone of a piece of text."
-    - content: "Sentiment Analysis"
-      isCorrect: true
-      explanation: "Correct. Sentiment analysis is the task of identifying the emotional tone of a text, such as determining if the sentiment is positive, negative, or neutral. Sentiment analysis helps in understanding opinions and feelings expressed in the text."
-  - content: "In the context of Large Language Models (LLMs), what does zero-shot classification refer to?"
-    choices:
-    - content: "Classifying text into predefined categories without any prior training examples."
-      isCorrect: true
-      explanation: "Correct. Zero-shot classification involves categorizing text into predefined labels without seeing any labeled examples during training. Zero-shot classification is achieved by using the model's extensive general knowledge and language understanding."
-    - content: "Training the model on a few examples for a specific task."
-      isCorrect: false
-      explanation: "Incorrect. Training the model on a few examples isn't zero-shot classification."
-    - content: "Generating text responses based on a given prompt."
-      isCorrect: false
-      explanation: "Incorrect. Generating text responses based on a given prompt isn't zero-shot classification."
+### YamlMime:ModuleUnit
+uid: learn.wwl.introduction-language-models-databricks.knowledge-check
+title: Module assessment
+metadata:
+  title: Module assessment
+  description: "Knowledge check"
+  ms.date: 07/07/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+  module_assessment: true
+azureSandbox: false
+labModal: false
+durationInMinutes: 3
+quiz:
+  questions:
+  - content: "What is the primary function of tokenization in Large Language Models (LLMs)?"
+    choices:
+    - content: "To generate responses for user queries."
+      isCorrect: false
+      explanation: "Incorrect. The primary function of tokenization isn't generating responses for user queries."
+    - content: "To convert text into smaller units for easier processing."
+      isCorrect: true
+      explanation: "Correct. Tokenization is a preprocessing step in LLMs where text is broken down into smaller units, such as words, subwords, or characters. Tokenization makes it easier for the model to process and understand the text."
+    - content: "To summarize long texts into shorter versions."
+      isCorrect: false
+      explanation: "Incorrect. Summarization condenses long texts into shorter versions; it isn't the primary function of tokenization."
+  - content: "Which of the following tasks involves determining the emotional tone of a piece of text?"
+    choices:
+    - content: "Summarization"
+      isCorrect: false
+      explanation: "Incorrect. Summarization condenses long texts into shorter versions; it doesn't determine the emotional tone of a piece of text."
+    - content: "Translation"
+      isCorrect: false
+      explanation: "Incorrect. Translation converts text from one language to another; it doesn't determine the emotional tone of a piece of text."
+    - content: "Sentiment Analysis"
+      isCorrect: true
+      explanation: "Correct. Sentiment analysis is the task of identifying the emotional tone of a text, such as determining if the sentiment is positive, negative, or neutral. Sentiment analysis helps in understanding opinions and feelings expressed in the text."
+  - content: "In the context of Large Language Models (LLMs), what does zero-shot classification refer to?"
+    choices:
+    - content: "Classifying text into predefined categories without any prior training examples."
+      isCorrect: true
+      explanation: "Correct. Zero-shot classification involves categorizing text into predefined labels without seeing any labeled examples during training. Zero-shot classification is achieved by using the model's extensive general knowledge and language understanding."
+    - content: "Training the model on a few examples for a specific task."
+      isCorrect: false
+      explanation: "Incorrect. Training the model on a few examples isn't zero-shot classification."
+    - content: "Generating text responses based on a given prompt."
+      isCorrect: false
+      explanation: "Incorrect. Generating text responses based on a given prompt isn't zero-shot classification."

Lines changed: 15 additions & 15 deletions
@@ -1,15 +1,15 @@
-### YamlMime:ModuleUnit
-uid: learn.wwl.introduction-language-models-databricks.summary
-title: Summary
-metadata:
-  title: Summary
-  description: "Summary"
-  ms.date: 03/20/2025
-  author: wwlpublish
-  ms.author: theresai
-  ms.topic: unit
-azureSandbox: false
-labModal: false
-durationInMinutes: 1
-content: |
-  [!include[](includes/08-summary.md)]
+### YamlMime:ModuleUnit
+uid: learn.wwl.introduction-language-models-databricks.summary
+title: Summary
+metadata:
+  title: Summary
+  description: "Summary"
+  ms.date: 07/07/2025
+  author: theresa-i
+  ms.author: theresai
+  ms.topic: unit
+azureSandbox: false
+labModal: false
+durationInMinutes: 1
+content: |
+  [!include[](includes/08-summary.md)]

learn-pr/wwl-data-ai/introduction-language-models-databricks/includes/03-what-are-large-language-models.md

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@ Let's start by exploring what LLMs are.
 :::image type="content" source="../media/02-large-language-model.png" alt-text="Diagram of LLMs and foundation models as part of Generative AI.":::
 
 1. **Generative AI** refers to systems that can create new content, such as text, images, audio, or video.
-1. **Large Language Models** (**LLMs**) are a type of Generative AI that focus on language-related tasks.
+1. **Large Language Models** (**LLMs**) are a type of Generative AI that focuses on language-related tasks.
 1. **Foundation models** are the underlying models that serve as the basis for AI applications. The models are trained on broad and diverse datasets and can be adapted to a wide range of downstream tasks.
 
 When you want to achieve Generative AI, you can use LLMs to generate new content. You can use a publicly available foundation model as an LLM, or you can choose to train your own.

learn-pr/wwl-data-ai/introduction-language-models-databricks/includes/04-key-components-llms.md

Lines changed: 3 additions & 3 deletions
@@ -1,8 +1,8 @@
 **Large Language Models** (**LLMs**) are like sophisticated language processing systems designed to understand and generate human language. Think of them as having four essential parts that work together, similar to how a car needs an engine, fuel system, transmission, and steering wheel to function properly.
 
-- **Prompt**: Your instructions to the model. The prompt is how you communicate with the LLM. It's your question, request or instruction.
+- **Prompt**: Your instructions to the model. The prompt is how you communicate with the LLM. It's your question, request, or instruction.
 - **Tokenizer**: Breaks down language. The tokenizer is a language translator that converts human text into a format the computer can understand.
-- **Model**: The 'brain' of the operation. The model is the actual 'brain' that processes information and generates responses. It is typically based on the transformer architecture, utilizes self-attention mechanisms to process text and generates contextually relevant responses.
+- **Model**: The 'brain' of the operation. The model is the actual 'brain' that processes information and generates responses. It's typically based on the transformer architecture, utilizes self-attention mechanisms to process text, and generates contextually relevant responses.
 - **Tasks**: What LLMs can do. Tasks are the different language-related jobs that LLMs can perform, such as text classification, translation, and dialogue generation.
 
 These components create a powerful language processing system:
@@ -82,7 +82,7 @@ Let's use this diagram as an example of how LLM processing works.
 The **LLM** is trained on a large volume of natural language text.
 **Step 1: Input** Training documents and a prompt "When my dog was..." enter the system.
 **Step 2: Encoder (The analyzer)** Breaks text into **tokens** and analyzes its meaning. The **encoder** block processes token sequences using **self-attention** to determine the relationships between tokens or words.
-**Step 3: Embeddings are created** The output from the encoder is a collection of **vectors** (multi-valued numeric arrays) in which each element of the vector represents a semantic attribute of the tokens. These vectors are referred to as **embeddings**. They are numerical representations that capture meaning:
+**Step 3: Embeddings are created** The output from the encoder is a collection of **vectors** (multi-valued numeric arrays) in which each element of the vector represents a semantic attribute of the tokens. These vectors are referred to as **embeddings**. They're numerical representations that capture meaning:
 
 - **dog [10,3,2]** - animal, pet, subject
 - **cat [10,3,1]** - animal, pet, different species
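
The tokenize-then-embed flow in the diff above can be sketched in a few lines of Python. The **dog** and **cat** vectors come straight from the example; the whitespace tokenizer, the tiny embedding table, and the cosine-similarity measure are illustrative assumptions for this sketch, not the module's actual code (real LLMs use learned subword tokenizers and high-dimensional embeddings).

```python
# Toy sketch of the pipeline described in Steps 1-3: tokenize a prompt,
# look up embeddings, and compare vectors. Purely illustrative.
import math

# Example embeddings taken from the text: each element is a semantic attribute.
EMBEDDINGS = {
    "dog": [10.0, 3.0, 2.0],  # animal, pet, subject
    "cat": [10.0, 3.0, 1.0],  # animal, pet, different species
}

def tokenize(text: str) -> list[str]:
    # Real tokenizers split text into subword units; whitespace splitting
    # is enough to show the idea.
    return text.lower().split()

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Similar meanings produce vectors that point in similar directions.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

tokens = tokenize("When my dog was")
print(tokens)  # ['when', 'my', 'dog', 'was']

sim = cosine_similarity(EMBEDDINGS["dog"], EMBEDDINGS["cat"])
print(f"dog vs cat similarity: {sim:.3f}")  # close to 1.0: related meanings
```

Because "dog" and "cat" share the animal and pet attributes, their vectors are nearly parallel and the similarity is close to 1.0, which is exactly what the diagram's near-identical example vectors are meant to convey.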
