
Commit 3ba2cf3

Sherry Yang committed: Update.
1 parent 5b6c6d7 commit 3ba2cf3

11 files changed: +55 -44 lines changed

learn-pr/wwl-data-ai/introduction-language/2-how-it-works.yml

Lines changed: 4 additions & 4 deletions

```diff
@@ -1,15 +1,15 @@
 ### YamlMime:ModuleUnit
 uid: learn.wwl.introduction-language.how-it-works
-title: How it works
+title: General principles of NLP
 metadata:
-  title: How it works
-  description: "How it works"
+  title: General principles of NLP
+  description: "General principles of NLP"
   ms.date: 5/21/2025
   author: wwlpublish
   ms.author: sheryang
   ms.topic: unit
   ms.custom:
   - N/A
-durationInMinutes: 6
+durationInMinutes: 4
 content: |
   [!include[](includes/2-how-it-works.md)]
```
learn-pr/wwl-data-ai/introduction-language/3-semantic-models.yml

Lines changed: 15 additions & 0 deletions

```diff
@@ -0,0 +1,15 @@
+### YamlMime:ModuleUnit
+uid: learn.wwl.introduction-language.semantic-models
+title: Understand semantic language models
+metadata:
+  title: Understand semantic language models
+  description: "Understand semantic language models"
+  ms.date: 5/21/2025
+  author: wwlpublish
+  ms.author: sheryang
+  ms.topic: unit
+  ms.custom:
+  - N/A
+durationInMinutes: 3
+content: |
+  [!include[](includes/3-semantic-models.md)]
```

learn-pr/wwl-data-ai/introduction-language/3-text-analysis.yml renamed to learn-pr/wwl-data-ai/introduction-language/4-text-analysis.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -12,4 +12,4 @@ metadata:
   - N/A
 durationInMinutes: 4
 content: |
-  [!include[](includes/3-text-analysis.md)]
+  [!include[](includes/4-text-analysis.md)]
```

learn-pr/wwl-data-ai/introduction-language/5-summary.yml renamed to learn-pr/wwl-data-ai/introduction-language/6-summary.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -12,4 +12,4 @@ metadata:
   - N/A
 durationInMinutes: 1
 content: |
-  [!include[](includes/5-summary.md)]
+  [!include[](includes/6-summary.md)]
```

learn-pr/wwl-data-ai/introduction-language/includes/1-introduction.md

Lines changed: 1 addition & 2 deletions

```diff
@@ -6,5 +6,4 @@ Natural language processing might be used to create:
 - A document search application that summarizes documents in a catalog.
 - An application that extracts brands and company names from text.
 
-In this module, let's explore natural language processing.
-
+Next, let's examine some general principles and common techniques used to perform text analysis and other NLP tasks.
```

learn-pr/wwl-data-ai/introduction-language/includes/2-how-it-works.md

Lines changed: 0 additions & 36 deletions

```diff
@@ -1,5 +1,3 @@
-Let's examine some general principles and common techniques used to perform text analysis and other natural language processing (NLP) tasks.
-
 Some of the earliest techniques used to analyze text with computers involve statistical analysis of a body of text (a *corpus*) to infer some kind of semantic meaning. Put simply, if you can determine the most commonly used words in a given document, you can often get a good idea of what the document is about.
 
 ## Tokenization
```
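The frequency-based approach described in the context line above is easy to demonstrate. The following sketch is illustrative only and not part of the commit; the sample document and the naive regex tokenizer are assumptions:

```python
from collections import Counter
import re

# A tiny single-document "corpus".
document = "The dog barked at the cat. The dog chased the cat up a tree."

# Naive tokenization: lowercase the text and pull out alphabetic tokens.
tokens = re.findall(r"[a-z']+", document.lower())

# Count how often each token occurs; after discounting common stop words
# like "the", the most frequent tokens hint at what the document is about.
counts = Counter(tokens)
print(counts.most_common(5))
# e.g. [('the', 5), ('dog', 2), ('cat', 2), ('barked', 1), ('at', 1)]
```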
````diff
@@ -49,37 +47,3 @@ For example, consider the following restaurant reviews, which are already labeled
 
 With enough labeled reviews, you can train a classification model using the tokenized text as *features* and the sentiment (0 or 1) as a *label*. The model will encapsulate a relationship between tokens and sentiment - for example, reviews with tokens for words like `"great"`, `"tasty"`, or `"fun"` are more likely to return a sentiment of **1** (*positive*), while reviews with words like `"terrible"`, `"slow"`, and `"substandard"` are more likely to return **0** (*negative*).
 
-## Semantic language models
-
-As the state of the art for NLP has advanced, the ability to train models that encapsulate the semantic relationship between tokens has led to the emergence of powerful language models. At the heart of these models is the encoding of language tokens as vectors (multi-valued arrays of numbers) known as *embeddings*.
-
-It can be useful to think of the elements in a token embedding vector as coordinates in multidimensional space, so that each token occupies a specific "location." The closer tokens are to one another along a particular dimension, the more semantically related they are. In other words, related words are grouped closer together. As a simple example, suppose the embeddings for our tokens consist of vectors with three elements, for example:
-
-```
-- 4 ("dog"): [10,3,2]
-- 5 ("bark"): [10,2,2]
-- 8 ("cat"): [10,3,1]
-- 9 ("meow"): [10,2,1]
-- 10 ("skateboard"): [3,3,1]
-```
-
-We can plot the location of tokens based on these vectors in three-dimensional space, like this:
-
-![A diagram of tokens plotted on a three-dimensional space.](../media/example-embeddings-graph.png)
-
-The locations of the tokens in the embeddings space include some information about how closely the tokens are related to one another. For example, the token for `"dog"` is close to `"cat"` and also to `"bark"`. The tokens for `"cat"` and `"bark"` are close to `"meow"`. The token for `"skateboard"` is further away from the other tokens.
-
-The language models we use in industry are based on these principles but have greater complexity. For example, the vectors used generally have many more dimensions. There are also multiple ways you can calculate appropriate embeddings for a given set of tokens. Different methods result in different predictions from natural language processing models.
-
-A generalized view of most modern natural language processing solutions is shown in the following diagram. A large corpus of raw text is tokenized and used to train language models, which can support many different types of natural language processing tasks.
-
-![A diagram of the process to tokenize text and train a language model that supports natural language processing tasks.](../media/language-model.png)
-
-Common NLP tasks supported by language models include:
-- Text analysis, such as extracting key terms or identifying named entities in text.
-- Sentiment analysis and opinion mining to categorize text as *positive* or *negative*.
-- Machine translation, in which text is automatically translated from one language to another.
-- Summarization, in which the main points of a large body of text are summarized.
-- Conversational AI solutions such as *bots* or *digital assistants* in which the language model can interpret natural language input and return an appropriate response.
-
-Next, let's learn more about the capabilities made possible by language models.
````
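The classification step described in the retained context lines above (tokenized text as *features*, sentiment 0 or 1 as a *label*) can be sketched in a few lines. This example is illustrative and not part of the commit; the choice of scikit-learn, the toy reviews, and their labels are all assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled restaurant reviews: 1 = positive sentiment, 0 = negative.
reviews = [
    "The food was great and the atmosphere was fun",
    "Tasty dishes and great service",
    "Terrible food and slow service",
    "A substandard, slow, terrible experience",
]
labels = [1, 1, 0, 0]

# Tokenize each review into bag-of-words counts (the features),
# then fit a classifier that maps those counts to sentiment (the label).
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["great tasty food", "slow and substandard"]))
# Expected: [1 0] -- tokens like "great"/"tasty" push toward positive,
# "slow"/"substandard" toward negative.
```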
learn-pr/wwl-data-ai/introduction-language/includes/3-semantic-models.md

Lines changed: 32 additions & 0 deletions

````diff
@@ -0,0 +1,32 @@
+As the state of the art for NLP has advanced, the ability to train models that encapsulate the semantic relationship between tokens has led to the emergence of powerful language models. At the heart of these models is the encoding of language tokens as vectors (multi-valued arrays of numbers) known as *embeddings*.
+
+It can be useful to think of the elements in a token embedding vector as coordinates in multidimensional space, so that each token occupies a specific "location." The closer tokens are to one another along a particular dimension, the more semantically related they are. In other words, related words are grouped closer together. As a simple example, suppose the embeddings for our tokens consist of vectors with three elements, for example:
+
+```
+- 4 ("dog"): [10,3,2]
+- 5 ("bark"): [10,2,2]
+- 8 ("cat"): [10,3,1]
+- 9 ("meow"): [10,2,1]
+- 10 ("skateboard"): [3,3,1]
+```
+
+We can plot the location of tokens based on these vectors in three-dimensional space, like this:
+
+![A diagram of tokens plotted on a three-dimensional space.](../media/example-embeddings-graph.png)
+
+The locations of the tokens in the embeddings space include some information about how closely the tokens are related to one another. For example, the token for `"dog"` is close to `"cat"` and also to `"bark"`. The tokens for `"cat"` and `"bark"` are close to `"meow"`. The token for `"skateboard"` is further away from the other tokens.
+
+The language models we use in industry are based on these principles but have greater complexity. For example, the vectors used generally have many more dimensions. There are also multiple ways you can calculate appropriate embeddings for a given set of tokens. Different methods result in different predictions from natural language processing models.
+
+A generalized view of most modern natural language processing solutions is shown in the following diagram. A large corpus of raw text is tokenized and used to train language models, which can support many different types of natural language processing tasks.
+
+![A diagram of the process to tokenize text and train a language model that supports natural language processing tasks.](../media/language-model.png)
+
+Common NLP tasks supported by language models include:
+- Text analysis, such as extracting key terms or identifying named entities in text.
+- Sentiment analysis and opinion mining to categorize text as *positive* or *negative*.
+- Machine translation, in which text is automatically translated from one language to another.
+- Summarization, in which the main points of a large body of text are summarized.
+- Conversational AI solutions such as *bots* or *digital assistants* in which the language model can interpret natural language input and return an appropriate response.
+
+Next, let's learn more about the capabilities made possible by language models.
````
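To make the geometric intuition in the new unit concrete, the following sketch computes distances between the three-element example vectors above. It is illustrative only and not part of the commit; the choice of Euclidean distance is an assumption (real models typically use many more dimensions and often cosine similarity):

```python
import math

# The three-element example embeddings from the unit text.
embeddings = {
    "dog":        [10, 3, 2],
    "bark":       [10, 2, 2],
    "cat":        [10, 3, 1],
    "meow":       [10, 2, 1],
    "skateboard": [3, 3, 1],
}

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Semantically related tokens sit closer to "dog" than unrelated ones.
for token, vector in embeddings.items():
    print(f"dog -> {token}: {euclidean(embeddings['dog'], vector):.2f}")
# dog -> dog: 0.00, dog -> bark: 1.00, dog -> cat: 1.00,
# dog -> meow: 1.41, dog -> skateboard: 7.07
```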
File renamed without changes.
