Commit 924f54d

Merge pull request #50720 from MicrosoftDocs/rmcmurray-patch-1
Fixing noun/verb agreement
2 parents: 914ec7e + 35dc7ae

File tree

1 file changed: +1 addition, −1 deletion


learn-pr/wwl-data-ai/fundamentals-generative-ai/includes/3b-use-language-models.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ In general, language models can be considered in two categories: *Large Language

 |Large Language Models (LLMs)|Small Language Models (SLMs)|
 |-|-|
-|LLMs are trained with vast quantities of text that represents a wide range of general subject matter – typically by sourcing data from the Internet and other generally available publications.| SLMs are trained with smaller, more subject-focused datasets|
+|LLMs are trained with vast quantities of text that represent a wide range of general subject matter – typically by sourcing data from the Internet and other generally available publications.| SLMs are trained with smaller, more subject-focused datasets|
 |When trained, LLMs have many billions (even trillions) of parameters (weights that can be applied to vector embeddings to calculate predicted token sequences).|Typically have fewer parameters than LLMs.|
 |Able to exhibit comprehensive language generation capabilities in a wide range of conversational contexts.|This focused vocabulary makes them effective in specific conversational topics, but less effective at more general language generation.|
 |Their large size can impact their performance and make them difficult to deploy locally on devices and computers.|The smaller size of SLMs can provide more options for deployment, including local deployment to devices and on-premises computers; and makes them faster and easier to fine-tune.|
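The table's description of parameters as "weights that can be applied to vector embeddings to calculate predicted token sequences" can be sketched in miniature. This is a hypothetical toy illustration, not code from the module: the names, sizes, and random weights are all invented for demonstration, and real LLMs have billions of such weights rather than a few dozen.

```python
import numpy as np

# Toy "language model": an embedding table plus an output projection.
# Both arrays are the model's parameters (weights).
rng = np.random.default_rng(0)

vocab_size, embed_dim = 6, 4                            # tiny, illustrative sizes
embeddings = rng.normal(size=(vocab_size, embed_dim))   # one vector per token
output_proj = rng.normal(size=(embed_dim, vocab_size))  # scores next tokens

def predict_next_token(token_id: int) -> int:
    """Apply the weights to a token's embedding to score candidate next tokens."""
    vector = embeddings[token_id]   # look up the token's vector embedding
    logits = vector @ output_proj   # weights turn the vector into token scores
    return int(np.argmax(logits))   # highest-scoring token is the prediction

n_params = embeddings.size + output_proj.size
print(n_params)  # 48 parameters in this toy model
next_id = predict_next_token(2)
print(0 <= next_id < vocab_size)  # True: prediction is a valid token id
```

Scaling `vocab_size` and `embed_dim` up by orders of magnitude is, loosely, what separates an SLM from an LLM in the table: the same mechanism, with vastly more weights to store, compute over, and fine-tune.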

0 commit comments
