
Commit d4ac648

Author: Sherry Yang
Commit message: Update for acrolinx.
1 parent 64c14e4 commit d4ac648

File tree

3 files changed: +14 additions, -14 deletions


learn-pr/wwl-data-ai/fundamentals-generative-ai/8-knowledge-check.yml

Lines changed: 12 additions & 12 deletions
@@ -18,35 +18,35 @@ quiz:
   questions:
   - content: "What are two key components of transformer architecture that support today's generative AI?"
     choices:
-    - content: "Recurrent Neural Networks(RNNs) and memory retention"
+    - content: "Recurrent Neural Networks (RNNs) and memory retention"
       isCorrect: false
-      explanation: "RNNs were developed before transformer architecture. Transformer architecture is able to covercome challenges with RNNs."
+      explanation: "RNNs were developed before transformer architecture. Transformer architecture is able to overcome challenges with RNNs."
     - content: "Attention and positional encoding"
       isCorrect: true
       explanation: "The most important innovations presented in the transformer architecture were *positional encoding* and *multi-head attention*."
     - content: "Prompt engineering and groundedness"
       isCorrect: false
-      explanation: Prompt engineering describes the process of prompt improvement, and while important, is not a key component of transformer architecture. Groundedness is a metric that measures response quality.
+      explanation: Prompt engineering describes the process of prompt improvement, and while important, isn't a key component of transformer architecture. Groundedness is a metric that measures response quality.
   - content: "What is the main difference between Large Language Models and Small Language Models?"
     choices:
-    - content: "Large Language Models are trained with vast quantities of text that represents a wide range of general subject matter, while Small Language Models are trained with smaller, more subject-focused datasets."
+    - content: "Large Language Models are trained with vast quantities of text. The text represents a wide range of general subject matter, while Small Language Models are trained with smaller, more subject-focused datasets."
       isCorrect: true
       explanation: "Large Language Models are trained with vast quantities of text and have many parameters, while Small Language Models are trained with smaller datasets and have fewer parameters."
-    - content: "Large Language Models are trained to include an understanding of context, while Small Language Models are not."
+    - content: "Large Language Models are trained to include an understanding of context, while Small Language Models aren't."
       isCorrect: false
-      explanation: "Both LLMs and SLMs are trained to take into account context."
+      explanation: "Both large language models (LLMs) and small language models (SLMs) are trained to take into account context."
     - content: "Large Language Models have fewer parameters than Small Language Models."
       isCorrect: false
       explanation: "Incorrect. Large Language Models have many billions (even trillions) of parameters, which is more than Small Language Models."
   - content: "What is the purpose of fine-tuning in the context of generative AI?"
     choices:
-    - content: "It is used to manage access, authentication, and data usage in AI models."
+    - content: "It's used to manage access, authentication, and data usage in AI models."
       isCorrect: false
-      explanation: "This describes the role of security and governance controls, not fine-tuning."
+      explanation: "This statement describes the role of security and governance controls, not fine-tuning."
     - content: "It involves connecting a language model to an organization's proprietary database."
       isCorrect: false
-      explanation: "This describes the function of Retrieval-Augmented Generation, not fine-tuning."
-    - content: "It involves further training a pre-trained model on a task-specific dataset to make it more suitable for a particular application."
+      explanation: "This statement describes the function of Retrieval-Augmented Generation, not fine-tuning."
+    - content: "It involves further training a pretrained model on a task-specific dataset to make it more suitable for a particular application."
       isCorrect: true
       explanation: "Fine-tuning allows the model to specialize and perform better at specific tasks that require domain-specific knowledge."
   - content: "What are the four stages in the process of developing and implementing a plan for responsible AI when using generative models according to Microsoft's guidance?"
@@ -56,9 +56,9 @@ quiz:
       explanation: "While it's important to identify and enhance benefits, the focus should be on identifying, measuring, and mitigating potential harms."
     - content: "Identify potential harms, Measure these harms, Mitigate the harms, Operate the solution responsibly"
       isCorrect: true
-      explanation: "These are the four stages in the process of developing and implementing a plan for responsible AI when using generative models."
+      explanation: "These stages are the four stages in the process of developing and implementing a plan for responsible AI when using generative models."
     - content: "Define the problem, Design the solution, Develop the solution, Deploy the solution"
       isCorrect: false
-      explanation: "These are general stages of software development, not specific to responsible AI development."
+      explanation: "These stages are general stages of software development, not specific to responsible AI development."

learn-pr/wwl-data-ai/fundamentals-generative-ai/includes/3-language-models.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-Over the last decades, multiple developments in the field of **natural language processing** (**NLP**) have resulted in achieving **large language models** (**LLMs**). The development and availability of language models led to new ways to interact with applications and systems, such as through generative AI assistants and agents. There are a few key conepts to understand about modern language models:
+Over the last decades, multiple developments in the field of **natural language processing** (**NLP**) have resulted in achieving **large language models** (**LLMs**). The development and availability of language models led to new ways to interact with applications and systems, such as through generative AI assistants and agents. There are a few key concepts to understand about modern language models:
 
 - How they *read*
 - How they understand the relationship between words

learn-pr/wwl-data-ai/fundamentals-generative-ai/index.yml

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ summary: "In this module, you explore the way in which language models enable AI
 abstract: |
   By the end of this module, you are able to:
   - Understand what generative AI can do and how it appears in today's applications.
-  - Understand how lanuage models can understand and generate language.
+  - Understand how language models can understand and generate language.
   - Describe ways you can engineer good prompts and quality responses.
   - Describe responsible generative AI concepts.
 prerequisites: |
