`learn-pr/wwl-data-ai/fundamentals-generative-ai/8-knowledge-check.yml` (12 additions, 12 deletions)
```diff
@@ -18,35 +18,35 @@ quiz:
   questions:
   - content: "What are two key components of transformer architecture that support today's generative AI?"
     choices:
-    - content: "Recurrent Neural Networks(RNNs) and memory retention"
+    - content: "Recurrent Neural Networks(RNNs) and memory retention"
       isCorrect: false
-      explanation: "RNNs were developed before transformer architecture. Transformer architecture is able to covercome challenges with RNNs."
+      explanation: "RNNs were developed before transformer architecture. Transformer architecture is able to overcome challenges with RNNs."
     - content: "Attention and positional encoding"
       isCorrect: true
       explanation: "The most important innovations presented in the transformer architecture were *positional encoding* and *multi-head attention*."
     - content: "Prompt engineering and groundedness"
       isCorrect: false
-      explanation: Prompt engineering describes the process of prompt improvement, and while important, is not a key component of transformer architecture. Groundedness is a metric that measures response quality.
+      explanation: Prompt engineering describes the process of prompt improvement, and while important, isn't a key component of transformer architecture. Groundedness is a metric that measures response quality.
   - content: "What is the main difference between Large Language Models and Small Language Models?"
     choices:
-    - content: "Large Language Models are trained with vast quantities of text that represents a wide range of general subject matter, while Small Language Models are trained with smaller, more subject-focused datasets."
+    - content: "Large Language Models are trained with vast quantities of text. The text represents a wide range of general subject matter, while Small Language Models are trained with smaller, more subject-focused datasets."
       isCorrect: true
       explanation: "Large Language Models are trained with vast quantities of text and have many parameters, while Small Language Models are trained with smaller datasets and have fewer parameters."
-    - content: "Large Language Models are trained to include an understanding of context, while Small Language Models are not."
+    - content: "Large Language Models are trained to include an understanding of context, while Small Language Models aren't."
       isCorrect: false
-      explanation: "Both LLMs and SLMs are trained to take into account context."
+      explanation: "Both large language models (LLMs) and small language models (SLMs) are trained to take into account context."
     - content: "Large Language Models have fewer parameters than Small Language Models."
       isCorrect: false
       explanation: "Incorrect. Large Language Models have many billions (even trillions) of parameters, which is more than Small Language Models."
   - content: "What is the purpose of fine-tuning in the context of generative AI?"
     choices:
-    - content: "It is used to manage access, authentication, and data usage in AI models."
+    - content: "It's used to manage access, authentication, and data usage in AI models."
       isCorrect: false
-      explanation: "This describes the role of security and governance controls, not fine-tuning."
+      explanation: "This statement describes the role of security and governance controls, not fine-tuning."
     - content: "It involves connecting a language model to an organization's proprietary database."
       isCorrect: false
-      explanation: "This describes the function of Retrieval-Augmented Generation, not fine-tuning."
-    - content: "It involves further training a pre-trained model on a task-specific dataset to make it more suitable for a particular application."
+      explanation: "This statement describes the function of Retrieval-Augmented Generation, not fine-tuning."
+    - content: "It involves further training a pretrained model on a task-specific dataset to make it more suitable for a particular application."
       isCorrect: true
       explanation: "Fine-tuning allows the model to specialize and perform better at specific tasks that require domain-specific knowledge."
   - content: "What are the four stages in the process of developing and implementing a plan for responsible AI when using generative models according to Microsoft's guidance?"
@@ -56,9 +56,9 @@ quiz:
       explanation: "While it's important to identify and enhance benefits, the focus should be on identifying, measuring, and mitigating potential harms."
     - content: "Identify potential harms, Measure these harms, Mitigate the harms, Operate the solution responsibly"
       isCorrect: true
-      explanation: "These are the four stages in the process of developing and implementing a plan for responsible AI when using generative models."
+      explanation: "These stages are the four stages in the process of developing and implementing a plan for responsible AI when using generative models."
     - content: "Define the problem, Design the solution, Develop the solution, Deploy the solution"
       isCorrect: false
-      explanation: "These are general stages of software development, not specific to responsible AI development."
+      explanation: "These stages are general stages of software development, not specific to responsible AI development."
```
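As background for the correct answer on transformer components: the *positional encoding* named in the quiz explanation can be sketched in a few lines. This is an illustrative NumPy sketch of the sinusoidal scheme from the original transformer paper ("Attention Is All You Need"), not part of the module content itself:

```python
# Illustrative sketch: sinusoidal positional encoding, which injects
# word-order information that attention alone does not provide.
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of sinusoidal position codes."""
    positions = np.arange(seq_len)[:, np.newaxis]    # (seq_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]         # (1, d_model)
    # Each pair of dimensions gets a different wavelength.
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])      # even dims: sine
    encoding[:, 1::2] = np.cos(angles[:, 1::2])      # odd dims: cosine
    return encoding

pe = positional_encoding(seq_len=8, d_model=16)
print(pe.shape)  # (8, 16)
```

Each position in the sequence gets a unique vector, and the model adds it to the token embedding so that attention can distinguish "dog bites man" from "man bites dog".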
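The fine-tuning question above hinges on the distinction between fine-tuning and Retrieval-Augmented Generation (RAG). A toy sketch of that distinction (the classes and method names are made up for illustration, not a real API): fine-tuning updates the pretrained model's state, while RAG leaves the model unchanged and grounds the prompt with retrieved documents at query time:

```python
# Toy classes (hypothetical, for illustration only).
class ToyModel:
    def __init__(self):
        self.updates = 0                      # stands in for model weights

    def update_weights(self, example):
        self.updates += 1                     # fine-tuning changes state

    def generate(self, prompt):
        return f"answer based on: {prompt}"

class ToyDocumentStore:
    def __init__(self, docs):
        self.docs = docs

    def search(self, query):
        # Naive keyword retrieval over proprietary documents.
        return [d for d in self.docs if query.lower() in d.lower()]

def fine_tune(model, dataset):
    """Further train a pretrained model so it specializes in a task."""
    for example in dataset:
        model.update_weights(example)
    return model

def rag_answer(model, query, store):
    """Ground the prompt with retrieved context; no retraining happens."""
    context = store.search(query)
    return model.generate(f"Context: {context}\nQuestion: {query}")

tuned = fine_tune(ToyModel(), ["example 1", "example 2"])
store = ToyDocumentStore(["Contoso returns policy: 30 days", "Unrelated memo"])
reply = rag_answer(ToyModel(), "returns", store)
```

The contrast is the point: `fine_tune` mutates the model, while `rag_answer` only changes what goes into the prompt.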
`learn-pr/wwl-data-ai/fundamentals-generative-ai/includes/3-language-models.md` (1 addition, 1 deletion)
```diff
@@ -1,4 +1,4 @@
-Over the last decades, multiple developments in the field of **natural language processing** (**NLP**) have resulted in achieving **large language models** (**LLMs**). The development and availability of language models led to new ways to interact with applications and systems, such as through generative AI assistants and agents. There are a few key conepts to understand about modern language models:
+Over the last decades, multiple developments in the field of **natural language processing** (**NLP**) have resulted in achieving **large language models** (**LLMs**). The development and availability of language models led to new ways to interact with applications and systems, such as through generative AI assistants and agents. There are a few key concepts to understand about modern language models:
 
 - How they *read*
 - How they understand the relationship between words
```
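As background for the second key concept in the changed paragraph (how models understand the relationship between words): language models represent words as vectors (embeddings), and related words end up closer together in that vector space. A minimal sketch with made-up toy vectors, not real model weights:

```python
# Toy embeddings (invented values for illustration): related words such as
# "dog" and "puppy" point in similar directions; "car" points elsewhere.
import math

embeddings = {
    "dog": [0.9, 0.1, 0.0],
    "puppy": [0.85, 0.15, 0.05],
    "car": [0.0, 0.2, 0.95],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

sim_related = cosine_similarity(embeddings["dog"], embeddings["puppy"])
sim_unrelated = cosine_similarity(embeddings["dog"], embeddings["car"])
```

Real models learn such vectors from training data rather than having them hand-assigned, but the geometric intuition is the same: semantic relationship becomes distance.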