
Commit 3cf593f

Sherry Yang committed
Make naming changes for refresh.
1 parent 51098fa commit 3cf593f

File tree

18 files changed, +56 -178 lines changed


learn-pr/wwl-data-ai/ai-information-extraction/index.yml

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 ### YamlMime:Module
 uid: learn.wwl.ai-information-extraction
 metadata:
-  title: Get started with AI-powered information extraction in Azure
+  title: Get started with AI-powered information extraction on Azure
   description: AI gives you the power to unlock insights from your data. In this module, you'll learn how to use Azure AI services to extract information from content.
   author: graememalcolm
   ms.author: gmalc
@@ -11,7 +11,7 @@ metadata:
   ms.topic: module-standard-task-based
   ms.collection:
   - wwl-ai-copilot
-title: Get started with AI-powered information extraction in Azure
+title: Get started with AI-powered information extraction on Azure
 summary: AI gives you the power to unlock insights from your data. In this module, you'll learn how to use Azure AI services to extract information from content.
 abstract: |
   After completing this module, you'll be able to:

learn-pr/wwl-data-ai/fundamentals-generative-ai/6b-quality-responses.yml

Lines changed: 0 additions & 17 deletions
This file was deleted.

learn-pr/wwl-data-ai/fundamentals-generative-ai/7-exercise.yml

Lines changed: 0 additions & 17 deletions
This file was deleted.

learn-pr/wwl-data-ai/fundamentals-generative-ai/8-knowledge-check.yml

Lines changed: 0 additions & 11 deletions
@@ -38,17 +38,6 @@ quiz:
     - content: "Large Language Models have fewer parameters than Small Language Models."
       isCorrect: false
       explanation: "Incorrect. Large Language Models have many billions (even trillions) of parameters, which is more than Small Language Models."
-  - content: "What is the purpose of fine-tuning in the context of generative AI?"
-    choices:
-    - content: "It's used to manage access, authentication, and data usage in AI models."
-      isCorrect: false
-      explanation: "This statement describes the role of security and governance controls, not fine-tuning."
-    - content: "It involves connecting a language model to an organization's proprietary database."
-      isCorrect: false
-      explanation: "This statement describes the function of Retrieval-Augmented Generation, not fine-tuning."
-    - content: "It involves further training a pretrained model on a task-specific dataset to make it more suitable for a particular application."
-      isCorrect: true
-      explanation: "Fine-tuning allows the model to specialize and perform better at specific tasks that require domain-specific knowledge."
   - content: "What are the four stages in the process of developing and implementing a plan for responsible AI when using generative models according to Microsoft's guidance?"
     choices:
     - content: "Identify potential benefits, Measure the benefits, Enhance the benefits, Operate the solution responsibly"
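The quiz question deleted above defines fine-tuning as further training a pretrained model on a task-specific dataset. Purely as an illustration (not part of this commit), here is a minimal NumPy sketch of that idea, with a frozen stand-in "pretrained" feature extractor and a small task head trained on new labeled data; all names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained base model: a fixed feature extractor.
# (In real fine-tuning this would be a pretrained network's weights.)
base_weights = rng.normal(size=(16, 8))

def base_model(x):
    return np.tanh(x @ base_weights)  # frozen: never updated below

# Small task-specific dataset (hypothetical): inputs and binary labels.
X = rng.normal(size=(32, 16))
y = (X[:, 0] > 0).astype(float)

# Fine-tuning sketch: further training -- here only a small task head --
# on the task-specific dataset, while the base model stays frozen.
w, b = np.zeros(8), 0.0
features = base_model(X)

def mean_loss():
    p = 1 / (1 + np.exp(-(features @ w + b)))
    return np.mean(-(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))

loss_before = mean_loss()
for _ in range(500):  # gradient descent on the head parameters only
    p = 1 / (1 + np.exp(-(features @ w + b)))
    w -= 0.1 * features.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)
loss_after = mean_loss()
```

The point of the sketch is the division of labor: the pretrained weights encode general knowledge and are reused as-is, while only a small amount of task-specific training adapts the model to the new application.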

learn-pr/wwl-data-ai/fundamentals-generative-ai/includes/2-what-is-generative-ai.md

Lines changed: 1 addition & 15 deletions
@@ -33,18 +33,4 @@ def add_numbers(a, b):
 
 ```
 
-## Generative AI applications
-
-Generative AI often appears as chat-based assistants that are integrated into applications to help users find information and perform tasks efficiently. One example of such an application is [Microsoft Copilot](https://copilot.microsoft.com), an AI-powered productivity tool designed to enhance your work experience by providing real-time intelligence and assistance. All generative AI assistants utilize language models. A subset of these assistants also execute programmable tasks.
-
-Assistants that not only produce new content, but execute tasks such as filing taxes or coordinating shipping arrangements, just as a few examples, are known as *agents*. **Agents** are applications that can respond to user input or assess situations *autonomously*, and take appropriate actions. These actions could help with a series of tasks. For example, an "executive assistant" agent could provide details about the location of a meeting on your calendar, then attach a map or automate the booking of a taxi or rideshare service to help you get there.
-
-One way to think of different generative AI applications is by grouping them in buckets. In general, you can categorize industry and personal generative AI assistants into three buckets, each requiring more customization: ready-to-use applications, extendable applications, and applications you build from the foundation.
-
-- **Ready-to-use**: these applications are ready-to-use generative AI assistants. They do not require any programming work on the user's end to utilize the tool. You can start simply by asking the assistant a question.
-- **Extendable**: some ready-to-use applications can also be extended using your own data. These customizations enable the assistant to better support specific business processes or tasks. Microsoft Copilot is an example of technology that is ready-to-use and extendable.
-- **Applications you build from the foundation**: you can build your own assistants and assistants with agentic capabilities starting from a language model. Many language models exist, which we will cover later on in this module.
-
-Often, you will use services to extend or build Generative AI applications. These services provide the infrastructure, tools, and frameworks necessary to develop, train, and deploy generative AI models. For example, Microsoft provides services such as Copilot Studio to extend Microsoft 365 Copilot and Microsoft Azure AI Foundry to build AI from different models.
-
-Next, let's build a solid understanding of how the language models in these generative AI applications work.
+Next, let's build an understanding of the language models that power generative AI.
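The section removed in this file described *agents*: applications that respond to user input or assess situations autonomously and take actions, such as the "executive assistant" example that looks up a meeting and books a ride. As a toy illustration only (every function below is a hypothetical stub, not a real calendar or rideshare API), that behavior could be sketched as:

```python
# Hypothetical sketch of the "executive assistant" agent described in the
# removed text: interpret a request, look up context, then take an action.
def get_next_meeting():
    # Stand-in for a calendar lookup; no real calendar service is called.
    return {"title": "Design review", "location": "Building 4"}

def book_ride(destination: str) -> str:
    # Stand-in for booking a taxi or rideshare service.
    return f"Ride booked to {destination}"

def executive_assistant_agent(user_input: str) -> str:
    # Simplified agent loop: assess the request, then act autonomously.
    if "meeting" in user_input.lower():
        meeting = get_next_meeting()
        action = book_ride(meeting["location"])
        return (f"Your next meeting is '{meeting['title']}' "
                f"at {meeting['location']}. {action}.")
    return "I can help with questions about your meetings."

print(executive_assistant_agent("Where is my next meeting?"))
```

The distinguishing feature, per the removed text, is that the agent does not only generate content; it chains a lookup and an action on the user's behalf.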

learn-pr/wwl-data-ai/fundamentals-generative-ai/includes/3a-transformers.md

Lines changed: 1 addition & 1 deletion
@@ -62,4 +62,4 @@ The *softmax* function is used within the attention function, over the scaled do
 
 The Transformer architecture uses multi-head attention, which means tokens are processed by the attention function several times in parallel. By doing so, a word or sentence can be processed multiple times, in various ways, to extract different kinds of information from the sentence.
 
-The Transformer architecture has allowed us to train models in a more efficient way. Instead of processing each token in a sentence or sequence, attention allows a model to process tokens in parallel in various ways. Next, learn how different types of language models are available for building applications.
+The Transformer architecture has allowed us to train models in a more efficient way. Instead of processing each token in a sentence or sequence, attention allows a model to process tokens in parallel in various ways. Next, learn how different types of language models are available for generative AI.
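The retained text in this file mentions softmax over scaled dot products and multi-head attention. A minimal NumPy sketch of those two ideas follows; it is an illustration only, and for brevity each head simply operates on its own slice of the feature dimension rather than using the learned per-head projections a real Transformer applies.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: softmax(q k^T / sqrt(d_k)) v
    d_k = k.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d_k)
    return softmax(scores) @ v

def multi_head_self_attention(x, num_heads=2):
    # Multi-head attention runs the attention function several times in
    # parallel over the same tokens; here each head gets one slice of the
    # feature dimension (learned projections omitted for simplicity).
    heads = np.split(x, num_heads, axis=-1)
    return np.concatenate([attention(h, h, h) for h in heads], axis=-1)

tokens = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8 features
out = multi_head_self_attention(tokens)
print(out.shape)  # (4, 8): same shape as the input token matrix
```

Because every token attends to every other token in one matrix product, the whole sequence is processed in parallel rather than token by token, which is the efficiency point the paragraph above makes.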

learn-pr/wwl-data-ai/fundamentals-generative-ai/includes/3b-use-language-models.md

Lines changed: 2 additions & 14 deletions
@@ -1,18 +1,6 @@
-Different models exist today which mostly differ by the specific data they've been trained on, or by how they implement attention within their architectures.
+Today, importantly, developers do not need to train models from scratch. To build a generative AI application, you can use pre-trained models. Some language models are open-source and publicly available. Others are offered in proprietary catalogs. Different models exist today which mostly differ by the specific data they've been trained on, or by how they implement attention within their architectures. Language models power the 'app logic' component of the interaction between users and generative AI applications.
 
-Today, importantly, developers do not need to train models from scratch. To build a generative AI application, you can use pre-trained models. Some language models are open-source and publicly available through communities like Hugging Face. Others are offered in proprietary catalogs. For example, Azure offers the most commonly used language models as *foundation models* in the Azure AI Foundry *model catalog*. Foundation models are pretrained on large texts and can be fine-tuned for specific tasks with a relatively small dataset.
-
-You can deploy a foundation model to an endpoint without any extra training. If you want the model to be specialized in a task, or perform better on domain-specific knowledge, you can also choose to fine-tune a foundation model.
-
-Foundation models can be used for various tasks, including:
-
-- Text classification
-- Token classification
-- Question answering
-- Summarization
-- Translation
-
-To choose the foundation model that best fits your needs, you can test out different models. You can also review the data the models are trained on and possible biases and risks a model may have.
+![Diagram of an application.](../media/application-logic-image.png)
 
 ## Large and small language models
 In general, language models can be considered in two categories: *Large Language Models* (LLMs) and *Small Language models* (SLMs).
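The rewritten paragraph in this file says language models power the "app logic" component sitting between users and a generative AI application. As a toy illustration of that layering only (the model call is a stub, not a real pre-trained model or endpoint), the structure might look like:

```python
def fake_language_model(prompt: str) -> str:
    # Stand-in for a pre-trained language model (e.g., one deployed from a
    # model catalog); no real model is invoked in this sketch.
    return f"[model response to: {prompt!r}]"

def app_logic(user_input: str, system_message: str) -> str:
    # The app-logic layer: combine the application's instructions with the
    # user's input, send the assembled prompt to the language model, and
    # return the model's reply to the user interface.
    prompt = f"{system_message}\n\nUser: {user_input}"
    return fake_language_model(prompt)

reply = app_logic("Summarize this report.", "You are a helpful assistant.")
print(reply)
```

The design point is that the application, not the user, owns the wrapping logic: whichever pre-trained model you choose, the app logic decides what the model actually sees.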

learn-pr/wwl-data-ai/fundamentals-generative-ai/includes/6-writing-prompts.md

Lines changed: 1 addition & 1 deletion
@@ -15,4 +15,4 @@ In most cases, an agent doesn't just send your prompt as-is to the language mode
 - The conversation history for the current session, including past prompts and responses. The history enables you to refine the response iteratively while maintaining the context of the conversation.
 - The current prompt – potentially optimized by the agent to reword it appropriately for the model or to add more grounding data to scope the response.
 
-The term *prompt engineering* describes the process of prompt improvement. Both developers who design applications and consumers who use those applications can improve the quality of responses from generative AI by considering prompt engineering. Next, take a look at other methods that are utilized by developers to improve the quality of responses.
+The term *prompt engineering* describes the process of prompt improvement. Both developers who design applications and consumers who use those applications can improve the quality of responses from generative AI by considering prompt engineering.
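The context lines in this file say an agent doesn't send your prompt as-is: it also includes the conversation history and a possibly reworded current prompt. A minimal sketch of assembling that full prompt follows; the exact formatting, and the inclusion of a system message, are illustrative assumptions rather than any specific agent's behavior.

```python
def build_full_prompt(system_message, history, current_prompt):
    """Assemble what is actually sent to the model: a system message
    (assumed here), the session's conversation history, and the current
    prompt -- possibly already reworded by the agent."""
    lines = [f"System: {system_message}"]
    for user_turn, assistant_turn in history:
        # Past prompts and responses keep the conversation's context.
        lines.append(f"User: {user_turn}")
        lines.append(f"Assistant: {assistant_turn}")
    lines.append(f"User: {current_prompt}")
    return "\n".join(lines)

full = build_full_prompt(
    "You answer questions about Azure AI services.",
    [("What is a language model?", "A model trained to generate text.")],
    "Give me an example.",
)
print(full)
```

Because the history travels with every request, a follow-up like "Give me an example" stays meaningful even though the model itself is stateless between calls.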

learn-pr/wwl-data-ai/fundamentals-generative-ai/includes/6b-quality-responses.md

Lines changed: 0 additions & 19 deletions
This file was deleted.

learn-pr/wwl-data-ai/fundamentals-generative-ai/includes/7-exercise.md

Lines changed: 0 additions & 6 deletions
This file was deleted.
