Commit 4eb6839

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into adls-dev
2 parents dfbe25c + d6de85d commit 4eb6839

254 files changed: +2492 −3782 lines


.openpublishing.redirection.azure-monitor.json

Lines changed: 48 additions & 2 deletions
@@ -4131,7 +4131,7 @@
     },
     {
       "source_path_from_root": "/articles/azure-monitor/platform/alerts-using-migration-tool.md",
-      "redirect_url": "/azure/azure-monitor/alerts/alerts-using-migration-tool",
+      "redirect_url": "/previous-versions/azure/azure-monitor/alerts/alerts-using-migration-tool",
       "redirect_document_id": false
     },
     {
@@ -6259,6 +6259,46 @@
       "redirect_url": "/previous-versions/azure/azure-monitor/autoscale/tutorial-autoscale-performance-schedule",
       "redirect_document_id": false
     },
+    {
+      "source_path_from_root": "/articles/azure-monitor/alerts/alerts-automatic-migration.md",
+      "redirect_url": "/previous-versions/azure/azure-monitor/alerts/alerts-automatic-migration",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/azure-monitor/alerts/alerts-classic.overview.md",
+      "redirect_url": "/previous-versions/azure/azure-monitor/alerts/alerts-classic.overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/azure-monitor/alerts/alerts-classic-portal.md",
+      "redirect_url": "/previous-versions/azure/azure-monitor/alerts/alerts-classic-portal",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/azure-monitor/alerts/alerts-enable-template.md",
+      "redirect_url": "/previous-versions/azure/azure-monitor/alerts/alerts-enable-template",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/azure-monitor/alerts/alerts-prepare-migration.md",
+      "redirect_url": "/previous-versions/azure/azure-monitor/alerts/alerts-prepare-migration",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/azure-monitor/alerts/alerts-understand-migration.md",
+      "redirect_url": "/previous-versions/azure/azure-monitor/alerts/alerts-understand-migration",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/azure-monitor/alerts/alerts-webhooks.md",
+      "redirect_url": "/previous-versions/azure/azure-monitor/alerts/alerts-webhooks",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/azure-monitor/alerts/api-alerts.md",
+      "redirect_url": "/previous-versions/azure/azure-monitor/alerts/api-alerts",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/azure-monitor/essentials/metrics-supported.md",
       "redirect_url": "/azure/azure-monitor/reference/supported-metrics/metrics-index",
@@ -6621,7 +6661,12 @@
     },
     {
       "source_path_from_root": "/articles/azure-monitor/monitor-reference.md",
-      "redirect_url": "/azure/azure-monitor/overview",
+      "redirect_url": "/azure/azure-monitor/monitor-azure-monitor-reference",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/azure-monitor/azure-monitor-monitoring-reference.md",
+      "redirect_url": "/azure/azure-monitor/monitor-azure-monitor-reference",
       "redirect_document_id": false
     },
     {
@@ -6644,5 +6689,6 @@
       "redirect_url": "/azure/azure-monitor/essentials/data-collection-rule-create-edit?tabs=arm#manually-create-a-dcr",
       "redirect_document_id": false
     }
+
   ]
 }
Lines changed: 15 additions & 0 deletions
@@ -0,0 +1,15 @@
+{
+  "redirections": [
+    {
+      "source_path_from_root": "/articles/private-multi-access-edge-compute-mec/affirmed-private-network-service-overview.md ",
+      "redirect_url": "/azure/private-multi-access-edge-compute-mec/overview",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/private-multi-access-edge-compute-mec/deploy-affirmed-private-network-service-solution.md ",
+      "redirect_url": "/azure/private-multi-access-edge-compute-mec/overview",
+      "redirect_document_id": false
+
+    }
+  ]
+}

articles/advisor/advisor-resiliency-reviews.md

Lines changed: 2 additions & 2 deletions
@@ -61,8 +61,8 @@ You can manage access to Advisor personalized recommendations using the followin
 | **Name** | **Description** |
 |---|:---:|
 |Subscription Reader|View reviews for a workload and recommendations linked to them.|
-|Subscription Owner<br>Subscription Contributor|View reviews for a workload, triage recommendations linked to those reviews, manage review recommendation lifecycle.|
-|Advisor Recommendations Contributor (Assessments and Reviews)|View review recommendations, accept review recommendations, manage review recommendations' lifecycle.|
+|Subscription Owner<br>Subscription Contributor|View reviews for a workload, triage recommendations linked to those reviews, manage the recommendation lifecycle.|
+|Advisor Recommendations Contributor (Assessments and Reviews)|View accepted recommendations, and manage the recommendation lifecycle.|
 
 You can find detailed instructions on how to assign a role using the Azure portal - [Assign Azure roles using the Azure portal - Azure RBAC](/azure/role-based-access-control/role-assignments-portal?tabs=delegate-condition). Additional information is available in [Steps to assign an Azure role - Azure RBAC](/azure/role-based-access-control/role-assignments-steps).
 
Lines changed: 102 additions & 0 deletions
---
title: Azure OpenAI Service getting started with customizing a large language model (LLM)
titleSuffix: Azure OpenAI Service
description: Learn more about the concepts behind customizing an LLM with Azure OpenAI.
ms.topic: conceptual
ms.date: 03/26/2024
ms.service: azure-ai-openai
manager: nitinme
author: mrbullwinkle
ms.author: mbullwin
recommendations: false
---

# Getting started with customizing a large language model (LLM)

There are several techniques for adapting a pre-trained language model to suit a specific task or domain: prompt engineering, RAG (Retrieval Augmented Generation), and fine-tuning. These techniques aren't mutually exclusive; they're complementary methods that can be combined for a given use case. This article explores each technique, along with illustrative use cases and things to consider, and provides links to resources for learning more and getting started.

## Prompt engineering

### Definition

[Prompt engineering](./prompt-engineering.md) is a technique, both art and science, of designing prompts for generative AI models. The process uses in-context learning ([zero-shot and few-shot](./prompt-engineering.md#examples)) and, through iteration, improves the accuracy and relevance of responses, optimizing the performance of the model.

### Illustrative use cases

A marketing manager at an environmentally conscious company can use prompt engineering to guide the model toward descriptions that match the brand's tone and style. For instance, they can add an instruction like "Write a product description for a new line of eco-friendly cleaning products that emphasizes quality and effectiveness, and highlights the use of environmentally friendly ingredients" to the input. This helps the model generate descriptions aligned with the brand's values and messaging.

### Things to consider

- **Prompt engineering** is the starting point for generating the desired output from generative AI models.

- **Craft clear instructions**: Instructions are commonly used in prompts and guide the model's behavior. Be specific and leave as little room for interpretation as possible. Use analogies and descriptive language to help the model understand your desired outcome.

- **Experiment and iterate**: Prompt engineering is an art that requires experimentation and iteration. Practice and gain experience in crafting prompts for different tasks. Every model might behave differently, so it's important to adapt prompt engineering techniques accordingly.

### Getting started

- [Introduction to prompt engineering](./prompt-engineering.md)
- [Prompt engineering techniques](./advanced-prompt-engineering.md)
- [15 tips to become a better prompt engineer for generative AI](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/15-tips-to-become-a-better-prompt-engineer-for-generative-ai/ba-p/3882935)
- [The basics of prompt engineering (video)](https://www.youtube.com/watch?v=e7w6QV1NX1c)

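In practice, iterating on a prompt usually means assembling a structured list of role-tagged messages. As a minimal sketch (the helper and the example strings are illustrative, not part of any Azure OpenAI SDK), a few-shot chat prompt can be built like this:

```python
def build_prompt(instruction, examples, user_input):
    """Assemble a chat-style few-shot prompt: a system instruction,
    alternating example user/assistant turns, then the real query."""
    messages = [{"role": "system", "content": instruction}]
    for question, answer in examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_input})
    return messages

prompt = build_prompt(
    "You write product descriptions in an upbeat, eco-conscious brand voice.",
    [("Describe our bamboo toothbrush.",
      "Gentle on your teeth, gentler on the planet: a fully compostable handle...")],
    "Write a product description for a new line of eco-friendly cleaning products "
    "that emphasizes quality, effectiveness, and environmentally friendly ingredients.",
)
```

The resulting `prompt` list is in the shape a chat completions endpoint accepts, so each iteration of wording changes only the strings, not the structure.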
## RAG (Retrieval Augmented Generation)

### Definition

[RAG (Retrieval Augmented Generation)](../../../ai-studio/concepts/retrieval-augmented-generation.md) is a method that integrates external data into a large language model's prompt to generate relevant responses. This approach is particularly beneficial when working with a large corpus of unstructured text on different topics. It allows answers to be grounded in the organization's knowledge base (KB), providing a more tailored and accurate response.

RAG is also advantageous when answering questions based on an organization's private data, or when the public data the model was trained on might have become outdated. This helps keep responses current and relevant as the underlying data changes.

### Illustrative use case

A corporate HR department wants to provide an intelligent assistant that answers specific employee health-insurance questions such as "Are eyeglasses covered?" RAG is used to ingest the extensive documents associated with insurance plan policies so that the assistant can answer these specific types of questions.

### Things to consider

- RAG helps ground AI output in real-world data and reduces the likelihood of fabrication.

- RAG is helpful when you need to answer questions based on private, proprietary data.

- RAG is helpful when answers need to reflect information more recent than the training cutoff of the [model version](./models.md).

### Getting started

- [Retrieval Augmented Generation in Azure AI Studio - Azure AI Studio | Microsoft Learn](../../../ai-studio/concepts/retrieval-augmented-generation.md)
- [Retrieval Augmented Generation (RAG) in Azure AI Search](../../../search/retrieval-augmented-generation-overview.md)
- [Retrieval Augmented Generation using Azure Machine Learning prompt flow (preview)](../../../machine-learning/concept-retrieval-augmented-generation.md)

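The retrieve-then-generate flow can be sketched in a few lines. This toy example scores documents by naive word overlap purely for illustration; a real system would use embeddings and a vector index such as Azure AI Search, and the sample documents are invented:

```python
def retrieve(query, documents, k=2):
    """Toy relevance ranking: count shared lowercase words between the
    query and each document, return the top-k matches."""
    terms = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(query, documents):
    """Ground the model's answer by placing retrieved text in the prompt."""
    context = "\n\n".join(retrieve(query, documents))
    return [
        {"role": "system",
         "content": "Answer using only the context below.\n\nContext:\n" + context},
        {"role": "user", "content": query},
    ]

docs = [
    "Vision benefits: eyeglasses are covered up to $200 every two years.",
    "Dental plan: two cleanings per year are included.",
    "Travel policy: economy class for flights under six hours.",
]
prompt = build_rag_prompt("Are eyeglasses covered?", docs)
```

The key design point is that grounding happens at prompt-construction time: only the retrieved passages reach the model, so the answer reflects the knowledge base rather than the model's training data.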
## Fine-tuning

### Definition

[Fine-tuning](../how-to/fine-tuning.md), specifically [supervised fine-tuning](https://techcommunity.microsoft.com/t5/ai-azure-ai-services-blog/fine-tuning-now-available-with-azure-openai-service/ba-p/3954693?lightbox-message-images-3954693=516596iC5D02C785903595A) in this context, is an iterative process that adapts an existing large language model to a provided training set to improve performance, teach the model new skills, or reduce latency. It's used when the model needs to learn and generalize over specific topics, particularly when those topics are small in scope.

Fine-tuning requires high-quality training data, in a [special example-based format](../how-to/fine-tuning.md#example-file-format), to create the new fine-tuned large language model. By focusing on specific topics, fine-tuning allows the model to provide more accurate and relevant responses within those areas of focus.

### Illustrative use case

An IT department has been using GPT-4 to convert natural language queries to SQL, but it has found that the responses aren't always reliably grounded in its schema, and the cost is prohibitively high.

The team fine-tunes GPT-3.5-Turbo with hundreds of requests and correct responses, producing a model that performs better than the base model at lower cost and latency.

### Things to consider

- Fine-tuning is an advanced capability; it enhances an LLM with post-cutoff-date and/or domain-specific knowledge. Start by evaluating the baseline performance of a standard model against your requirements before considering this option.

- Having a baseline for performance without fine-tuning is essential for knowing whether fine-tuning has improved model performance. Fine-tuning with bad data makes the base model worse, but without a baseline, it's hard to detect regressions.

- Good cases for fine-tuning include steering the model to output content in a specific, customized style, tone, or format, or tasks where the information needed to steer the model is too long or complex to fit into the prompt window.

- Fine-tuning costs:

  - Fine-tuning can reduce costs across two dimensions: (1) using fewer tokens, depending on the task, and (2) using a smaller model (for example, GPT-3.5-Turbo can potentially be fine-tuned to achieve the same quality as GPT-4 on a particular task).

  - Fine-tuning has upfront costs for training the model, plus hourly costs for hosting the custom model once it's deployed.

### Getting started

- [When to use Azure OpenAI fine-tuning](./fine-tuning-considerations.md)
- [Customize a model with fine-tuning](../how-to/fine-tuning.md)
- [Azure OpenAI GPT 3.5 Turbo fine-tuning tutorial](../tutorials/fine-tune.md)
- [To fine-tune or not to fine-tune? (Video)](https://www.youtube.com/watch?v=0Jo-z-MFxJs)
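
For a use case like the NL-to-SQL example above, the preparation work comes down to collecting example conversations. A minimal sketch of writing a chat-format JSONL training file (the per-line message structure follows the example file format linked in the fine-tuning how-to; the schema and SQL content here are invented for illustration):

```python
import json

# Each line of the JSONL training file is one complete chat exchange:
# a system message, a user request, and the assistant response you want
# the fine-tuned model to learn. A real training set needs many examples.
examples = [
    {"messages": [
        {"role": "system", "content": "You translate questions into SQL for the sales schema."},
        {"role": "user", "content": "How many orders shipped last month?"},
        {"role": "assistant", "content": "SELECT COUNT(*) FROM orders "
                                         "WHERE shipped_at >= date('now', 'start of month', '-1 month');"},
    ]},
]

with open("training_set.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

The same assistant-turn pattern is what makes baseline comparison easy: run the held-out user questions against the base model first, then against the fine-tuned model, and compare the responses.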
