Commit 9207e98

Merge pull request #3100 from sdgilley/sdg-freshness-update
Freshness pass - connections.md & fine-tuning-overview.md
2 parents 600af69 + 9642674

2 files changed: +34 −30 lines changed

articles/ai-studio/concepts/connections.md

Lines changed: 9 additions & 9 deletions
@@ -9,8 +9,8 @@ ms.custom:
   - build-2024
   - ignite-2024
 ms.topic: conceptual
-ms.date: 11/21/2024
-ms.reviewer: sgilley
+ms.date: 02/21/2025
+ms.reviewer: meerakurup
 ms.author: sgilley
 author: sdgilley
 ---
@@ -31,30 +31,30 @@ As another example, you can [create a connection](../how-to/connections-add.md)
 
 ## Connections to non-Microsoft services
 
-Azure AI Foundry supports connections to non-Microsoft services, including the following:
-- The [API key connection](../how-to/connections-add.md) handles authentication to your specified target on an individual basis. This is the most common non-Microsoft connection type.
-- The [custom connection](../how-to/connections-add.md) allows you to securely store and access keys while storing related properties, such as targets and versions. Custom connections are useful when you have many targets that or cases where you wouldn't need a credential to access. LangChain scenarios are a good example where you would use custom service connections. Custom connections don't manage authentication, so you'll have to manage authentication on your own.
+Azure AI Foundry supports connections to non-Microsoft services, including:
+- The [API key connection](../how-to/connections-add.md) handles authentication to your specified target on an individual basis. API key is the most common non-Microsoft connection type.
+- The [custom connection](../how-to/connections-add.md) allows you to securely store and access keys while storing related properties, such as targets and versions. Custom connections are useful when you have many targets, or in cases where you don't need a credential to access a target. LangChain scenarios are a good example of where you would use custom service connections. Custom connections don't manage authentication, so you have to manage authentication on your own.
 
 ## Connections to datastores
 
 > [!IMPORTANT]
-> Data connections cannot be shared across projects. They are created exclusively in the context of one project.
+> Data connections can't be shared across projects. They're created exclusively in the context of one project.
 
 Creating a data connection allows you to access external data without copying it to your project. Instead, the connection provides a reference to the data source.
 
 A data connection offers these benefits:
 
 - A common, easy-to-use API that interacts with different storage types including Microsoft OneLake, Azure Blob, and Azure Data Lake Gen2.
 - Easier discovery of useful connections in team operations.
-- For credential-based access (service principal/SAS/key), Azure AI Foundry connection secures credential information. This way, you won't need to place that information in your scripts.
+- Credential-based access (service principal/SAS/key). The Azure AI Foundry connection secures credential information so you don't need to place that information in your scripts.
 
 When you create a connection with an existing Azure storage account, you can choose between two different authentication methods:
 
 - **Credential-based**: Authenticate data access with a service principal, shared access signature (SAS) token, or account key. Users with *Reader* project permissions can access the credentials.
 - **Identity-based**: Use your Microsoft Entra ID or managed identity to authenticate data access.
 
 > [!TIP]
-> When using an identity-based connection, Azure role-based access control (Azure RBAC) is used to determine who can access the connection. You must assign the correct Azure RBAC roles to your developers before they can use the connection. For more information, see [Scenario: Connections using Microsoft Entra ID](rbac-ai-studio.md#scenario-connections-using-microsoft-entra-id-authentication).
+> When you use an identity-based connection, Azure role-based access control (Azure RBAC) determines who can access the connection. You must assign the correct Azure RBAC roles to your developers before they can use the connection. For more information, see [Scenario: Connections using Microsoft Entra ID](rbac-ai-studio.md#scenario-connections-using-microsoft-entra-id-authentication).
 
 The following table shows the supported Azure cloud-based storage services and authentication methods:
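As an aside, the two authentication methods described in this hunk can be illustrated with a hypothetical connection configuration fragment. The field names below are illustrative only, not the exact Azure AI Foundry connection schema:

```yaml
# Identity-based: no secret is stored; Azure RBAC on the caller's
# Microsoft Entra ID or managed identity governs access.
name: my_blob_connection
type: azure_blob
target: https://mystorageaccount.blob.core.windows.net/mycontainer
auth: identity

# Credential-based alternative: the secret is persisted to the hub's
# key vault, never placed in scripts or source control.
# auth: sas
# sas_token: <resolved-from-key-vault-at-run-time>
```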
@@ -82,7 +82,7 @@ A Uniform Resource Identifier (URI) represents a storage location on your local
 
 ## Key vaults and secrets
 
-Connections allow you to securely store credentials, authenticate access, and consume data and information. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards. As an administrator, you can audit both shared and project-scoped connections on a hub level (link to connection rbac).
+Connections allow you to securely store credentials, authenticate access, and consume data and information. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards. As an administrator, you can audit both shared and project-scoped connections on a hub level.
 
 Azure connections serve as key vault proxies, and interactions with connections are direct interactions with an Azure key vault. Azure AI Foundry connections store API keys securely, as secrets, in a key vault. The key vault [Azure role-based access control (Azure RBAC)](./rbac-ai-studio.md) controls access to these connection resources. A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they're stored in the hub's key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you avoid credential storage in a YAML file, because a security breach could lead to a credential leak.
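The paragraph above recommends keeping credentials out of YAML files. A minimal sketch of that pattern, reading an API key from an environment variable at run time; the variable name `MY_SERVICE_API_KEY` is hypothetical, not an Azure AI Foundry convention:

```python
import os

def get_api_key(env_var: str = "MY_SERVICE_API_KEY") -> str:
    """Read a secret from the environment instead of hard-coding it in YAML."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"Set {env_var} in the environment; never commit secrets to source control."
        )
    return key

# Demonstration only: a real deployment sets the variable outside the program
# (for example, via the hub's key vault or the hosting platform's secret store).
os.environ["MY_SERVICE_API_KEY"] = "example-key"
key = get_api_key()
```

The same shape works for SAS tokens or service-principal secrets: the script never sees the literal value at authoring time, only at run time.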

articles/ai-studio/concepts/fine-tuning-overview.md

Lines changed: 25 additions & 21 deletions
@@ -1,24 +1,25 @@
 ---
 title: Fine-tuning in Azure AI Foundry portal
 titleSuffix: Azure AI Foundry
-description: This article introduces fine-tuning of models in Azure AI Foundry portal.
+description: This article explains what fine-tuning is and under what circumstances you should consider doing it.
 manager: scottpolly
 ms.service: azure-ai-foundry
 ms.custom:
   - build-2024
   - code01
-ms.topic: conceptual
-ms.date: 10/31/2024
-ms.reviewer: sgilley
+ms.topic: concept-article
+ms.date: 02/21/2025
+ms.reviewer: keli19
 ms.author: sgilley
 author: sdgilley
+#customer intent: As a developer, I want to learn what it means to fine-tune a model.
 ---
 
 # Fine-tune models with Azure AI Foundry
 
-[!INCLUDE [feature-preview](../includes/feature-preview.md)]
+Fine-tuning customizes a pretrained AI model with additional training on a specific task or dataset to improve performance, add new skills, or enhance accuracy. The result is a new, optimized GenAI model based on the provided examples.
 
-Fine-tuning refers to customizing a pre-trained generative AI model with additional training on a specific task or new dataset for enhanced performance, new skills, or improved accuracy. The result is a new, custom GenAI model that's optimized based on the provided examples.
+[!INCLUDE [feature-preview](../includes/feature-preview.md)]
 
 Consider fine-tuning GenAI models to:
 - Scale and adapt to specific enterprise needs
@@ -27,19 +28,20 @@ Consider fine-tuning GenAI models to:
 - Save time and resources with faster and more precise results
 - Get more relevant and context-aware outcomes as models are fine-tuned for specific use cases
 
-[Azure AI Foundry](https://ai.azure.com) offers several models across model providers enabling you to get access to the latest and greatest in the market. You can discover supported models for fine-tuning through our model catalog by using the **Fine-tuning tasks** filter and selecting the model card to learn detailed information about each model. Specific models may be subjected to regional constraints, [view this list for more details](#supported-models-for-fine-tuning).
+[Azure AI Foundry](https://ai.azure.com) offers several models across model providers, enabling you to access the latest and greatest in the market. You can discover supported models for fine-tuning through our model catalog by using the **Fine-tuning tasks** filter and selecting the model card to learn detailed information about each model. Specific models might be subject to regional constraints. [View this list for more details](#supported-models-for-fine-tuning).
 
 :::image type="content" source="../media/concepts/model-catalog-fine-tuning.png" alt-text="Screenshot of Azure AI Foundry model catalog and filtering by Fine-tuning tasks." lightbox="../media/concepts/model-catalog-fine-tuning.png":::
 
-This article will walk you through use-cases for fine-tuning and how this can help you in your GenAI journey.
+This article walks you through use cases for fine-tuning and how it helps you in your GenAI journey.
 
 ## Getting started with fine-tuning
 
 When starting out on your generative AI journey, we recommend you begin with prompt engineering and RAG to familiarize yourself with base models and their capabilities.
 - [Prompt engineering](../../ai-services/openai/concepts/prompt-engineering.md) is a technique that involves designing prompts using tone and style details, example responses, and intent mapping for natural language processing models. This process improves accuracy and relevancy in responses, to optimize the performance of the model.
 - [Retrieval-augmented generation (RAG)](../concepts/retrieval-augmented-generation.md) improves LLM performance by retrieving data from external sources and incorporating it into a prompt. RAG can help businesses achieve customized solutions while maintaining data relevance and optimizing costs.
 
-As you get comfortable and begin building your solution, it's important to understand where prompt engineering falls short and that will help you realize if you should try fine-tuning.
+As you get comfortable and begin building your solution, it's important to understand where prompt engineering falls short and when you should try fine-tuning.
+
 - Is the base model failing on edge cases or exceptions?
 - Is the base model not consistently providing output in the right format?
 - Is it difficult to fit enough examples in the context window to steer the model?
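The last question above — fitting examples into the context window versus training them into the model — can be made concrete with a rough token count. A purely illustrative sketch with hypothetical prompts and a crude whitespace tokenizer standing in for a real one:

```python
# Few-shot prompting spends prompt tokens on examples in every request;
# a fine-tuned model needs only the instruction and the new input.
few_shot_prompt = (
    "Convert the question to SQL.\n"
    "Q: How many users signed up last week? A: SELECT COUNT(*) FROM users ...\n"
    "Q: What is the average order value? A: SELECT AVG(total) FROM orders ...\n"
    "Q: Which region had the most sales?"
)
fine_tuned_prompt = "Convert the question to SQL.\nQ: Which region had the most sales?"

def rough_tokens(text: str) -> int:
    """Crude stand-in for a real tokenizer: count whitespace-separated words."""
    return len(text.split())

savings = rough_tokens(few_shot_prompt) - rough_tokens(fine_tuned_prompt)
print(f"Approximate prompt tokens saved per request: {savings}")
```

Multiplied across every request, that per-call saving is where the cost and latency benefits mentioned later come from.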
@@ -53,26 +55,29 @@ _A customer wants to use GPT-3.5 Turbo to turn natural language questions into q
 
 ### Use cases
 
-Base models are already pre-trained on vast amounts of data and most times you'll add instructions and examples to the prompt to get the quality responses that you're looking for - this process is called "few-shot learning". Fine-tuning allows you to train a model with many more examples that you can tailor to meet your specific use-case, thus improving on few-shot learning. This can reduce the number of tokens in the prompt leading to potential cost savings and requests with lower latency.
+Base models are already pretrained on vast amounts of data. Most times, you add instructions and examples to the prompt to get the quality responses that you're looking for; this process is called "few-shot learning." Fine-tuning allows you to train a model with many more examples that you can tailor to meet your specific use case, thus improving on few-shot learning. Fine-tuning can reduce the number of tokens in the prompt, leading to potential cost savings and requests with lower latency.
+
+Turning natural language into a query language is just one use case where you can "_show not tell_" the model how to behave. Here are some other use cases:
 
-Turning natural language into a query language is just one use case where you can _show not tell_ the model how to behave. Here are some additional use cases:
 - Improve the model's handling of retrieved data
 - Steer model to output content in a specific style, tone, or format
 - Improve the accuracy when you look up information
 - Reduce the length of your prompt
-- Teach new skills (i.e. natural language to code)
+- Teach new skills (that is, natural language to code)
 
-If you identify cost as your primary motivator, proceed with caution. Fine-tuning might reduce costs for certain use cases by shortening prompts or allowing you to use a smaller model. But there may be a higher upfront cost to training, and you have to pay for hosting your own custom model.
+If you identify cost as your primary motivator, proceed with caution. Fine-tuning might reduce costs for certain use cases by shortening prompts or allowing you to use a smaller model. But there might be a higher upfront cost to training, and you have to pay for hosting your own custom model.
 
 ### Steps to fine-tune a model
+
 Here are the general steps to fine-tune a model:
-1. Based on your use case, choose a model that supports your task
-2. Prepare and upload training data
-3. (Optional) Prepare and upload validation data
-4. (Optional) Configure task parameters
-5. Train your model.
-6. Once completed, review metrics and evaluate model. If the results don't meet your benchmark, then go back to step 2.
-7. Use your fine-tuned model
+
+1. Choose a model that supports your task.
+1. Prepare and upload training data.
+1. (Optional) Prepare and upload validation data.
+1. (Optional) Configure task parameters.
+1. Train your model.
+1. Once completed, review the metrics and evaluate the model. If the results don't meet your benchmark, go back to step 2.
+1. Use your fine-tuned model.
 
 It's important to call out that fine-tuning is heavily dependent on the quality of data that you can provide. It's best practice to provide hundreds, if not thousands, of training examples to be successful and get your desired results.
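The data-preparation step in the list above typically means assembling examples in a chat-style JSON Lines file, one training example per line. A minimal sketch, assuming the chat message schema used by OpenAI-style fine-tuning endpoints; the file name and examples are hypothetical:

```python
import json

# Hypothetical training examples for a natural-language-to-SQL fine-tune.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Convert the question to SQL."},
            {"role": "user", "content": "How many users signed up last week?"},
            {
                "role": "assistant",
                "content": "SELECT COUNT(*) FROM users WHERE signup_date >= DATE('now', '-7 day');",
            },
        ]
    },
    # In practice, provide hundreds or thousands of such examples.
]

# JSON Lines format: serialize each example as one line.
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

The resulting file is what you would upload in step 2; the validation file in step 3 uses the same format.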

@@ -87,7 +92,6 @@ For more information on fine-tuning using a managed compute (preview), see [Fine
 
 For details about Azure OpenAI models that are available for fine-tuning, see the [Azure OpenAI Service models documentation](../../ai-services/openai/concepts/models.md#fine-tuning-models) or the [Azure OpenAI models table](#fine-tuning-azure-openai-models) later in this guide.
 
-
 For the Azure OpenAI Service models that you can fine tune, supported regions for fine-tuning include North Central US, Sweden Central, and more.
 
 ### Fine-tuning Azure OpenAI models
