# Tutorial: Part 2 - Build a custom knowledge retrieval (RAG) app with the Azure AI Foundry SDK

-In this tutorial, you use the Azure AI Foundry SDK (and other libraries) to build, configure, evaluate, and deploy a chat app for your retail company called Contoso Trek. Your retail company specializes in outdoor camping gear and clothing. The chat app should answer questions about your products and services. For example, the chat app can answer questions such as "which tent is the most waterproof?" or "what is the best sleeping bag for cold weather?".
+In this tutorial, you use the Azure AI Foundry SDK (and other libraries) to build, configure, and evaluate a chat app for your retail company called Contoso Trek. Your retail company specializes in outdoor camping gear and clothing. The chat app should answer questions about your products and services. For example, the chat app can answer questions such as "which tent is the most waterproof?" or "what is the best sleeping bag for cold weather?".

This part two shows you how to enhance a basic chat application by adding [retrieval augmented generation (RAG)](../concepts/retrieval-augmented-generation.md) to ground the responses in your custom data. RAG is a pattern that uses your data with a large language model (LLM) to generate answers specific to your data. In this part two, you learn how to:

> [!div class="checklist"]
+> - Get example data
> - Create a search index of the data for the chat app to use
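The RAG pattern this part builds can be sketched without any Azure service at all: retrieve the documents most relevant to a question, then ground the model's prompt in them. The sketch below is illustrative only; the scoring function, product snippets, and function names are invented for this example, and the tutorial itself uses an Azure AI Search index and the Azure AI Foundry SDK instead.

```python
import re

def _words(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(question: str, document: str) -> int:
    """Toy relevance score: number of words shared with the question."""
    return len(_words(question) & _words(document))

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most relevant to the question."""
    return sorted(documents, key=lambda d: score(question, d), reverse=True)[:top_k]

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Ground the prompt in retrieved context instead of asking the model cold."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Invented product snippets standing in for the tutorial's example data.
catalog = [
    "TrailMaster X4 tent: waterproof rating 2000 mm, sleeps four.",
    "SkyView tent: water resistant, lightweight, packs small.",
    "CozyNights sleeping bag: rated to -10 C for cold weather.",
]
prompt = build_grounded_prompt("which tent is the most waterproof?", catalog)
```

In a real deployment, `retrieve` is replaced by a vector or hybrid query against the search index, but the grounding step is the same: retrieved text goes into the prompt so the LLM answers from your data.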
articles/ai-studio/tutorials/copilot-sdk-create-resources.md (1 addition, 1 deletion)

@@ -17,7 +17,7 @@ author: sdgilley
# Tutorial: Part 1 - Set up project and development environment to build a custom knowledge retrieval (RAG) app with the Azure AI Foundry SDK

-In this tutorial, you use the Azure AI Foundry SDK (and other libraries) to build, configure, evaluate, and deploy a chat app for your retail company called Contoso Trek. Your retail company specializes in outdoor camping gear and clothing. The chat app should answer questions about your products and services. For example, the chat app can answer questions such as "which tent is the most waterproof?" or "what is the best sleeping bag for cold weather?".
+In this tutorial, you use the Azure AI Foundry SDK (and other libraries) to build, configure, and evaluate a chat app for your retail company called Contoso Trek. Your retail company specializes in outdoor camping gear and clothing. The chat app should answer questions about your products and services. For example, the chat app can answer questions such as "which tent is the most waterproof?" or "what is the best sleeping bag for cold weather?".

This tutorial is part one of a three-part tutorial. This part one gets you ready to write code in part two and evaluate your chat app in part three. In this part, you:
articles/ai-studio/tutorials/copilot-sdk-evaluate.md (2 additions, 1 deletion)

@@ -16,11 +16,12 @@ author: sdgilley

# Tutorial: Part 3 - Evaluate a custom chat application with the Azure AI Foundry SDK

-In this tutorial, you use the Azure AI SDK (and other libraries) to evaluate and deploy the chat app you built in [Part 2 of the tutorial series](copilot-sdk-build-rag.md). In this part three, you learn how to:
+In this tutorial, you use the Azure AI SDK (and other libraries) to evaluate the chat app you built in [Part 2 of the tutorial series](copilot-sdk-build-rag.md). In this part three, you learn how to:

> [!div class="checklist"]
> - Create an evaluation dataset
> - Evaluate the chat app with Azure AI evaluators
+> - Iterate and improve your app

This tutorial is part three of a three-part tutorial.
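The evaluation loop in part 3 follows a standard shape: run each row of an evaluation dataset through a metric, then aggregate into a report. Here is a minimal sketch of that shape, assuming a toy token-overlap F1 metric in place of the Azure AI evaluators the tutorial actually uses; the dataset rows and field names are invented for illustration.

```python
def token_f1(response: str, ground_truth: str) -> float:
    """Toy metric: F1 over unique lowercase tokens (stand-in for a real evaluator)."""
    a, t = set(response.lower().split()), set(ground_truth.lower().split())
    common = len(a & t)
    if common == 0:
        return 0.0
    precision, recall = common / len(a), common / len(t)
    return 2 * precision * recall / (precision + recall)

def evaluate(rows: list[dict]) -> dict:
    """Score every row and report an aggregate, like an evaluation run's summary."""
    scores = [token_f1(r["response"], r["ground_truth"]) for r in rows]
    return {"rows": len(scores), "mean_score": sum(scores) / len(scores)}

# Invented evaluation dataset: query, app response, and expected answer per row.
dataset = [
    {"query": "which tent is the most waterproof?",
     "response": "the trailmaster x4 is the most waterproof tent",
     "ground_truth": "the trailmaster x4 is the most waterproof"},
]
report = evaluate(dataset)
```

The "iterate and improve" step the checklist adds is then a matter of changing the app (prompt, retrieval settings, model) and rerunning the same dataset to compare reports.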
articles/machine-learning/component-reference/score-vowpal-wabbit-model.md (1 addition, 1 deletion)

@@ -67,7 +67,7 @@ This section contains implementation details, tips, and answers to frequently as

Vowpal Wabbit has many command-line options for choosing and tuning algorithms. A full discussion of these options is not possible here; we recommend that you view the [Vowpal Wabbit wiki page](https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments).

-The following parameters are not supported in Azure Machine Learning Studio (classic).
+The following parameters are not supported in Azure Machine Learning studio (classic).

- The input/output options specified in [https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments](https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments)
articles/machine-learning/concept-hub-workspace.md (8 additions, 8 deletions)

@@ -33,11 +33,11 @@ In the transition from proving feasibility of an idea, to a funded project, many

The goal of hubs is to take away this bottleneck, by letting IT set up a secure, preconfigured, and reusable environment for a team to prototype, build, and operate machine learning models.

-## Interoperability between ML studio and AI studio
+## Interoperability between ML studio and AI Foundry

-Hubs can be used as your team's collaboration environment for both ML studio and [AI studio](/azure/ai-studio/what-is-ai-studio). Use ML Studio for training and operationalizing custom machine learning models. Use AI studio as experience for building and operating AI applications responsibly.
+Hubs can be used as your team's collaboration environment for both ML studio and [AI Foundry](/azure/ai-studio/what-is-ai-studio). Use ML Studio for training and operationalizing custom machine learning models. Use AI Foundry as the experience for building and operating AI applications responsibly.

-| Workspace Kind | ML Studio | AI Studio |
+| Workspace Kind | ML Studio | AI Foundry |
| --- | --- | --- |
| Default | Supported | - |
| Hub | Supported | Supported |

@@ -54,7 +54,7 @@ Project workspaces that are created using a hub obtain the hub's security settin

| Network settings | One [managed virtual network](how-to-managed-network.md) is shared between hub and project workspaces. To access content in the hub and project workspaces, create a single private link endpoint on the hub workspace. |
| Encryption settings | Encryption settings pass down from hub to project. |
| Storage for encrypted data | When you bring your customer-managed keys for encryption, hub and project workspaces share the same managed resource group for storing encrypted service data. |
-| Connections | Project workspaces can consume shared connections created on the hub. This feature is currently only supported in [AI studio]() |
+| Connections | Project workspaces can consume shared connections created on the hub. This feature is currently only supported in [AI Foundry]() |
| Compute instance | Reuse a compute instance across all project workspaces associated to the same hub. |
| Compute quota | Any compute quota consumed by project workspaces is deducted from the hub workspace quota balance. |
| Storage | Associated resource for storing workspace data. Project workspaces use designated containers starting with a prefix {workspaceGUID}, and have a conditional [Azure Attribute Based Access](/azure/role-based-access-control/conditions-overview) role assignment for the workspace identity for accessing these containers only. |

@@ -69,7 +69,7 @@ Data that is uploaded in one project workspace, is stored in isolation from data

Once a hub is created, there are multiple ways to create a project workspace using it:

1. [Using ML Studio](how-to-manage-workspace.md?tabs=mlstudio)
-1. [Using AI Studio](/azure/ai-studio/how-to/create-projects)
+1. [Using AI Foundry](/azure/ai-studio/how-to/create-projects)
articles/machine-learning/concept-model-catalog.md (3 additions, 3 deletions)

@@ -118,19 +118,19 @@ Models that are available for deployment as serverless APIs with pay-as-you-go b

### Pay for model usage in MaaS

-The discovery, subscription, and consumption experience for models deployed via MaaS is in the Azure AI Studio and Azure Machine Learning studio. Users accept license terms for use of the models, and pricing information for consumption is provided during deployment. Models from third party providers are billed through Azure Marketplace, in accordance with the [Commercial Marketplace Terms of Use](/legal/marketplace/marketplace-terms); models from Microsoft are billed using Azure meters as First Party Consumption Services. As described in the [Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage), First Party Consumption Services are purchased using Azure meters but aren't subject to Azure service terms; use of these models is subject to the license terms provided.
+The discovery, subscription, and consumption experience for models deployed via MaaS is in the Azure AI Foundry portal and Azure Machine Learning studio. Users accept license terms for use of the models, and pricing information for consumption is provided during deployment. Models from third party providers are billed through Azure Marketplace, in accordance with the [Commercial Marketplace Terms of Use](/legal/marketplace/marketplace-terms); models from Microsoft are billed using Azure meters as First Party Consumption Services. As described in the [Product Terms](https://www.microsoft.com/licensing/terms/welcome/welcomepage), First Party Consumption Services are purchased using Azure meters but aren't subject to Azure service terms; use of these models is subject to the license terms provided.

### Deploy models for inference through MaaS

Deploying a model through MaaS allows users to get access to ready-to-use inference APIs without the need to configure infrastructure or provision GPUs, saving engineering time and resources. These APIs can be integrated with several LLM tools, and usage is billed as described in the previous section.

### Fine-tune models through MaaS with Pay-as-you-go

-For models that are available through MaaS and support fine-tuning, users can take advantage of hosted fine-tuning with pay-as-you-go billing to tailor the models using data they provide. For more information, see [fine-tune a Llama 2 model](/azure/ai-studio/how-to/fine-tune-model-llama) in Azure AI Studio.
+For models that are available through MaaS and support fine-tuning, users can take advantage of hosted fine-tuning with pay-as-you-go billing to tailor the models using data they provide. For more information, see [fine-tune a Llama 2 model](/azure/ai-studio/how-to/fine-tune-model-llama) in the Azure AI Foundry portal.

### RAG with models deployed through MaaS

-Azure AI Studio enables users to make use of Vector Indexes and Retrieval Augmented Generation. Models that can be deployed as serverless APIs can be used to generate embeddings and inferencing based on custom data to generate answers specific to their use case. For more information, see [Retrieval augmented generation and indexes](concept-retrieval-augmented-generation.md).
+Azure AI Foundry enables users to make use of vector indexes and retrieval augmented generation (RAG). Models that can be deployed as serverless APIs can generate embeddings and inferences based on custom data to produce answers specific to a use case. For more information, see [Retrieval augmented generation and indexes](concept-retrieval-augmented-generation.md).
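Mechanically, a vector index compares an embedding of the query against embeddings of the indexed content and returns the closest matches. A dependency-free sketch with invented 3-dimensional vectors follows; real embeddings come from an embedding model (for example, one deployed as a serverless API) and have hundreds of dimensions, and the document ids here are placeholders.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented toy index: document id -> embedding vector.
index = {
    "tent-spec": [0.9, 0.1, 0.0],
    "sleeping-bag-spec": [0.1, 0.9, 0.2],
}
query_embedding = [0.8, 0.2, 0.1]  # would come from the same embedding model
best_match = max(index, key=lambda doc_id: cosine(query_embedding, index[doc_id]))
```

The retrieved documents are then fed to a chat model as grounding context, which is the RAG flow the linked article describes.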
articles/machine-learning/how-to-create-compute-instance.md (1 addition, 1 deletion)

@@ -438,7 +438,7 @@ Assigned to user does not need compute write (create) permission to enable SSO.

Here are the steps the assigned-to user needs to take. Note that, for security reasons, the creator of a compute instance is not allowed to enable SSO on that instance.

-1. Click on compute in left navigation pane in Azure Machine Learning Studio.
+1. Click on compute in left navigation pane in Azure Machine Learning studio.

1. Click on the name of compute instance where you need to enable SSO.
articles/machine-learning/how-to-custom-dns.md (2 additions, 2 deletions)

@@ -25,7 +25,7 @@ When using an Azure Machine Learning workspace (including Azure AI hubs) with a

- An Azure Virtual Network that uses [your own DNS server](/azure/virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances#name-resolution-that-uses-your-own-dns-server).

:::moniker range="azureml-api-2"
-- An Azure Machine Learning workspace with a private endpoint, including hub workspaces such as those used by Azure AI Studio. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An Azure Machine Learning workspace with a private endpoint, including hub workspaces such as those used by Azure AI Foundry. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).

- If your workspace dependency resources are secured with an __Azure Virtual network__, familiarity with the [Network isolation during training & inference](./how-to-network-security-overview.md) article.
:::moniker-end

@@ -57,7 +57,7 @@ Another option is to modify the `hosts` file on the client that is connecting to

Access to a given Azure Machine Learning workspace via Private Link is done by communicating with the following fully qualified domain names (the workspace FQDNs):

> [!IMPORTANT]
-> If you are using a hub workspace (including Azure AI Studio hub), then you will have addtional entries for each project workspace created from the hub.
+> If you are using a hub workspace (including Azure AI Foundry hub), then you will have additional entries for each project workspace created from the hub.

**Azure Public regions**:
- ```<per-workspace globally-unique identifier>.workspace.<region the workspace was created in>.api.azureml.ms```