Commit 4f356d9

committed
Split monitor and online eval
1 parent 6dbb0a5 commit 4f356d9

3 files changed (+76 -42 lines)
Lines changed: 70 additions & 0 deletions
@@ -0,0 +1,70 @@
---
title: Continuously Monitor your Generative AI Applications
titleSuffix: Azure AI Foundry
description: This article provides instructions on how to continuously monitor Generative AI Applications.
manager: scottpolly
ms.service: azure-ai-studio
ms.custom:
  - build-2024
ms.topic: how-to
ms.date: 11/19/2024
ms.reviewer: alehughes
ms.author: lagayhar
author: lgayhardt
---

# Continuously monitor your generative AI applications

[!INCLUDE [feature-preview](../includes/feature-preview.md)]

Continuous advancements in generative AI have led organizations to build increasingly complex applications (chatbots, RAG systems, agentic systems, and so on) to solve various problems. These applications drive innovation, improve customer experiences, and enhance decision-making. Although the models that power them (for example, GPT-4) are extremely capable, continuous monitoring has never been more important for ensuring high-quality, safe, and reliable results. Monitoring is most effective when you observe an application from multiple perspectives: token usage and cost, operational metrics such as latency and request count, and, importantly, continuous evaluation. To learn more about evaluation, see [Evaluation of generative AI applications](../concepts/evaluation-approach-gen-ai.md).
Azure AI and Azure Monitor provide tools for you to continuously monitor the performance of your generative AI applications from multiple perspectives. With Azure AI Online Evaluation, you can continuously evaluate your application regardless of where it's deployed or which orchestration framework it uses (for example, LangChain). You can use various [built-in evaluators](../concepts/evaluation-metrics-built-in.md), which maintain parity with the [Azure AI Evaluation SDK](./develop/evaluate-sdk.md), or define your own custom evaluators. By continuously running the right evaluators over your collected trace data, your team can more effectively identify and mitigate security, quality, and safety concerns as they arise, both in pre-production and in post-production. Azure AI Online Evaluation integrates fully with the suite of observability tooling in [Azure Monitor Application Insights](/azure/azure-monitor/app/app-insights-overview), enabling you to build custom dashboards, visualize your evaluation results over time, and configure alerting for advanced application monitoring.

In summary, given the complexity and rapid evolution of the AI industry, monitoring your generative AI applications has never been more important. Azure AI Online Evaluation, integrated with Azure Monitor Application Insights, enables you to continuously evaluate your deployed applications to ensure that they're performant, safe, and produce high-quality results in production.
## Monitor your generative AI application

In this section, you learn how Azure AI integrates with Azure Monitor Application Insights to give you an out-of-the-box dashboard tailored with insights about your generative AI app, so you can stay up to date with the latest status of your application.

### Insights for your generative AI application

If you haven't set this up yet, here are some quick steps:

1. Navigate to your project in [Azure AI Foundry](https://ai.azure.com).
1. Select the **Tracing** page on the left-hand side.
1. Connect your Application Insights resource to your project.

If you already set up tracing in the Azure AI Foundry portal, select the **Check out your Insights for Generative AI application dashboard** link.

Once your data is streaming into your Application Insights resource, it automatically populates this customized dashboard.

:::image type="content" source="../media/how-to/online-evaluation/open-generative-ai-workbook.gif" alt-text="Animation of an Azure workbook showing Application Insights." lightbox="../media/how-to/online-evaluation/open-generative-ai-workbook.gif":::
This view is a great place to get started with your monitoring needs:

- View token consumption over time to understand whether you need to increase your usage limits or do further cost analysis.
- View evaluation metrics as trend lines to understand the quality of your app on a daily basis.
- Debug exceptions and drill into traces using the **Azure Monitor End-to-end transaction details view** to figure out what went wrong.
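To illustrate the kind of aggregation behind the token-consumption chart, here's a minimal Python sketch. The record shape below is hypothetical (real data lives in Application Insights custom dimensions), but the per-day roll-up mirrors what the dashboard displays:

```python
from collections import defaultdict
from datetime import date

# Hypothetical trace records; in Application Insights, token counts are
# recorded on spans (for example, under gen_ai usage attributes).
traces = [
    {"day": date(2024, 11, 18), "prompt_tokens": 120, "completion_tokens": 40},
    {"day": date(2024, 11, 18), "prompt_tokens": 200, "completion_tokens": 80},
    {"day": date(2024, 11, 19), "prompt_tokens": 150, "completion_tokens": 60},
]

def tokens_per_day(records):
    """Sum prompt + completion tokens per day, as the workbook chart does."""
    totals = defaultdict(int)
    for r in records:
        totals[r["day"]] += r["prompt_tokens"] + r["completion_tokens"]
    return dict(totals)

print(tokens_per_day(traces))
# {datetime.date(2024, 11, 18): 440, datetime.date(2024, 11, 19): 210}
```

A spike in this trend line is a signal to revisit usage limits or run a deeper cost analysis.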
:::image type="content" source="../media/how-to/online-evaluation/custom-generative-ai-workbook.gif" alt-text="Animation of an Azure workbook showing graphs and end to end transaction details." lightbox="../media/how-to/online-evaluation/custom-generative-ai-workbook.gif":::
Behind the scenes, this view is an Azure Workbook that queries data stored in your Application Insights resource. You can customize the workbook to fit your business needs. To learn more, see [editing Azure Workbooks](/azure/azure-monitor/visualize/workbooks-create-workbook).
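As one illustration of the kind of query such a workbook can run, the following KQL sketch counts generative AI spans per day. The `dependencies` table and the `gen_ai.system` custom dimension come from this article's online evaluation query; the rest is an assumption to adapt to your own instrumentation:

```kusto
// Sketch: daily count of generative AI spans in Application Insights.
// Adjust the filter and bin size to match your telemetry.
dependencies
| where isnotnull(customDimensions["gen_ai.system"])
| summarize call_count = count() by bin(timestamp, 1d)
| order by timestamp asc
```

You can pin a chart of this query to the workbook, or extend it with `extend`/`project` clauses to surface cost or latency columns alongside the counts.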
Customizing the workbook lets you add other custom evaluators that you logged, or markdown text to share summaries and use for reporting purposes.

You can also share this workbook with your team so that everyone stays up to date.

:::image type="content" source="../media/how-to/online-evaluation/share-azure-workbook.png" alt-text="Screenshot of an Azure Workbook showing the share button and share tab." lightbox="../media/how-to/online-evaluation/share-azure-workbook.png":::

> [!NOTE]
> To view the displayed information, the team members you share this workbook with must have at least the Reader role on the connected Application Insights resource.
## Related content

- [Trace your application with Azure AI Inference SDK](./develop/trace-local-sdk.md)
- [Visualize your traces](./develop/visualize-traces.md)
- [Evaluation of Generative AI Models & Applications](../concepts/evaluation-approach-gen-ai.md)
- [Azure Monitor Application Insights](/azure/azure-monitor/app/app-insights-overview)
- [Azure Workbooks](/azure/azure-monitor/visualize/workbooks-overview)

articles/ai-studio/how-to/online-evaluation.md

Lines changed: 3 additions & 41 deletions
````diff
@@ -1,5 +1,5 @@
 ---
-title: Continuously Monitor your Generative AI Applications
+title: Run evaluations online in Azure AI Foundry
 titleSuffix: Azure AI Foundry
 description: This article provides instructions on how to use online and remote evaluation to continuously monitor Generative AI Applications.
 manager: scottpolly
@@ -13,7 +13,7 @@ ms.author: lagayhar
 author: lgayhardt
 ---
 
-# Continuously monitor your generative AI applications
+# Run evaluations online
 
 [!INCLUDE [feature-preview](../includes/feature-preview.md)]
 
@@ -122,7 +122,7 @@ Using the [Kusto Query Language (KQL)](/kusto/query/?view=microsoft-fabric&prese
 > [!IMPORTANT]
 > The KQL query used by the Online Evaluation service must output the following columns: `operation_Id`, `operation_ParentId`, and `gen_ai_response_id`. Additionally, each evaluator has its own input data requirements. The KQL query must output these columns to be used as inputs to the evaluators themselves. For a list of data requirements for evaluators, see [data requirements for built-in evaluators](./develop/evaluate-sdk.md#data-requirements-for-built-in-evaluators).
 
-```SQL
+```kusto
 let gen_ai_spans = (
     dependencies
     | where isnotnull(customDimensions["gen_ai.system"])
@@ -325,44 +325,6 @@ name = "<my-online-evaluation-name>"
 project_client.evaluations.disable_schedule(name)
 ```
 
-## Monitor your generative AI application
-
-In this section, you'll learn how Azure AI integrates with Azure Monitor Application Insights to give you an out-of-the-box dashboard view that is tailored with insights regarding your generative AI app so you can stay updated with the latest status of your application.
-
-### Insights for your generative AI application
-
-If you haven’t set this up, here are some quick steps:
-
-1. Navigate to your project in [Azure AI Foundry](https://ai.azure.com).
-1. Select the Tracing page on the left-hand side.
-1. Connect your Application Insights resource to your project.
-
-If you already set up tracing in Azure AI Foundry portal, all you need to do is select the link to **Check out your Insights for Generative AI application dashboard**.
-
-Once you have your data streaming into your Application Insights resource, you automatically can see it get populated in this customized dashboard.
-
-:::image type="content" source="../media/how-to/online-evaluation/open-generative-ai-workbook.gif" alt-text="Animation of an Azure workbook showing Application Insights." lightbox="../media/how-to/online-evaluation/open-generative-ai-workbook.gif":::
-
-This view is a great place for you to get started with your monitoring needs.
-
-- You can view token consumption over time to understand if you need to increase your usage limits or do additional cost analysis.
-- You can view evaluation metrics as trend lines to understand the quality of your app on a daily basis.
-- You can debug when exceptions take place and drill into traces using the **Azure Monitor End-to-end transaction details view** to figure out what went wrong.
-
-:::image type="content" source="../media/how-to/online-evaluation/custom-generative-ai-workbook.gif" alt-text="Animation of an Azure workbook showing graphs and end to end transaction details." lightbox="../media/how-to/online-evaluation/custom-generative-ai-workbook.gif":::
-
-This is an Azure Workbook that is querying data stored in your Application Insights resource. You can customize this workbook and tailor this to fit your business needs.
-To learn more, see [editing Azure Workbooks](/azure/azure-monitor/visualize/workbooks-create-workbook).
-
-This allows you to add additional custom evaluators that you might have logged or other markdown text to share summaries and use for reporting purposes.
-
-You can also share this workbook with your team so they stay informed with the latest!
-
-:::image type="content" source="../media/how-to/online-evaluation/share-azure-workbook.png" alt-text="Screenshot of an Azure Workbook showing the share button and share tab." lightbox="../media/how-to/online-evaluation/share-azure-workbook.png":::
-
-> [!NOTE]
-> When sharing this workbook with your team members, they must have atleast 'Reader' role to the connected Application Insights resource to view the displayed information.
-
 ## Related content
 
 - [Trace your application with Azure AI Inference SDK](./develop/trace-local-sdk.md)
````

articles/ai-studio/toc.yml

Lines changed: 3 additions & 1 deletion
```diff
@@ -331,6 +331,8 @@ items:
 - name: View evaluation results in the portal
   href: how-to/evaluate-results.md
   displayName: accuracy,metrics
+- name: Run evaluations online
+  href: how-to/online-evaluation.md
 - name: Evaluate flows in the portal
   items:
   - name: Submit batch run and evaluate a flow
@@ -342,7 +344,7 @@ items:
 - name: Deploy and monitor generative AI apps
   items:
   - name: Continuously monitor your applications
-    href: how-to/online-evaluation.md
+    href: how-to/monitor-application.md
   - name: Deploy and monitor flows
     items:
     - name: Deploy a flow for real-time inference
```
