Monitoring is available for agents in a [standard agent setup](../quickstart.md?pivots=programming-language-csharp#choose-basic-or-standard-agent-setup).

## Dashboards

Azure AI Agent Service provides out-of-the-box dashboards. There are two key dashboards for monitoring your resource:

- The metrics dashboard in the AI Foundry resource view
- The dashboard in the overview pane within the Azure portal

To access the monitoring dashboards, sign in to the [Azure portal](https://portal.azure.com), select **Monitoring** in the left navigation menu, and then select **Metrics**.

:::image type="content" source="../media/monitoring/dashboard.png" alt-text="Screenshot that shows out-of-box dashboards for a resource in the Azure portal." lightbox="../media/monitoring/dashboard.png" border="false":::

## Azure Monitor platform metrics

Azure Monitor provides platform metrics for most services. These metrics are:
* Individually defined for each namespace.
* Stored in the Azure Monitor time-series metrics database.
* Lightweight and capable of supporting near real-time alerting.
* Used to track the performance of a resource over time.
* Collected automatically by Azure Monitor, with no configuration required.

For a list of all metrics you can gather for Azure resources in Azure Monitor, see [Supported metrics in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).

Azure AI Agent Service shares a set of metrics with a subset of other Azure AI services. For the metrics available for Azure AI Agent Service, see the [monitoring data reference](../reference/monitor-service.md#metrics).

## Analyze monitoring data
There are many tools for analyzing monitoring data.
### Azure Monitor tools
Azure Monitor supports the [metrics explorer](/azure/azure-monitor/essentials/metrics-getting-started), a tool in the Azure portal that allows you to view and analyze metrics for your Azure resources. For more information, see [Analyze metrics with Azure Monitor metrics explorer](/azure/azure-monitor/essentials/analyze-metrics).

### Azure Monitor export tools

You can get data out of Azure Monitor into other tools by using the [REST API for metrics](/rest/api/monitor/operation-groups) to extract metric data from the Azure Monitor metrics database. The API supports filter expressions to refine the data retrieved. For more information, see [Azure Monitor REST API reference](/rest/api/monitor/filter-syntax).
To get started with the REST API for Azure Monitor, see [Azure monitoring REST API walkthrough](/azure/azure-monitor/essentials/rest-api-walkthrough).
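As a sketch of the request shape the metrics REST API expects, the following builds the query URL for the `Runs` metric. The subscription ID, resource group, and account name are hypothetical placeholders, and the resource path is illustrative only:

```python
from urllib.parse import urlencode

# Hypothetical resource ID; substitute your own subscription, resource group,
# and resource name.
resource_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/my-resource-group"
    "/providers/Microsoft.CognitiveServices/accounts/my-ai-services-resource"
)

params = {
    "api-version": "2018-01-01",
    "metricnames": "Runs",
    "timespan": "2025-04-01T00:00:00Z/2025-04-02T00:00:00Z",
    "aggregation": "Total",
    "interval": "PT1M",
}

# The Azure Monitor metrics endpoint hangs off the target resource ID.
url = (
    "https://management.azure.com"
    + resource_id
    + "/providers/Microsoft.Insights/metrics?"
    + urlencode(params)
)
print(url)
```

A `GET` request to this URL (with a bearer token for Azure Resource Manager) returns the metric values for the requested timespan.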
## Alerts
Azure Monitor alerts proactively notify you when specific conditions are found in your monitoring data. Alerts allow you to identify and address issues in your system before your customers notice them. For more information, see [Azure Monitor alerts](/azure/azure-monitor/alerts/alerts-overview).

There are many sources of common alerts for Azure resources. [The Azure Monitor Baseline Alerts (AMBA)](https://aka.ms/amba) site provides a semi-automated method of implementing important platform metric alerts, dashboards, and guidelines. The site applies to a continually expanding subset of Azure services, including all services that are part of the Azure Landing Zone (ALZ).
The common alert schema standardizes the consumption of Azure Monitor alert notifications. For more information, see [Common alert schema](/azure/azure-monitor/alerts/alerts-common-schema).
[Metric alerts](/azure/azure-monitor/alerts/alerts-types#metric-alerts) evaluate resource metrics at regular intervals. Metric alerts can also apply multiple conditions and dynamic thresholds.
Every organization's alerting needs vary and can change over time. Generally, all alerts should be actionable and have a specific intended response if the alert occurs. If an alert doesn't require an immediate response, the condition can be captured in a report rather than an alert. Some use cases might require alerting anytime certain error conditions exist. In other cases, you might need alerts for errors that exceed a certain threshold for a designated time period.
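As an illustration of the threshold-over-a-window pattern described above (a sketch of the alerting logic only, not an Azure Monitor implementation), the following checks whether a metric series exceeds a threshold for a full evaluation window:

```python
def breaches_threshold(values, threshold, window):
    """Return True when `window` consecutive samples all exceed `threshold`."""
    consecutive = 0
    for value in values:
        consecutive = consecutive + 1 if value > threshold else 0
        if consecutive >= window:
            return True
    return False

# Per-minute error counts over five minutes: alert only if errors stay
# above 10 for three consecutive minutes (values are illustrative).
samples = [4, 12, 15, 11, 9]
print(breaches_threshold(samples, threshold=10, window=3))  # True
```

Requiring the condition to hold for the whole window filters out momentary spikes, which keeps alerts actionable.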
Depending on what type of application you're developing with your use of Azure AI Agent Service, [Azure Monitor Application Insights](/azure/azure-monitor/overview) might offer more monitoring benefits at the application layer.
### Azure AI Agent Service alert rules

You can set alerts for any metric listed in the [monitoring data reference](../reference/monitor-service.md).

## Related content

- See the [Monitoring data reference](../reference/monitor-service.md) for the metrics and other important values created for Azure AI Agent Service.
- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.

---

See [Monitor Azure AI Agent Service](../how-to/metrics.md) for details on the data you can collect on your agents.
## Metrics
Here are the most important metrics we think you should monitor for Azure AI Agent Service. Later in this article is a longer list of all available metrics, which contains more details on the metrics in this shorter list. _See the following list for the most up-to-date information. We're working on refreshing the tables in the following sections._

- [Runs](#category-agents)
- [Indexed files](#category-agents)

## Supported metrics
This section lists all the automatically collected platform metrics for this service. These metrics are also part of the global list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/reference/supported-metrics/metrics-index#supported-metrics-per-resource-type).

### Category: Agents

|Metric|Name in REST API|Unit|Aggregation|Dimensions|Time Grains|DS Export|
|---|---|---|---|---|---|---|
|Runs <br><br> The number of runs in a given timeframe. |`Runs`| Count | Total (sum), Average, Minimum, Maximum, Count |`ResourceId`, `ProjectId`, `AgentId`, `StreamType`, `Region`, `StatusCode` (successful, client errors, server errors), `RunStatus` (started, completed, failed, cancelled, expired)| PT1M | Yes |
|Indexed files <br><br> The number of files indexed for file search. |`IndexedFiles`| Count | Count, Average, Minimum, Maximum |`ResourceId`, `ProjectId`, `VectorStoreId`, `StreamType`, `Region`, `Status`, `ErrorCode`| PT1M | Yes |

## Related content
- See [Monitor Azure AI Agent Service](../how-to/metrics.md) for a description of monitoring Azure AI Agent Service.
- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.

---

`articles/ai-services/agents/whats-new.md`

author: aahill
ms.author: aahi
ms.service: azure-ai-agent-service
ms.topic: overview
ms.date: 04/23/2025
ms.custom: azure-ai-agents
---

This article provides a summary of the latest releases and major documentation updates for Azure AI Agent Service.

## April 2025

### Azure Monitor integration

You can now see metrics related to agents in Azure Monitor:
* The number of files indexed for file search.
* The number of runs in a given timeframe.
See the [Azure Monitor](./how-to/metrics.md) and [metrics reference](./reference/monitor-service.md) articles for more information.

### BYO thread storage
The Standard Agent Setup now supports **Bring Your Own (BYO) thread storage using an Azure Cosmos DB for NoSQL account**. This feature ensures all thread messages and conversation history are stored in your own resources. See the [Quickstart](./quickstart.md) for more information on how to deploy a Standard agent project.

---

`articles/ai-services/openai/how-to/responses.md`

description: Learn how to use Azure OpenAI's new stateful Responses API.
manager: nitinme
ms.service: azure-ai-openai
ms.topic: include
ms.date: 03/21/2025
author: mrbullwinkle
ms.author: mbullwin
ms.custom: references_regions

> - Structured outputs
> - tool_choice
> - image_url pointing to an internet address
> - The web search tool is also not supported, and is not part of the `2025-03-01-preview` API.
>
> There is also a known issue with vision performance when using the Responses API, particularly with OCR tasks. As a temporary workaround set image detail to `high`. This article will be updated once this issue is resolved and as any additional feature support is added.

"text": "It looks like you're testing out how this works! How can I assist you today?",
170
+
"text": "Great! How can I help you today?",
194
171
"type": "output_text"
195
172
}
196
173
],
197
174
"role": "assistant",
198
-
"status": "completed",
175
+
"status": null,
199
176
"type": "message"
200
177
}
201
178
],
202
-
"parallel_tool_calls": true,
179
+
"output_text": "Great! How can I help you today?",
180
+
"parallel_tool_calls": null,
203
181
"temperature": 1.0,
204
-
"tool_choice": "auto",
182
+
"tool_choice": null,
205
183
"tools": [],
206
184
"top_p": 1.0,
207
185
"max_output_tokens": null,
208
186
"previous_response_id": null,
209
-
"reasoning": {
210
-
"effort": null,
211
-
"generate_summary": null,
212
-
"summary": null
213
-
},
214
-
"service_tier": null,
187
+
"reasoning": null,
215
188
"status": "completed",
216
-
"text": {
217
-
"format": {
218
-
"type": "text"
219
-
}
220
-
},
221
-
"truncation": "disabled",
189
+
"text": null,
190
+
"truncation": null,
222
191
"usage": {
223
-
"input_tokens": 12,
224
-
"input_tokens_details": {
225
-
"cached_tokens": 0
226
-
},
227
-
"output_tokens": 18,
192
+
"input_tokens": 20,
193
+
"output_tokens": 11,
228
194
"output_tokens_details": {
229
195
"reasoning_tokens": 0
230
196
},
231
-
"total_tokens": 30
197
+
"total_tokens": 31
232
198
},
233
199
"user": null,
234
-
"store": true
200
+
"reasoning_effort": null
235
201
}
236
202
```

---

Unlike the chat completions API, the responses API is asynchronous. More complex requests may not be completed by the time that an initial response is returned by the API. This is similar to how the Assistants API handles [thread/run status](/azure/ai-services/openai/how-to/assistant#retrieve-thread-status).
Note in the response output that the response object contains a `status` which can be monitored to determine when the response is finally complete. `status` can contain a value of `completed`, `failed`, `in_progress`, or `incomplete`.
### Retrieve an individual response status
In the previous Python examples, we created a variable `response_id` and set it equal to the `response.id` of our `client.responses.create()` call. We can then pass that ID to `client.responses.retrieve()` to pull the current status of our response.
Depending on the complexity of your request, it isn't uncommon for the initial response to have a status of `in_progress` before any message output is generated. In that case you can create a loop to monitor the status of the response with code. The example below is for demonstration purposes only and is intended to be run in a Jupyter notebook. This code assumes you have already run the two previous Python examples, and that the Azure OpenAI client as well as `retrieve_response` have already been defined:

```python
import time
from IPython.display import clear_output

start_time = time.time()

status = retrieve_response.status

# Poll until the response reaches a terminal state.
while status not in ["completed", "failed", "incomplete"]:
    time.sleep(2)
    retrieve_response = client.responses.retrieve(retrieve_response.id)
    status = retrieve_response.status
    clear_output(wait=True)
    print(f"Status after {int(time.time() - start_time)} seconds: {status}")
```
This function captures the current browser state as an image and returns it as a base64-encoded string, ready to be sent to the model. We'll constantly do this in a loop after each step allowing the model to see if the command it tried to execute was successful or not, which then allows it to adjust based on the contents of the screenshot. We could let the model decide if it needs to take a screenshot, but for simplicity we will force a screenshot to be taken for each iteration.