articles/machine-learning/how-to-deploy-managed-online-endpoints.md (3 additions, 22 deletions)
@@ -320,32 +320,13 @@ Autoscale automatically runs the right amount of resources to handle the load on
### (Optional) Monitor SLA by using Azure Monitor

-To view metrics and set alerts based on your SLA, complete the steps that are described in [Monitor managed online endpoints](how-to-monitor-online-endpoints.md).
+To view metrics and set alerts based on your SLA, complete the steps that are described in [Monitor managed online endpoints](how-to-monitor-online-endpoints.md#monitor).

### (Optional) Integrate with Log Analytics

-The `get-logs` command provides only the last few hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to durably store and analyze logs.
+The `get-logs` command provides only the last few hundred lines of logs from an automatically selected instance. However, Log Analytics provides a way to durably store and analyze logs. For more information on logging, see [Monitor online endpoints](how-to-monitor-online-endpoints.md#logs).
-
-First, create a Log Analytics workspace by completing the steps in [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md#create-a-workspace).
-
-Then, in the Azure portal:
-
-1. Go to the resource group.
-1. Select your endpoint.
-1. Select the **ARM resource page**.
-1. Select **Diagnostic settings**.
-1. Select **Add settings**.
-1. Select to enable sending console logs to the Log Analytics workspace.
-
-The logs might take up to an hour to connect. After an hour, send some scoring requests, and then check the logs by using the following steps:
-
-1. Open the Log Analytics workspace.
-1. In the left menu, select **Logs**.
-1. Close the **Queries** dialog that automatically opens.
articles/machine-learning/how-to-monitor-online-endpoints.md
@@ -28,7 +28,7 @@ In this article you learn how to:
- Deploy an Azure Machine Learning online endpoint.
- You must have at least [Reader access](../role-based-access-control/role-assignments-portal.md) on the endpoint.

-## View metrics
+## Metrics

Use the following steps to view metrics for a managed endpoint or deployment:

1. Go to the [Azure portal](https://portal.azure.com).
@@ -38,11 +38,11 @@ Use the following steps to view metrics for a managed endpoint or deployment:
1. In the left-hand column, select **Metrics**.

-## Available metrics
+### Available metrics

Depending on the resource that you select, the metrics that you see will be different. Metrics are scoped differently for online endpoints and online deployments.

-### Metrics at endpoint scope
+#### Metrics at endpoint scope

- Request Latency
- Request Latency P50 (Request latency at the 50th percentile)
@@ -59,13 +59,13 @@ Split on the following dimensions:
- Status Code
- Status Code Class

-#### Bandwidth throttling
+**Bandwidth throttling**

Bandwidth will be throttled if the limits are exceeded for _managed_ online endpoints (see the managed online endpoints section in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)). To determine if requests are throttled:

- Monitor the "Network bytes" metric.
- Check the response trailers `ms-azureml-bandwidth-request-delay-ms` and `ms-azureml-bandwidth-response-delay-ms`. The values of these fields are the delays, in milliseconds, caused by bandwidth throttling. A sample log query is shown after this list.
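
If you also enable the `AMLOnlineEndpointTrafficLog` category described later in this article, its throttling-delay columns record these delays. The following query is only an illustrative sketch (not an official sample) and assumes that log is flowing to a Log Analytics workspace:

```kusto
// Illustrative sketch: requests delayed by bandwidth throttling in the last hour.
// Assumes the AMLOnlineEndpointTrafficLog diagnostic category is sent to Log Analytics.
AMLOnlineEndpointTrafficLog
| where TimeGenerated > ago(1h)
| where todouble(RequestThrottlingDelayMs) > 0 or todouble(ResponseThrottlingDelayMs) > 0
| project TimeGenerated, EndpointName, DeploymentName, XRequestId, RequestThrottlingDelayMs, ResponseThrottlingDelayMs
| order by TimeGenerated desc
```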

-### Metrics at deployment scope
+#### Metrics at deployment scope

- CPU Utilization Percentage
- Deployment Capacity (the number of instances of the requested instance type)
@@ -78,11 +78,11 @@ Split on the following dimension:
- InstanceId

-## Create a dashboard
+### Create a dashboard

You can create custom dashboards to visualize data from multiple sources in the Azure portal, including the metrics for your online endpoint. For more information, see [Create custom KPI dashboards using Application Insights](../azure-monitor/app/tutorial-app-dashboards.md#add-custom-metric-chart).

-## Create an alert
+### Create an alert

You can also create custom alerts to notify you of important status updates to your online endpoint:
@@ -97,6 +97,133 @@ You can also create custom alerts to notify you of important status updates to y
1. Select **Add action groups** > **Create action groups** to specify what should happen when your alert is triggered.

1. Choose **Create alert rule** to finish creating your alert.
+
+## Logs
+
+There are three logs that can be enabled for online endpoints:
+
+* **AMLOnlineEndpointTrafficLog**: Enable traffic logs to check the details of requests to the endpoint. Some example cases:
+
+    * If the response isn't 200, check the value of the "ResponseCodeReason" column to see what happened. Look up the reason in the Envoy [response code details](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/response_code_details).
+    * Check the response code and response reason of your model in the "ModelStatusCode" and "ModelStatusReason" columns.
+    * Check the durations of a request, such as the total duration, the request/response duration, and the delay caused by network throttling, to see the latency breakdown.
+    * Check how many requests, or how many failed requests, the endpoint received recently.
+
+* **AMLOnlineEndpointConsoleLog**: Contains logs that the user containers output to the console. Some example cases:
+
+    * If the user container fails to start, the console log can be useful for debugging.
+    * Monitor user container behavior, and make sure that all requests are correctly handled.
+    * Write request IDs in the console log. By joining the request ID, the AMLOnlineEndpointConsoleLog, and the AMLOnlineEndpointTrafficLog in the Log Analytics workspace, you can trace a request from the network entry point of an online endpoint to the container. (A sample join query appears after the log column details tables later in this section.)
+    * Use this log for performance analysis, to determine the time the model requires to process each request.
+
+* **AMLOnlineEndpointEventLog**: Contains event information about the user container's life cycle. Currently, we provide information on the following types of events:
+
+    | Name | Message |
+    | ----- | ----- |
+    | BackOff | Back-off restarting failed container
+    | Pulled | Container image "\<IMAGE\_NAME\>" already present on machine
+    | Killing | Container inference-server failed liveness probe, will be restarted
+
+> Logging uses Azure Log Analytics. If you don't currently have a Log Analytics workspace, you can create one by using the steps in [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md#create-a-workspace).
+
+1. In the [Azure portal](https://portal.azure.com), go to the resource group that contains your endpoint and then select the endpoint.
+1. From the **Monitoring** section on the left of the page, select **Diagnostic settings** and then **Add settings**.
+1. Select the log categories to enable, select **Send to Log Analytics workspace**, and then select the Log Analytics workspace to use. Finally, enter a **Diagnostic setting name** and select **Save**.
+
+> It may take up to an hour for the connection to the Log Analytics workspace to be enabled. Wait an hour before continuing with the next steps.
+
+1. Submit scoring requests to the endpoint. This activity should create entries in the logs.
+1. Open the Log Analytics workspace and select **Logs** from the left of the screen.
+1. Close the **Queries** dialog that automatically opens, and then double-click **AmlOnlineEndpointConsoleLog**. If you don't see it, use the **Search** field.
+
+In the Log Analytics workspace, see the following example queries:
+
+* Online endpoint console logs
+* Online endpoint failed requests
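
Those saved queries aren't reproduced here. As a rough illustration only, assuming the log categories above are enabled and flowing to the workspace, the following sketch returns recent console output from one deployment (the deployment name `blue` is a placeholder):

```kusto
// Illustrative sketch: recent console output from one deployment ("blue" is a placeholder name).
AMLOnlineEndpointConsoleLog
| where TimeGenerated > ago(1h)
| where DeploymentName == "blue"
| project TimeGenerated, InstanceId, ContainerName, Message
| order by TimeGenerated desc
```

A similar sketch lists failed requests, along with the reason columns described in the next section:

```kusto
// Illustrative sketch: requests that didn't return HTTP 200 in the last day.
AMLOnlineEndpointTrafficLog
| where TimeGenerated > ago(1d)
| where toint(ResponseCode) != 200
| project TimeGenerated, EndpointName, DeploymentName, ResponseCode, ResponseCodeReason, ModelStatusCode, ModelStatusReason
| order by TimeGenerated desc
```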
+### Log column details
+
+The following tables provide details on the data stored in each log:
+
+**AMLOnlineEndpointTrafficLog**
+
+| Field name | Description |
+| ---- | ---- |
+| Method | The requested method from the client.
+| Path | The requested path from the client.
+| SubscriptionId | The machine learning subscription ID of the online endpoint.
+| WorkspaceId | The machine learning workspace ID of the online endpoint.
+| EndpointName | The name of the online endpoint.
+| DeploymentName | The name of the online deployment.
+| Protocol | The protocol of the request.
+| ResponseCode | The final response code returned to the user.
+| ResponseCodeReason | The final response code reason returned to the user.
+| ModelStatusCode | The response status code from the model.
+| ModelStatusReason | The response status reason from the model.
+| RequestPayloadSize | The total bytes received from the user client.
+| ResponsePayloadSize | The total bytes sent back to the user client.
+| UserAgent | The user-agent header of the request.
+| XRequestId | The request ID generated by Azure Machine Learning for internal tracing.
+| XMSClientRequestId | The tracking ID generated by the user client.
+| TotalDurationMs | Duration in milliseconds from the request start time to the last response byte sent back to the user client. If the client disconnected, the duration is measured from the start time to the client disconnect time.
+| RequestDurationMs | Duration in milliseconds from the request start time to the last byte of the request received from the user client.
+| ResponseDurationMs | Duration in milliseconds from the request start time to the first response byte read from the model.
+| RequestThrottlingDelayMs | Delay in milliseconds in request data transfer due to network throttling.
+| ResponseThrottlingDelayMs | Delay in milliseconds in response data transfer due to network throttling.
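
The duration and throttling columns above can be combined to break down latency. The following query is only an illustrative sketch; it assumes the traffic log is flowing to your Log Analytics workspace:

```kusto
// Illustrative sketch: average latency breakdown per endpoint and deployment over the last hour.
// todouble() guards against the duration columns being stored as strings.
AMLOnlineEndpointTrafficLog
| where TimeGenerated > ago(1h)
| summarize
    AvgTotalMs = avg(todouble(TotalDurationMs)),
    AvgRequestMs = avg(todouble(RequestDurationMs)),
    AvgResponseMs = avg(todouble(ResponseDurationMs)),
    AvgRequestThrottleMs = avg(todouble(RequestThrottlingDelayMs)),
    AvgResponseThrottleMs = avg(todouble(ResponseThrottlingDelayMs))
    by EndpointName, DeploymentName
```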
+
+**AMLOnlineEndpointConsoleLog**
+
+| Field Name | Description |
+| ----- | ----- |
+| TimeGenerated | The timestamp (UTC) of when the log was generated.
+| OperationName | The operation associated with the log record.
+| InstanceId | The ID of the instance that generated the log record.
+| DeploymentName | The name of the deployment associated with the log record.
+| ContainerName | The name of the container where the log was generated.
+| Message | The content of the log.
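
As noted earlier, writing request IDs to the console log lets you correlate the two logs. The following query is only an illustrative sketch; it assumes your scoring script writes lines in the hypothetical format `request id: <id>`, so adjust the pattern to match your own log format:

```kusto
// Illustrative sketch: correlate traffic and console logs by request ID.
// Assumes the scoring script logs lines such as "request id: <XRequestId>".
AMLOnlineEndpointTrafficLog
| where TimeGenerated > ago(1h)
| project TrafficTime = TimeGenerated, XRequestId, ResponseCode, ModelStatusCode, TotalDurationMs
| join kind=inner (
    AMLOnlineEndpointConsoleLog
    | where TimeGenerated > ago(1h)
    | extend XRequestId = extract(@"request id: ([\w-]+)", 1, Message)
    | project XRequestId, InstanceId, Message
) on XRequestId
| order by TrafficTime desc
```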
+
+**AMLOnlineEndpointEventLog**
+
+| Field Name | Description |
+| ----- | ----- |
+| TimeGenerated | The timestamp (UTC) of when the log was generated.
+| OperationName | The operation associated with the log record.
+| InstanceId | The ID of the instance that generated the log record.
+| DeploymentName | The name of the deployment associated with the log record.
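
As a final illustrative sketch, the documented columns above are enough to summarize recent container lifecycle events per deployment and instance:

```kusto
// Illustrative sketch: count container lifecycle events per deployment and instance over the last day.
AMLOnlineEndpointEventLog
| where TimeGenerated > ago(1d)
| summarize Events = count() by DeploymentName, InstanceId
| order by Events desc
```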