---
title: Trigger batch inference with trained model
titleSuffix: Azure Cognitive Services
description: Trigger batch inference with trained model
services: cognitive-services
author: mrbullwinkle
manager: nitinme
ms.service: cognitive-services
ms.subservice: anomaly-detector
ms.topic: conceptual
ms.date: 11/01/2022
ms.author: mbullwin
---
# Trigger batch inference with trained model

You can choose either the batch inference API or the streaming inference API for detection.

| Batch inference API | Streaming inference API |
| ------------- | ---------------- |
| Suited to batch use cases, where you don't need inference results immediately and want to detect anomalies and get results over a longer time period. | Recommended when you want inference results immediately and need to detect multivariate anomalies in real time. Also suitable if compressing and uploading data for batch inference is difficult in your scenario. |

|API Name| Method | Path | Description |
| ------ | ---- | ----------- | ------ |
|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, which works in a batch scenario |
|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, which works in a streaming scenario |
## Trigger a batch inference

To perform batch inference, provide the blob URL containing the inference data, the start time, and the end time. The inference data volume should be at least one sliding window length and at most **20,000** timestamps.

This inference is asynchronous, so the results aren't returned immediately. Save the link to the results from the **response header**, which contains the `resultId`, so that you know where to get the results afterwards.

Failures are usually caused by model issues or data issues. You can't perform inference if the model isn't ready or the data link is invalid. Make sure that the training data and inference data are consistent: they should contain **exactly** the same variables, just with different timestamps. More variables, fewer variables, or inference with a different set of variables won't pass the data verification phase, and errors will occur. Data verification is deferred, so you'll only get an error message when you query the results.

### Request

A sample request:

```json
{
    "dataSource": "{{dataSource}}",
    "topContributorCount": 3,
    "startTime": "2021-01-02T12:00:00Z",
    "endTime": "2021-01-03T00:00:00Z"
}
```
#### Required parameters

* **dataSource**: The blob URL that links to your folder or CSV file in Azure Blob Storage. The schema should be the same as the training data, either OneTable or MultiTable, and the variable number and names should be exactly the same as well.
* **startTime**: The start time of the data used for inference. If it's earlier than the actual earliest timestamp in the data, the actual earliest timestamp is used as the starting point.
* **endTime**: The end time of the data used for inference, which must be later than or equal to `startTime`. If `endTime` is later than the actual latest timestamp in the data, the actual latest timestamp is used as the ending point.

#### Optional parameters

* **topContributorCount**: A number *N* from **1 to 30** that specifies how many of the top contributing variables to include in the anomaly results. For example, if your model has 100 variables but you only care about the top 5 contributing variables in the detection results, set this field to 5. The default is **10**.
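As a minimal sketch, the request above could be assembled in Python using only the standard library. The endpoint, key, model ID, and storage URL below are placeholders, and `Ocp-Apim-Subscription-Key` is the standard Cognitive Services authentication header:

```python
import json
import urllib.request

def build_batch_inference_body(data_source, start_time, end_time,
                               top_contributor_count=10):
    """Assemble the JSON body for the batch inference request.

    topContributorCount is optional; 10 is the documented default.
    """
    return {
        "dataSource": data_source,
        "topContributorCount": top_contributor_count,
        "startTime": start_time,
        "endTime": end_time,
    }

# Placeholder values -- substitute your own resource endpoint, key, and model ID.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
model_id = "<modelId>"
body = build_batch_inference_body(
    "https://<your-storage>.blob.core.windows.net/data/inference.csv",
    "2021-01-02T12:00:00Z",
    "2021-01-03T00:00:00Z",
    top_contributor_count=3,
)

# Build (but don't yet send) the POST request; urllib.request.urlopen(request)
# would actually trigger the inference.
request = urllib.request.Request(
    f"{endpoint}/anomalydetector/v1.1/multivariate/models/{model_id}:detect-batch",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Ocp-Apim-Subscription-Key": "<your-key>",
        "Content-Type": "application/json",
    },
    method="POST",
)
```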
### Response

A sample response:

```json
{
    "resultId": "f5c8c004-555b-11ed-85c0-36f8cdfb3365",
    "summary": {
        "status": "CREATED",
        "errors": [],
        "variableStates": [],
        "setupInfo": {
            "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
            "topContributorCount": 3,
            "startTime": "2021-01-02T12:00:00Z",
            "endTime": "2021-01-03T00:00:00Z"
        }
    },
    "results": []
}
```

* **resultId**: The information you need to trigger the **Get Batch Inference Results** API.
* **status**: Indicates whether the batch inference task was triggered successfully. If you see **CREATED**, you don't need to trigger this API again; use the **Get Batch Inference Results** API to get the detection status and anomaly results.
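As an illustrative sketch, the `resultId` can be pulled out of the results link saved from the response header (this assumes the link has the `detect-batch/{resultId}` shape documented above; the exact header name carrying the link isn't specified here, so treat it as something to confirm against your response):

```python
def extract_result_id(results_link):
    """Return the trailing resultId segment of the results link."""
    return results_link.rstrip("/").rsplit("/", 1)[-1]

# Hypothetical link matching the path format in this article.
link = ("https://<your-resource>.cognitiveservices.azure.com"
        "/anomalydetector/v1.1/multivariate/detect-batch/"
        "f5c8c004-555b-11ed-85c0-36f8cdfb3365")
result_id = extract_result_id(link)  # -> "f5c8c004-555b-11ed-85c0-36f8cdfb3365"
```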
## Get batch detection results

There's no content in the request body; you only need to put the `resultId` in the API path, which has the format:

**{{endpoint}}anomalydetector/v1.1/multivariate/detect-batch/{{resultId}}**
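Because the inference runs asynchronously, a client typically polls this endpoint until the summary status settles. A minimal sketch of that loop follows; the status values `CREATED` and `READY` come from the samples in this article, while `RUNNING` and `FAILED` are assumed intermediate and failure states, and `fetch_result` stands in for your own GET call:

```python
import time

TERMINAL_STATUSES = {"READY", "FAILED"}

def poll_until_done(fetch_result, interval_seconds=10, max_attempts=30):
    """Call fetch_result() (which should GET and parse the results endpoint)
    until the summary status reaches a terminal state."""
    for _ in range(max_attempts):
        result = fetch_result()
        if result["summary"]["status"] in TERMINAL_STATUSES:
            return result
        time.sleep(interval_seconds)
    raise TimeoutError("batch inference did not finish in time")

# Simulated responses standing in for successive GET calls.
responses = iter([
    {"summary": {"status": "CREATED"}},
    {"summary": {"status": "RUNNING"}},
    {"summary": {"status": "READY"}, "results": []},
])
final = poll_until_done(lambda: next(responses), interval_seconds=0)
```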
### Response

A sample response:
```json
{
    "resultId": "f5c8c004-555b-11ed-85c0-36f8cdfb3365",
    "summary": {
        "status": "READY",
        "errors": [],
        "variableStates": [
            {
                "variable": "series_0",
                "filledNARatio": 0.0,
                "effectiveCount": 721,
                "firstTimestamp": "2021-01-02T12:00:00Z",
                "lastTimestamp": "2021-01-03T00:00:00Z"
            },
            {
                "variable": "series_1",
                "filledNARatio": 0.0,
                "effectiveCount": 721,
                "firstTimestamp": "2021-01-02T12:00:00Z",
                "lastTimestamp": "2021-01-03T00:00:00Z"
            },
            {
                "variable": "series_2",
                "filledNARatio": 0.0,
                "effectiveCount": 721,
                "firstTimestamp": "2021-01-02T12:00:00Z",
                "lastTimestamp": "2021-01-03T00:00:00Z"
            },
            {
                "variable": "series_3",
                "filledNARatio": 0.0,
                "effectiveCount": 721,
                "firstTimestamp": "2021-01-02T12:00:00Z",
                "lastTimestamp": "2021-01-03T00:00:00Z"
            },
            {
                "variable": "series_4",
                "filledNARatio": 0.0,
                "effectiveCount": 721,
                "firstTimestamp": "2021-01-02T12:00:00Z",
                "lastTimestamp": "2021-01-03T00:00:00Z"
            }
        ],
        "setupInfo": {
            "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
            "topContributorCount": 3,
            "startTime": "2021-01-02T12:00:00Z",
            "endTime": "2021-01-03T00:00:00Z"
        }
    },
    "results": [
        {
            "timestamp": "2021-01-02T12:00:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.3377174139022827,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:01:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.24631972312927247,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:02:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.16678125858306886,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:03:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.23783254623413086,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:04:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.24804904460906982,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:05:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.11487171649932862,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:06:00Z",
            "value": {
                "isAnomaly": true,
                "severity": 0.32980116622958083,
                "score": 0.5666913509368896,
                "interpretation": [
                    {
                        "variable": "series_2",
                        "contributionScore": 0.4130149677604554,
                        "correlationChanges": {
                            "changedVariables": [
                                "series_0",
                                "series_4",
                                "series_3"
                            ]
                        }
                    },
                    {
                        "variable": "series_3",
                        "contributionScore": 0.2993065960239115,
                        "correlationChanges": {
                            "changedVariables": [
                                "series_0",
                                "series_4",
                                "series_3"
                            ]
                        }
                    },
                    {
                        "variable": "series_1",
                        "contributionScore": 0.287678436215633,
                        "correlationChanges": {
                            "changedVariables": [
                                "series_0",
                                "series_4",
                                "series_3"
                            ]
                        }
                    }
                ]
            },
            "errors": []
        }
    ]
}
```
The response contains the result status, variable information, inference parameters, and inference results.

* **variableStates**: Lists the information of each variable in the inference request.
* **setupInfo**: The request body submitted for this inference.
* **results**: Contains the detection results. There are three typical types of detection results.

    * Error code `InsufficientHistoricalData`. This usually happens only with the first few timestamps, because the model inferences data in a window-based manner and needs historical data to make a decision. For the first few timestamps there's insufficient historical data, so inference can't be performed on them. In this case, the error message can be ignored.

    * **isAnomaly**: `false` indicates the current timestamp isn't an anomaly; `true` indicates an anomaly at the current timestamp.
        * `severity` indicates the relative severity of the anomaly, and for abnormal data it's always greater than 0.
        * `score` is the raw output of the model on which the model makes a decision. `severity` is a value derived from `score`. Every data point has a `score`.

    * **interpretation**: This field only appears when a timestamp is detected as anomalous. It contains `variable`, `contributionScore`, and `correlationChanges`.

        * **contributionScore**: The contribution score of each variable. Higher contribution scores indicate a higher likelihood of the variable being the root cause. This list is often used for interpreting anomalies and diagnosing root causes.

        * **correlationChanges**: This field, included in `interpretation`, only appears when a timestamp is detected as anomalous. It contains `changedVariables`, which interprets which correlations between variables changed.

            * **changedVariables**: Shows which variables had a significant change in correlation with `variable`. The variables in this list are ranked by the extent of correlation changes.

> [!NOTE]
> A common pitfall is taking all data points with `isAnomaly`=`true` as anomalies. That may end up with too many false positives.
> You should use both `isAnomaly` and `severity` (or `score`) to sift out anomalies that aren't severe, and (optionally) use grouping to check the duration of the anomalies to suppress random noise.
> Refer to the [FAQ](../../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`.
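Following the note above, a minimal sketch of such filtering looks like this; the severity threshold of 0.3 is purely illustrative, not a service recommendation, and the two sample points are taken from the response above:

```python
def filter_anomalies(results, min_severity=0.3):
    """Keep only timestamps flagged as anomalous whose severity clears
    the threshold, instead of trusting isAnomaly alone."""
    return [
        r["timestamp"]
        for r in results
        if r["value"]["isAnomaly"] and r["value"]["severity"] >= min_severity
    ]

# Two points from the sample response above (scores abbreviated).
sample_results = [
    {"timestamp": "2021-01-02T12:05:00Z",
     "value": {"isAnomaly": False, "severity": 0.0, "score": 0.115}},
    {"timestamp": "2021-01-02T12:06:00Z",
     "value": {"isAnomaly": True, "severity": 0.3298, "score": 0.5667}},
]
anomalies = filter_anomalies(sample_results)  # -> ["2021-01-02T12:06:00Z"]
```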
## Next steps

* [Best practices of multivariate anomaly detection](../../concepts/best-practices-multivariate.md)
