---
title: Trigger batch inference with trained model
titleSuffix: Azure AI services
description: Trigger batch inference with trained model
#services: cognitive-services
author: mrbullwinkle
manager: nitinme
ms.service: azure-ai-anomaly-detector
ms.topic: conceptual
ms.date: 01/18/2024
ms.author: mbullwin
---

# Trigger batch inference with trained model

[!INCLUDE [Deprecation announcement](../includes/deprecation.md)]

You can choose either the batch inference API or the streaming inference API for detection.

| Batch inference API | Streaming inference API |
| ------------- | ---------------- |
| Suited to batch scenarios where you don't need inference results immediately and want to detect anomalies over a longer time period. | Recommended when you want inference results immediately and need to detect multivariate anomalies in real time. Also suitable if compressing and uploading data for batch inference is difficult. |

| API name | Method | Path | Description |
| ------ | ---- | ----------- | ------ |
|**Batch Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-batch | Trigger an asynchronous inference with `modelId`, for batch scenarios |
|**Get Batch Inference Results**| GET | `{endpoint}`/anomalydetector/v1.1/multivariate/detect-batch/`{resultId}` | Get batch inference results with `resultId` |
|**Streaming Inference**| POST | `{endpoint}`/anomalydetector/v1.1/multivariate/models/`{modelId}`:detect-last | Trigger a synchronous inference with `modelId`, for streaming scenarios |

## Trigger a batch inference

To perform batch inference, provide the blob URL containing the inference data, the start time, and the end time. The inference data must cover at least one sliding window length and contain at most **20,000** timestamps.

For better performance, we recommend sending no more than 150,000 data points per batch inference. *(Data points = number of variables × number of timestamps)*

This inference is asynchronous, so the results aren't returned immediately. Save the link to the results from the **response header**, which contains the `resultId`, so that you know where to get the results afterwards.
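
The call flow above can be sketched with a couple of small helpers: build the request URL from the path table, then pull the `resultId` out of the results link returned in the response header. The helper names are illustrative, not part of any SDK, and the exact header carrying the results link may vary by API version:

```python
def build_batch_inference_url(endpoint: str, model_id: str) -> str:
    # {endpoint}/anomalydetector/v1.1/multivariate/models/{modelId}:detect-batch
    return f"{endpoint.rstrip('/')}/anomalydetector/v1.1/multivariate/models/{model_id}:detect-batch"

def extract_result_id(results_link: str) -> str:
    # The results link ends with .../detect-batch/{resultId}
    return results_link.rstrip("/").rsplit("/", 1)[-1]
```

Store the extracted `resultId` in a variable; you'll need it for the **Get Batch Inference Results** call later.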

Failures are usually caused by model or data issues. You can't perform inference if the model isn't ready or the data link is invalid. Make sure that the training data and inference data are consistent: they should contain **exactly** the same variables, just with different timestamps. More variables, fewer variables, or inference with a different set of variables won't pass the data verification phase, and errors will occur. Data verification is deferred, so you'll see error messages only when you query the results.

### Request

A sample request:

```json
{
    "dataSource": "{{dataSource}}",
    "topContributorCount": 3,
    "startTime": "2021-01-02T12:00:00Z",
    "endTime": "2021-01-03T00:00:00Z"
}
```

#### Required parameters

* **dataSource**: The Blob URL that links to your folder or CSV file in Azure Blob Storage. The schema should be the same as your training data, either OneTable or MultiTable, and the variable number and names should be exactly the same as well.
* **startTime**: The start time of the data used for inference. If it's earlier than the actual earliest timestamp in the data, the actual earliest timestamp is used as the starting point.
* **endTime**: The end time of the data used for inference, which must be later than or equal to `startTime`. If `endTime` is later than the actual latest timestamp in the data, the actual latest timestamp is used as the ending point.

#### Optional parameters

* **topContributorCount**: A number N from **1 to 30** that specifies how many top contributing variables to include in the anomaly results. For example, if you have 100 variables in the model but only care about the top five contributing variables in the detection results, fill this field with 5. The default is **10**.
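
The parameter rules above can be checked client-side before sending. A minimal sketch (the helper name is hypothetical; the field names come from the sample request):

```python
from datetime import datetime

def build_batch_request(data_source: str, start_time: str, end_time: str,
                        top_contributor_count: int = 10) -> dict:
    # Assemble the request body shown above and apply the documented
    # constraints: topContributorCount in 1..30, endTime >= startTime.
    if not 1 <= top_contributor_count <= 30:
        raise ValueError("topContributorCount must be between 1 and 30")
    def parse(t: str) -> datetime:
        return datetime.fromisoformat(t.replace("Z", "+00:00"))
    if parse(end_time) < parse(start_time):
        raise ValueError("endTime must be later than or equal to startTime")
    return {
        "dataSource": data_source,
        "topContributorCount": top_contributor_count,
        "startTime": start_time,
        "endTime": end_time,
    }
```

Catching these errors locally saves a round trip, since server-side data verification is deferred until you query the results.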

### Response

A sample response:

```json
{
    "resultId": "aaaaaaaa-5555-1111-85bb-36f8cdfb3365",
    "summary": {
        "status": "CREATED",
        "errors": [],
        "variableStates": [],
        "setupInfo": {
            "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
            "topContributorCount": 3,
            "startTime": "2021-01-02T12:00:00Z",
            "endTime": "2021-01-03T00:00:00Z"
        }
    },
    "results": []
}
```

* **resultId**: The information you'll need to trigger the **Get Batch Inference Results API**.
* **status**: Indicates whether the batch inference task was triggered successfully. If you see **CREATED**, you don't need to trigger this API again; use the **Get Batch Inference Results API** to get the detection status and anomaly results.

## Get batch detection results

The request body is empty; you only need to put the `resultId` in the API path, which has the following format:

**{{endpoint}}anomalydetector/v1.1/multivariate/detect-batch/{{resultId}}**
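
Because the batch inference is asynchronous, you typically poll this endpoint until the summary status settles. A minimal polling sketch (the callable abstraction and the non-terminal status values other than **CREATED** are assumptions; check the API reference for the full status list):

```python
import time

def poll_batch_results(fetch_results, timeout_s: float = 300, interval_s: float = 5) -> dict:
    # fetch_results: a callable that GETs the results URL above and returns
    # the parsed JSON body. Returns once the status is terminal.
    deadline = time.monotonic() + timeout_s
    while True:
        response = fetch_results()
        if response["summary"]["status"] in ("READY", "FAILED"):
            return response
        if time.monotonic() >= deadline:
            raise TimeoutError("batch inference did not finish in time")
        time.sleep(interval_s)
```

Injecting the GET call as a callable keeps the retry logic testable without a live endpoint.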

### Response

A sample response:

```json
{
    "resultId": "aaaaaaaa-5555-1111-85bb-36f8cdfb3365",
    "summary": {
        "status": "READY",
        "errors": [],
        "variableStates": [
            {
                "variable": "series_0",
                "filledNARatio": 0.0,
                "effectiveCount": 721,
                "firstTimestamp": "2021-01-02T12:00:00Z",
                "lastTimestamp": "2021-01-03T00:00:00Z"
            },
            {
                "variable": "series_1",
                "filledNARatio": 0.0,
                "effectiveCount": 721,
                "firstTimestamp": "2021-01-02T12:00:00Z",
                "lastTimestamp": "2021-01-03T00:00:00Z"
            },
            {
                "variable": "series_2",
                "filledNARatio": 0.0,
                "effectiveCount": 721,
                "firstTimestamp": "2021-01-02T12:00:00Z",
                "lastTimestamp": "2021-01-03T00:00:00Z"
            },
            {
                "variable": "series_3",
                "filledNARatio": 0.0,
                "effectiveCount": 721,
                "firstTimestamp": "2021-01-02T12:00:00Z",
                "lastTimestamp": "2021-01-03T00:00:00Z"
            },
            {
                "variable": "series_4",
                "filledNARatio": 0.0,
                "effectiveCount": 721,
                "firstTimestamp": "2021-01-02T12:00:00Z",
                "lastTimestamp": "2021-01-03T00:00:00Z"
            }
        ],
        "setupInfo": {
            "dataSource": "https://mvaddataset.blob.core.windows.net/sample-onetable/sample_data_5_3000.csv",
            "topContributorCount": 3,
            "startTime": "2021-01-02T12:00:00Z",
            "endTime": "2021-01-03T00:00:00Z"
        }
    },
    "results": [
        {
            "timestamp": "2021-01-02T12:00:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.3377174139022827,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:01:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.24631972312927247,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:02:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.16678125858306886,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:03:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.23783254623413086,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:04:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.24804904460906982,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:05:00Z",
            "value": {
                "isAnomaly": false,
                "severity": 0.0,
                "score": 0.11487171649932862,
                "interpretation": []
            },
            "errors": []
        },
        {
            "timestamp": "2021-01-02T12:06:00Z",
            "value": {
                "isAnomaly": true,
                "severity": 0.32980116622958083,
                "score": 0.5666913509368896,
                "interpretation": [
                    {
                        "variable": "series_2",
                        "contributionScore": 0.4130149677604554,
                        "correlationChanges": {
                            "changedVariables": [
                                "series_0",
                                "series_4",
                                "series_3"
                            ]
                        }
                    },
                    {
                        "variable": "series_3",
                        "contributionScore": 0.2993065960239115,
                        "correlationChanges": {
                            "changedVariables": [
                                "series_0",
                                "series_4",
                                "series_3"
                            ]
                        }
                    },
                    {
                        "variable": "series_1",
                        "contributionScore": 0.287678436215633,
                        "correlationChanges": {
                            "changedVariables": [
                                "series_0",
                                "series_4",
                                "series_3"
                            ]
                        }
                    }
                ]
            },
            "errors": []
        }
    ]
}
```

The response contains the result status, variable information, inference parameters, and inference results.

* **variableStates**: Lists the information of each variable in the inference request.
* **setupInfo**: The request body submitted for this inference.
* **results**: Contains the detection results. There are three typical types of detection results.

    * Error code `InsufficientHistoricalData`: This usually happens only with the first few timestamps, because the model inferences data in a window-based manner and needs historical data to make a decision. For the first few timestamps, there's insufficient historical data, so inference can't be performed on them. In this case, the error message can be ignored.

    * **isAnomaly**: `false` indicates the current timestamp isn't an anomaly. `true` indicates an anomaly at the current timestamp.
        * `severity` indicates the relative severity of the anomaly; for abnormal data it's always greater than 0.
        * `score` is the raw output of the model, on which the model makes its decision. `severity` is a value derived from `score`. Every data point has a `score`.

    * **interpretation**: This field appears only when a timestamp is detected as anomalous. It contains `variable`, `contributionScore`, and `correlationChanges`.

    * **contributionScore**: The contribution score of each variable. Higher contribution scores indicate a higher likelihood of being the root cause. This list is often used for interpreting anomalies and diagnosing root causes.

    * **correlationChanges**: This field appears only within the interpretation of an abnormal timestamp. It contains `changedVariables` and `changedValues`, which interpret which correlations between variables changed.

    * **changedVariables**: Shows which variables have a significant change in correlation with `variable`. The variables in this list are ranked by the extent of their correlation changes.

> [!NOTE]
> A common pitfall is treating all data points with `isAnomaly`=`true` as anomalies. That may result in too many false positives.
> Use both `isAnomaly` and `severity` (or `score`) to filter out anomalies that aren't severe, and (optionally) use grouping to check the duration of the anomalies to suppress random noise.
> Refer to the [FAQ](../concepts/best-practices-multivariate.md#faq) in the best practices document for the difference between `severity` and `score`.
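
The filtering described in the note can be sketched as a small post-processing step over the `results` array (the 0.3 threshold is purely illustrative; tune it to your data):

```python
def severe_anomalies(results: list, min_severity: float = 0.3) -> list:
    # Keep only timestamps flagged anomalous whose severity clears the
    # threshold, combining isAnomaly and severity as the note suggests.
    return [
        point["timestamp"]
        for point in results
        if point["value"]["isAnomaly"] and point["value"]["severity"] >= min_severity
    ]
```

Applied to the sample response above, only the `12:06` point (severity ≈ 0.33) survives; the earlier points are all `isAnomaly: false`.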

## Next steps

* [Best practices of multivariate anomaly detection](../concepts/best-practices-multivariate.md)
---
title: Create an Anomaly Detector resource
titleSuffix: Azure AI services
description: Create an Anomaly Detector resource
#services: cognitive-services
author: mrbullwinkle
manager: nitinme
ms.service: azure-ai-anomaly-detector
ms.topic: conceptual
ms.date: 01/18/2024
ms.author: mbullwin
---

# Create an Anomaly Detector resource

[!INCLUDE [Deprecation announcement](../includes/deprecation.md)]

Anomaly Detector is a cloud-based Azure AI service that uses machine-learning models to detect anomalies in your time series data. Here, you'll learn how to create an Anomaly Detector resource in the Azure portal.

## Create an Anomaly Detector resource in the Azure portal

1. Create an Azure subscription if you don't have one - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
1. Once you have your Azure subscription, [create an Anomaly Detector resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector) in the Azure portal, and fill out the following fields:

    - **Subscription**: Select your current subscription.
    - **Resource group**: The [Azure resource group](/azure/cloud-adoption-framework/govern/resource-consistency/resource-access-management#what-is-an-azure-resource-group) that will contain your resource. You can create a new group or add the resource to a pre-existing group.
    - **Region**: Select your local region; see supported [Regions](../regions.md).
    - **Name**: Enter a name for your resource. We recommend using a descriptive name, for example *multivariate-msft-test*.
    - **Pricing tier**: The cost of your resource depends on the pricing tier you choose and your usage. For more information, see [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/anomaly-detector/). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

    > [!div class="mx-imgBorder"]
    > ![Screenshot of create a resource user experience](../media/create-resource/create-resource.png)

1. Select **Identity** in the banner above and set the status to **On**, which enables Anomaly Detector to access your data in Azure in a secure way, then select **Review + create**.

    > [!div class="mx-imgBorder"]
    > ![Screenshot of enable managed identity](../media/create-resource/enable-managed-identity.png)

1. Wait a few seconds until validation passes, then select the **Create** button in the bottom-left corner.
1. After you select create, you'll be redirected to a new page that says **Deployment in progress**. After a few seconds, you'll see a message that says **Your deployment is complete**; then select **Go to resource**.

## Get Endpoint URL and keys

In your resource, select **Keys and Endpoint** on the left navigation bar, and copy the **key** (both key1 and key2 will work) and **endpoint** values from your Anomaly Detector resource. You'll need the key and endpoint values to connect your application to the Anomaly Detector API.

> [!div class="mx-imgBorder"]
> ![Screenshot of copy key and endpoint user experience](../media/create-resource/copy-key-endpoint.png)
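
In code, the key you copied is passed with each REST request in the `Ocp-Apim-Subscription-Key` header, while the endpoint forms the base of the request URL. A minimal sketch (the helper name is illustrative):

```python
def anomaly_detector_headers(key: str) -> dict:
    # Azure AI services REST calls authenticate with the resource key in
    # the Ocp-Apim-Subscription-Key header; the endpoint goes in the URL.
    return {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    }
```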

That's it! You can start preparing your data for the next steps!

## Next steps

* [Join us to get more support!](https://aka.ms/adadvisorsjoin)
---
title: Run Anomaly Detector Container in Azure Container Instances
titleSuffix: Azure AI services
description: Deploy the Anomaly Detector container to an Azure Container Instance, and test it in a web browser.
#services: cognitive-services
author: mrbullwinkle
manager: nitinme
ms.service: azure-ai-anomaly-detector
ms.custom: devx-track-azurecli
ms.topic: how-to
ms.date: 01/18/2024
ms.author: mbullwin
---

# Deploy an Anomaly Detector univariate container to Azure Container Instances

[!INCLUDE [Deprecation announcement](../includes/deprecation.md)]

Learn how to deploy the Azure AI services [Anomaly Detector](../anomaly-detector-container-howto.md) container to Azure [Container Instances](/azure/container-instances/). This procedure demonstrates creating an Anomaly Detector resource, pulling the associated container image, and exercising the orchestration of the two from a browser. Using containers can shift developers' attention away from managing infrastructure and toward application development.

[!INCLUDE [Prerequisites](../../containers/includes/container-preview-prerequisites.md)]

[!INCLUDE [Create an Azure AI Anomaly Detector resource](../includes/create-anomaly-detector-resource.md)]

[!INCLUDE [Create an Anomaly Detector container on Azure Container Instances](../../containers/includes/create-container-instances-resource-from-azure-cli.md)]

[!INCLUDE [API documentation](../../includes/cognitive-services-containers-api-documentation.md)]

## Next steps

* Review [Install and run containers](../anomaly-detector-container-configuration.md) for pulling the container image and running the container
* Review [Configure containers](../anomaly-detector-container-configuration.md) for configuration settings
* [Learn more about the Anomaly Detector API service](https://go.microsoft.com/fwlink/?linkid=2080698&clcid=0x409)
