articles/data-explorer/anomaly-detection.md: 10 additions, 2 deletions
@@ -3,7 +3,7 @@ title: Time series anomaly detection and forecasting in Azure Data Explorer
description: Learn how to analyze time series data for anomaly detection and forecasting using Azure Data Explorer.
author: orspod
ms.author: orspodek
-ms.reviewer: jasonh
+ms.reviewer: adieldar
ms.service: data-explorer
ms.topic: conceptual
ms.date: 04/24/2019
@@ -26,6 +26,8 @@ To create a decomposition model, use the function [`series_decompose()`](/azure/
For example, you can decompose traffic of an internal web service by using the following query:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA3WQ3WrDMAyF7/sUukvCnDXJGIOVPEULuwxqoixm/gm2+jf28JObFjbYrmyho3M+yRCD1a5jaGFAJtaW8qaqX8qqLqvnYrMySYHnvxRNWT1B07xW1U03JFEzbVYDWd9Z/KAuUtAUm9UXpLJcSnAH2+LxPZe3AO9gJ6ZbRjvDGLy9EbG/BUemOXnvLxD1AOJ1mijQtWhbyHbbOgOA9RogkqGeAaXn3g1BooVb6OiDNHpD6CjAUccDGv2JrL0TSzozuQHyPYqHdqRkDKN3aBRwkJaCQJIoQ4VsuXh2A/Xezj5SWkVBWSvI0vSoOSsWpLtEpyDwY4KTW8nnJ5ws+2+eAhSyOxjkd+HDVVcIfHplp2TYTxgYTpqnnDUbarM32gPO86PY4jjqfmGw3vGkftNlCi5xNprbWW5kYvENQQnqDh8CAAA=)**\]**

```kusto
let min_t = datetime(2017-01-05);
let max_t = datetime(2017-02-03 22:00);
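// The diff truncates the query here. A hedged sketch of how a decomposition query over this
// window might continue (assumes the Samples database's demo_make_series2 table shown later
// in this diff; not part of this commit):
let dt = 2h;
demo_make_series2
| make-series num=avg(num) on TimeStamp from min_t to max_t step dt by sid
| where sid == 'TS1'   // select a single time series
| extend (baseline, seasonal, trend, residual) = series_decompose(num)
| render timechart with(title='Web service traffic decomposition')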
@@ -51,6 +53,8 @@ The function [`series_decompose_anomalies()`](/azure/kusto/query/series-decompos
The following query allows you to detect anomalies in internal web service traffic:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA3WR3W7CMAyF73mKI25KpRbaTmjSUJ8CpF1WoXVptPxUifmb9vBLoGO7GFeR7ePv2I4ihpamYdToBBNLTYuqKF/zosyLdbqZqagQl/8UVV68oKreimLSdVFUDZtZR9o2WnxQ48lJ8tXsCzHM7yHMUdfidFiEN4U12AXoloUe0Turp4nYTsaeaYzs/RVedgis80CObkFdI9ltywTAagV4UtQyRKiZgyLEaTGZ9taFQqtIGHI4SX8USn4KltYEJF2YTIeFMFaHPPkMvrWOMuxFoEpDaVjujmo6aq0erafmIY+7ZCiX6wx5mSGJHb3kJA1sF8jB8q69toNwjLPkYfGTseqoja//eLNkRXXyTnuIcVyCneh72cL2YQdtDQ8ZHvIkDcsfPWH+3AvPvObx0FMXD/RLhfDYW9VhtNKwj/8U69M1b2S//AbRUQMWQQIAAA==)**\]**

```kusto
let min_t = datetime(2017-01-05);
let max_t = datetime(2017-02-03 22:00);
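// The diff truncates the query here. A hedged sketch of how an anomaly-detection query over
// this window might continue (assumes the Samples demo_make_series2 table; not part of this
// commit):
let dt = 2h;
demo_make_series2
| make-series num=avg(num) on TimeStamp from min_t to max_t step dt by sid
| where sid == 'TS1'
| extend (anomalies, score, baseline) = series_decompose_anomalies(num, 1.5)
| render anomalychart with(anomalycolumns=anomalies, title='Web service traffic anomalies')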
@@ -74,6 +78,8 @@ The function [`series_decompose_forecast()`](/azure/kusto/query/series-decompose
The following query allows you to predict next week's web service traffic:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA22QzW6DMBCE73mKuQFqKISqitSIW98gkXpEDl5iK9hG9uanUR++dqE99YRGO8x845EYRtuO0UIKJtaG8qbebMt6U9avxW41Joe4/+doyvoFTfNW14tPJlOjZqGc1w9n263crSQZ1xlxpi6Q1xSa1ReSLGcJezGtuJ7y+C3gLA6xZM/CTBi8MwshuxnkaUlGYJpS5/ETQUvEzJsiTz+ibZEd9psMQFUBgUbqGSLe7GkkpBVYygfn46EfSVjyuOpwEaN+CNbOxki6M1mZTNSLkAbOv3WSemcmF6j7vSX8dcTUlvOFsZJcFDHFx4wYnmp7JTzjplnlrHmkNvugI8Q0PYO9GAbdww0RyDjLav1XHLnBimAjEG5E5zQ7vRP284x36hOOTtxZ8Q3The8P2QEAAA==)**\]**

```kusto
let min_t = datetime(2017-01-05);
let max_t = datetime(2017-02-03 22:00);
@@ -83,7 +89,7 @@ demo_make_series2
| make-series num=avg(num) on TimeStamp from min_t to max_t+horizon step dt by sid
| where sid == 'TS1' // select a single time series for a cleaner visualization
-| render timechart with(title='Web app. traffic of a month, forecasting the next week by Time Series Decmposition')
+| render timechart with(title='Web app. traffic of a month, forecasting the next week by Time Series Decomposition')
```

@@ -97,6 +103,8 @@ Azure Data Explorer query language syntax enables a single call to process multi
The following query shows the processing of three time series simultaneously:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA21Qy26DMBC85yvmFlChcUirSI34ikTqETl4KVawjfDmqX587UCaHuqLtePxPLYjhtG2YpRQkom1oaQQy3Uulrl4TzezLjLk5T9GkYsViuJDiImnIqlox6F1g745W67VZqbIuMrIA1WeBk2+mH0jjvk4wh5NKU9fSbhTOItdMNmyND2awZkpIbsxyMukDM/UR8/9FV6rIEkXJqvgmsYTl7X0lISHspzvtqt5hjdxPxkeYBHA4gGKFMBiAUilIAfWja617CY1NG4ASX/FSfuj7PRNsg4ZXANz7Fj3HSGuBmOjZ5hYbcSqIBwbZpNk+iQFcQpx4/omrqLamd55qh5v41d22nIybWChOI0qQ9Cg4e5ftyE6zprbhDV3VM4/aQ/Z96/gQTahU4wsYZzlNvs11vYL3BJsCIQz0eHed/W30jz9AUEBI0ktAgAA)**\]**
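The three-series query behind that link is not shown in this diff. A hedged sketch of a query that processes every partition of the series set in a single call (assuming the Samples demo_make_series2 table and its `sid` partition column; not part of this commit) might look like:

```kusto
let min_t = datetime(2017-01-05);
let max_t = datetime(2017-02-03 22:00);
let dt = 2h;
demo_make_series2
| make-series num=avg(num) on TimeStamp from min_t to max_t step dt by sid
| extend (anomalies, score, baseline) = series_decompose_anomalies(num, 1.5)
| render anomalychart with(anomalycolumns=anomalies, title='Web service traffic, anomalies per series')
```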
articles/data-explorer/machine-learning-clustering.md: 17 additions, 1 deletion
@@ -3,7 +3,7 @@ title: Machine learning capability in Azure Data Explorer
description: Use machine learning clustering for Root Cause Analysis in Azure Data Explorer.
author: orspod
ms.author: orspodek
-ms.reviewer: jasonh
+ms.reviewer: adieldar
ms.service: data-explorer
ms.topic: conceptual
ms.date: 04/29/2019
@@ -21,6 +21,8 @@ Azure Data Explorer has three Machine Learning plugins: [`autocluster`](/azure/k
A common scenario includes a data set selected by specific criteria, such as a time window that exhibits anomalous behavior, high-temperature device readings, long-duration commands, and top-spending users. We would like a simple and fast way to find common patterns (segments) in the data. Patterns are a subset of the data set whose records share the same values over multiple dimensions (categorical columns). The following query builds and shows a time series of service exceptions over a week in ten-minute bins:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA5XPsaoCQQyF4d6nCFa7oHCtZd9B0F6G8ajByWTJZHS5+PDOgpVgYRn485EkOAnno9NAriWGFKw7QfQYUy0O43zZ0JNKFQnG/5jrbmeIXHBgwd6DjH2/JVqk2QrTL1aYvlifa4tni29YlzaiUK4yRK3Zu54006dBZ1N5/+X6PqpRI23+pFGGfIKRtz5egzk92K+dsycMyz3szhGEKWJ01lxI760O9ABuq0bMcvV2hqFoqnOz7F9BdSHlSgEAAA==)**\]**

```kusto
let min_t = toscalar(demo_clustering1 | summarize min(PreciseTimeStamp));
let max_t = toscalar(demo_clustering1 | summarize max(PreciseTimeStamp));
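// The diff truncates the query here. A hedged sketch of how the ten-minute-bin series might
// be built over this window (assumes the Samples demo_clustering1 table; not part of this
// commit):
demo_clustering1
| make-series num=count() on PreciseTimeStamp from min_t to max_t step 10m
| render timechart with(title='Service exceptions over a week, 10-minute bins')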
@@ -35,6 +37,8 @@ The service exception count correlates with the overall service traffic. You can
The second spike in the data occurs on Tuesday afternoon. The following query is used to further diagnose this spike: it redraws the chart around the spike at higher resolution (eight hours in one-minute bins) to verify whether it's a sharp spike and to view its borders.

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAAyXNwQrCMBAE0Hu/YvHUooWkghSl/yDoyUsJyWpCk2xJNnjx403pbeYwbzwyBBdnnoxiZBewHYS89GLshzNIeRWiuzUGA83al8yYXPzI5gdBLdjnWjFDLGHSVCK3HVCEe0LtMj4r9mAVVngnCvsLMO3hOFqo2goyVCxhNJhgu9dWJYavY9uyY4/T4UV1XVm2CEM0kFe34AnkBhXGOs7kCzuKh+4P3/XM5M8AAAA=)**\]**

```kusto
let min_t=datetime(2016-08-23 11:00);
demo_clustering1
@@ -46,6 +50,8 @@ demo_clustering1
We see a narrow two-minute spike from 15:00 to 15:02. In the following query, count the exceptions in this two-minute window:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA8tJLVHIzcyLL0hNzI4vsU1JLEktycxN1TAyMDTTNbDQNTJWMDS1MjDQtObKASlNrCCk1AioNCU1Nz8+Oae0uCS1KDMv3ZCrRqE8I7UoVSGgKDU5szg1BKgvuCQxt0AhKbWkPDU1TwPhBj09hCWaQI3J+aV5JQACnQoRpwAAAA==)**\]**

```kusto
let min_peak_t=datetime(2016-08-23 15:00);
let max_peak_t=datetime(2016-08-23 15:02);
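// The diff truncates the query here. A hedged sketch of the count over the two-minute window
// (not part of this commit):
demo_clustering1
| where PreciseTimeStamp between(min_peak_t .. max_peak_t)
| count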
@@ -60,6 +66,8 @@ demo_clustering1
In the following query, sample 20 exceptions out of 972:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA4XOsQrCMBSF4b1Pccd2aLmJKKL4DoLu4doeNDSJJb1SBx/eOHV0/37OCVCKPrkJMjo9DaJQH1FbNruW963dkNkemJtjFX5U3v+oLXRAfLo+vGZF9uluqg8tD2TQOaP3M66lu6jEiW7QBUj1+qHr1pGmhCojyPIX7QHvzakAAAA=)**\]**

```kusto
let min_peak_t=datetime(2016-08-23 15:00);
let max_peak_t=datetime(2016-08-23 15:02);
@@ -95,6 +103,8 @@ demo_clustering1
Even though there are fewer than a thousand exceptions, it's still hard to find common segments because there are multiple values in each column. You can use the [`autocluster()`](/azure/kusto/query/autoclusterplugin) plugin to instantly extract a short list of common segments and find the interesting clusters within the spike's two minutes, as seen in the following query:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA4WOsQrCMBRF937FG5OhJYkoovQfBN1DbC8aTNqSvlgHP94IQkf3c+65AUzRD3aCe1hue8dgHyGM0rta7WuzIb09KCWPVfii7vUPNQXtEUfbhTwzkh9uunrTckcCnRI6P+NSvDO7ONEVvACDWD80zRqRRcTThVxa5DKPv00hP81KL1+4AAAA)**\]**

```kusto
let min_peak_t=datetime(2016-08-23 15:00);
let max_peak_t=datetime(2016-08-23 15:02);
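// The diff truncates the query here. A hedged sketch of how autocluster is typically applied
// to the two-minute window (not part of this commit):
demo_clustering1
| where PreciseTimeStamp between(min_peak_t .. max_peak_t)
| evaluate autocluster()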
@@ -119,6 +129,8 @@ Autocluster uses a proprietary algorithm for mining multiple dimensions and extr
You can also use the [`basket()`](/azure/kusto/query/basketplugin) plugin as seen in the following query:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA4WOsQ6CMBgGd57iH9sB0tZojMZ3MNG9KfBFG1og7Y84+PDWidH9LncBTNGPdoYbLF96x2AfIYzSh1oda7MjvT8pJc9V+KHu/Q81Be0RJ9uFJTOSHx+6+tD6RAJdEzqfcS/ejV2cqQWvwCi2h6bZIrKIeLmwlBa1Lg9gIb9KJv2TswAAAA==)**\]**

```kusto
let min_peak_t=datetime(2016-08-23 15:00);
let max_peak_t=datetime(2016-08-23 15:02);
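// The diff truncates the query here. A hedged sketch of the equivalent basket() call over the
// same window (not part of this commit):
demo_clustering1
| where PreciseTimeStamp between(min_peak_t .. max_peak_t)
| evaluate basket()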
@@ -153,6 +165,8 @@ The [`diffpatterns()`](/azure/kusto/query/diffpatternsplugin) plugin overcomes t
In the query below, we use `diffpatterns` to find interesting clusters within the spike's two minutes that are different from the clusters within the baseline. We define the baseline window as the eight minutes before 15:00 (when the spike started). We also need to extend the set by a binary column (AB) specifying whether a specific record belongs to the baseline or to the anomalous set. `Diffpatterns` implements a supervised learning algorithm, where the two class labels are generated by the anomalous versus baseline flag (AB).

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA42QzU+DQBDF7/wVcwOi5UtrmhJM4OzBRO9kWqbtpssuYacfGv94t0CrxFTd02by5jfvPUkMtVBlQ7gtOauQiUVNXhLFD5NoNknuIJ7Oo8hPHXmS4vEvaXKWWuoCDUmh6Jr8fj79Tv6HfOanEIbwRLgnQFhjAwviA5EC3hCcCYCq6gamEVsC1oB7LfoRt6iMYKEVvGtFQXfeNFKc7mXe2MjNVzl+mARR6lRU63Ipd4apFWodOx9w2FBL4D23tBSGXi3mhbG+OPPGVQTB+ITvg24dGN7vlN5JTxhc+dYAHZls4LzIxGr1k/B4iXcLbq50jfLNtd9i8OB2jD3KnW0dKstokG08Zby8uLbyCfX/tG46AgAA)**\]**

```kusto
let min_peak_t=datetime(2016-08-23 15:00);
let max_peak_t=datetime(2016-08-23 15:02);
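// The diff truncates the query here. A hedged sketch of the diffpatterns call described above
// (the baseline-window variable names and class labels are assumptions; not part of this
// commit):
let min_baseline_t=datetime(2016-08-23 14:52);
let max_baseline_t=datetime(2016-08-23 15:00);
demo_clustering1
| where PreciseTimeStamp between(min_baseline_t .. max_peak_t)
| extend AB = iff(PreciseTimeStamp >= min_peak_t, 'Anomaly', 'Baseline')
| evaluate diffpatterns(AB, 'Anomaly', 'Baseline')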
@@ -178,6 +192,8 @@ demo_clustering1
The most dominant segment is the same segment that was extracted by `autocluster`; its coverage of the two-minute anomalous window is also 65.74%, but its coverage of the eight-minute baseline window is only 1.7%. The difference is 64.04%. This difference seems to be related to the anomalous spike. You can verify this assumption by splitting the original chart into the records belonging to this problematic segment versus the other segments, as seen in the query below:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA5WRsWrDMBCG9zzF4cmGGuJUjh2Ktw7tUkLTzuEsnRNRnRQkuSQlD185yRTo0EWIO913/J8MRWBttxE6iC5INOhzRey20owhktd2V8EZwsiMXv/Q9Dpfe5I60Idm2kTkQ1E8AczMxMLjf1h4/IN1PzY7Ax0jWQWBdomvhyF/p512FroOMsIxA0zdTdpKn1bHSzmMzbX8TAfjTkw2vqpLp69VpYQaatEogXOBsqrbtl5WDake6yabXWjkv7WkFxeuPGqG5VzWqhQrIUqx6B/L1WKB6aBViy01imT2ANnau94QT9c35xlNVqQAjF9UhpSHAtiRO+lGG/MCUoZ7CTB4x7ePie5mNbk4QDVn6E+ThUT0SQh5iGlM7tHHX4WFgLHOAQAA)**\]**

```kusto
let min_t = toscalar(demo_clustering1 | summarize min(PreciseTimeStamp));
let max_t = toscalar(demo_clustering1 | summarize max(PreciseTimeStamp));
articles/data-explorer/time-series-analysis.md: 27 additions, 1 deletion
@@ -3,7 +3,7 @@ title: Analyze time series data using Azure Data Explorer
description: Learn how to analyze time series data in the cloud using Azure Data Explorer.
author: orspod
ms.author: orspodek
-ms.reviewer: mblythe
+ms.reviewer: adieldar
ms.service: data-explorer
ms.topic: conceptual
ms.date: 04/07/2019
@@ -22,6 +22,8 @@ The first step in time series analysis is to partition and transform the origina
The input table *demo_make_series1* contains 600K records of arbitrary web service traffic. Use the command below to sample 10 records:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA0tJzc2Pz03MTo0vTi3KTC02VKhRKAFyFQwNADOyzKUbAAAA)**\]**

```kusto
demo_make_series1 | take 10
```
@@ -44,6 +46,8 @@ The resulting table contains a timestamp column, three contextual dimensions col
Since there are no metrics, we can only build a set of time series representing the traffic count itself, partitioned by OS using the following query:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA5XPwQrCMBAE0Hu/Yo4NVLBn6Td4ULyWtV1tMJtIsoEq/XhbC4J48jgw+5h1rBDrW0UDDakjR7HsWUIrdOM2cbScakxIWYSiffJSL49W+KAkd2N2hVsMGv8yaPw2furFhCVu1gifpelC9loa9Hyh7LTZInh8FFiPSP7K5fufap1UoR4Mzg/s04njjEb2PUfofNYNFPUFtJiguAEBAAA=)**\]**

```kusto
let min_t = toscalar(demo_make_series1 | summarize min(TimeStamp));
let max_t = toscalar(demo_make_series1 | summarize max(TimeStamp));
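// The diff truncates the query here. A hedged sketch of the OS-partitioned series (the OsVer
// column name is an assumption about the Samples table; not part of this commit):
demo_make_series1
| make-series num=count() default=0 on TimeStamp from min_t to max_t step 1h by OsVer
| render timechart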
@@ -76,6 +80,8 @@ Filtering is a common practice in signal processing and useful for time series p
- [`series_iir()`](/azure/kusto/query/series-iirfunction): Applying IIR filter. Used for exponential smoothing and cumulative sum.

`Extend` the time series set by adding a new moving average series of size 5 bins (named *ma_num*) to the query:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA5WPQavCMBCE7/6KOSYQ4fXgSfobPDx517C2q4bXpLLZQBV/vKkFQTx5WRh25tvZgRUxJK9ooWPuaCAxPcfRR/pnn1kC5wZ35BIjSbjxbDf7EPlXKV6s3a6GmUHTVwya3hkf9tUds1wvEqnEthtLUmPR85HKoO0PxoQXBSFBKJ3YPP9xSyWH5mxxuGKX/1gqlCfl1Neln5EL3R+DmCodhC9MahqHjXVQKbxMW5NScyzQerA7k+gDa1tswzsBAAA=)**\]**

```kusto
let min_t = toscalar(demo_make_series1 | summarize min(TimeStamp));
let max_t = toscalar(demo_make_series1 | summarize max(TimeStamp));
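// The diff truncates the query here. A hedged sketch of the 5-bin moving average described
// above, using series_fir with a uniform filter (column names are assumptions; not part of
// this commit):
demo_make_series1
| make-series num=count() default=0 on TimeStamp from min_t to max_t step 1h by OsVer
| extend ma_num = series_fir(num, repeat(1, 5), true, true)
| render timechart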
@@ -95,6 +101,8 @@ ADX supports segmented linear regression analysis to estimate the trend of the t
Example of `series_fit_line()` and `series_fit_2lines()` functions in a time series query:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA0tJzc2PL04tykwtNuKqUUitKEnNS1GACMSnZZbEG+Vk5qUWa1Rq6iCLggSBYkAdRUD1qUUKIIHkjMSiEoXyzJIMjYrk/JzS3DzbCk0AUIIJ02EAAAA=)**\]**

```kusto
demo_series2
| extend series_fit_2lines(y), series_fit_line(y)
@@ -116,6 +124,8 @@ Many metrics follow seasonal (periodic) patterns. User traffic of cloud services
The following example applies seasonality detection to one month of web service traffic (2-hour bins):

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA0tJzc2PL04tykwtNuaqUShKzUtJLVIoycxNTc5ILCoBAHrjE80fAAAA)**\]**

```kusto
demo_series3
| render timechart
@@ -129,6 +139,8 @@ demo_series3
> [!NOTE]
> It's an anomaly if specific distinct periods don't exist

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA12OwQ6CMBBE737FHKmpVtAr39IguwkYyzZ0IZj48TZSLx533szOEAfxieeR0/XwRpzlwb2iilkSShapl5mTQYvd5QvxxJqd1bQEi8vZor6RawaLxsA5FewcOjBKBOP0PXUMXL7lyrCeeIvdRPjrzIw35Qyoe6W2GY4qJMv9yb91xtX0AS7N323BAAAA)**\]**

```kusto
demo_series3
| project (periods, scores) = series_periods_detect(num, 0., 14d/2h, 2) //to detect the periods in the time series
@@ -148,6 +160,8 @@ The function detects daily and weekly seasonality. The daily scores less than th
Arithmetic and logical operations can be done on a time series. Using [series_subtract()](/azure/kusto/query/series-subtractfunction), we can calculate a residual time series, that is, the difference between the original raw metric and a smoothed one, and look for anomalies in the residual signal:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA5WQQU/DMAyF7/sVT5waqWjrgRPqb+AAgmPltR6LSNLJcdhA+/G4izRAnLhEerbfl2cHVkSfBkUPnfNIgaSZOM5DpDceMovn3OGMXGIk8Z+8jDdPPvKjUjw4d78KC4NO/2LQ6Tfjz/jqjEXeVolUYj/OJWnjMPGOStB+gznhSoFPEEqv3Fz2aWukFt3eYfuBh/zMYlA+KafJmsOCrPRh56Ux2UL4wKRN1+LOtVApXF/37RTOfioUfvpz2arQqBVS2Q7rtc6wa4wlkPLVCLXIqE7DHvcsXOOh73Hz4tM0HzO6zQ1gDOx8UOvZrtayst0Y7z4babkkYQxMyQbGPYnCiGIxTS/fXGpfwk+n7uQBAAA=)**\]**
164
+
151
165
```kusto
152
166
let min_t = toscalar(demo_make_series1 | summarize min(TimeStamp));
153
167
let max_t = toscalar(demo_make_series1 | summarize max(TimeStamp));
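// The diff truncates the query here. A hedged sketch of the residual calculation described
// above: subtract the moving average from the raw series, then score outliers on the residual
// (column names are assumptions; not part of this commit):
demo_make_series1
| make-series num=count() default=0 on TimeStamp from min_t to max_t step 1h by OsVer
| extend ma_num = series_fir(num, repeat(1, 5), true, true)
| extend residual_num = series_subtract(num, ma_num)
| extend outliers = series_outliers(residual_num)
| render timechart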
@@ -169,6 +183,8 @@ demo_make_series1
The example below shows how these functions can run at scale on thousands of time series in seconds for anomaly detection. To see a few sample telemetry records of a DB service's read count metric over four days, run the following query:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA0tJzc2Pz03Mq4wvTi3KTC025KpRKEnMTlUwAQArfAiiGgAAAA==)**\]**

```kusto
demo_many_series1
| take 4
@@ -184,6 +200,8 @@ demo_many_series1
And simple statistics:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA0tJzc2Pz03Mq4wvTi3KTC025KpRKC7NzU0syqxKVcgrzbVNzi/NK9HQ1FHIzcyLL7EFkhohnr6uwSGOvgEg0cQKkGhiBZIoAEq2dK9VAAAA)**\]**
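The statistics query itself is not shown in this diff. A hedged sketch of a basic statistics query over the same table (column names assumed from the surrounding examples; not part of this commit) might be:

```kusto
demo_many_series1
| summarize num=count(), min_t=min(TIMESTAMP), max_t=max(TIMESTAMP)
```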
Building a time series in 1-hour bins of the read metric (total four days * 24 hours = 96 points) results in normal pattern fluctuation:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA5WPMQvCMBSE9/6KGxOoYGfpIOjgUBDtXh7twwabFF6ittIfb2rBQSfHg+8+7joOsMZVATlC72vqSFTDtq8subHyLIZ9hgn+Zi2JefKMq/JQ7M/ltjhqvQGSbrbQ8JeFhm/LTyGZInbl1RIhTI3P6X5ROwp0ikmjd/hYYByE3IXV+1G6TEqRtTqahF3DgmAs1y1JwMOEVo0Rzdf6BbBH5FAHAQAA)**\]**

```kusto
let min_t = toscalar(demo_many_series1 | summarize min(TIMESTAMP));
let max_t = toscalar(demo_many_series1 | summarize max(TIMESTAMP));
@@ -210,6 +230,8 @@ The above behavior is misleading, since the single normal time series is aggrega
How many time series can we create?

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA0tJzc2Pz03Mq4wvTi3KTC025KpRKC7NzU0syqxKVUiqVPDJT9ZR8C/QUXBxAkol55fmlQAAWEsFxjQAAAA=)**\]**

```kusto
demo_many_series1
| summarize by Loc, Op, DB
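// The diff truncates the query here. A hedged sketch of the truncated tail (not part of this
// commit): counting the distinct Loc/Op/DB combinations gives the number of time series.
| count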
@@ -223,6 +245,8 @@ demo_many_series1
Now, we're going to create a set of 18339 time series of the read count metric. We add the `by` clause to the make-series statement, apply linear regression, and select the top two time series that had the most significant decreasing trend:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA5WPsU7DQBBE+3zFdLmTTGHSgFAKUCiQiIKIe2u5rJ0T9l3YWwcH5eO5JBIFVJSzmnmz07Gi96FWzKExOepIzIb7WPcUDnVi8ZxKHJGGvifxX3yym+pp+biu7pcv1t4Bk+5EofFfFBp/U/4EJsdse+eri4QwbdKc9q1ZkNJrVhYx4IcCHyAUWjbnRcXlpQLl1uLtgOfoCqx2BRYPGcyjctjASPoYSLhA6uKObR5waasbr3XnA5tzrc0RjTtcn0hnKyg55KtkDAvU9+y2JIpPr1ujXjueT9cse+8YlVDTeIfVoNQymiiZ5ENSCi4vM3FQxAblzWx2a6f2G2UcBRyWAQAA)**\]**

```kusto
let min_t = toscalar(demo_many_series1 | summarize min(TIMESTAMP));
let max_t = toscalar(demo_many_series1 | summarize max(TIMESTAMP));
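// The diff truncates the query here. A hedged sketch of the make-series + linear-regression
// step described above (the DataRead metric column is an assumption; not part of this commit):
// series_fit_line() returns the slope, and 'top 2 ... asc' keeps the steepest decreases.
demo_many_series1
| make-series reads=avg(DataRead) default=0 on TIMESTAMP from min_t to max_t step 1h by Loc, Op, DB
| extend (rsquare, slope) = series_fit_line(reads)
| top 2 by slope asc
| render timechart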
@@ -237,6 +261,8 @@ demo_many_series1
Display the instances:

**\[**[**Click to run query**](https://dataexplorer.azure.com/clusters/help/databases/Samples?query=H4sIAAAAAAAAA5WPvW4CMRCEe55iSlsyBWkjChApIoESAb21udsQg38O26AD8fDx3SEUJVXKWc18s2M5wxmvM6bIIVVkKYqaXdCO/EUnjobTBDekk3MUzZU7u9i+rl4229nqXcpnYGQ7CrX/olD7m/InMLoV24HHg0RkqtOUzjuxoEzroiSCx4MC4xHJ71j0i9TwksLkS+LjgmWoFN4ahcW8gLnN7GuImI4niqyQbGhYlgFDm/40WVvjWfS1skRyaPDUkXorKFXl2MSw5yr/pN9Z31SyxuhbAQAA)**\]**

```kusto
let min_t = toscalar(demo_many_series1 | summarize min(TIMESTAMP));
let max_t = toscalar(demo_many_series1 | summarize max(TIMESTAMP));