ms.workload: big-data
ms.service: time-series-insights
services: time-series-insights
ms.topic: conceptual
ms.date: 02/07/2020
ms.custom: seodec18
---
# Data storage and ingress in Azure Time Series Insights Preview
This article describes updates to data storage and ingress for Azure Time Series Insights Preview. It describes the underlying storage structure, file format, and Time Series ID property. It also discusses the underlying ingress process, best practices, and current preview limitations.
## Data ingress
Your Azure Time Series Insights environment contains an *ingestion engine* to collect, process, and store time-series data.
There are some considerations to take into account to ensure all incoming data is processed, to achieve high ingress scale, and minimize ingestion latency (the time taken by Time Series Insights to read and process data from the event source) when [planning your environment](time-series-insights-update-plan.md).
Time Series Insights Preview data ingress policies determine where data can be sourced from and what format the data should have.
### Ingress policies
Data ingress involves how data is sent to an Azure Time Series Insights Preview environment.
Key configuration, formatting, and best practices are summarized below.
#### Event Sources
Azure Time Series Insights Preview supports the following event sources:

* Azure IoT Hub
* Azure Event Hubs

Azure Time Series Insights Preview supports a maximum of two event sources per instance.
> [!IMPORTANT]
> * You may experience high initial latency when attaching an event source to your Preview environment.
> Event source latency depends on the number of events currently in your IoT Hub or Event Hub.
> * High latency will subside after event source data is first ingested. Contact us by submitting a support ticket through the Azure portal if you experience continued high latency.
#### Supported data format and types
Azure Time Series Insights supports UTF-8 encoded JSON sent from Azure IoT Hub or Azure Event Hubs.
Below is the list of supported data types.
| Data type | Description |
|---|---|
|**bool**|A data type having one of two states: `true` or `false`.|
|**dateTime**|Represents an instant in time, typically expressed as a date and time of day. Expressed in [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html) format.|
|**double**|A double-precision 64-bit IEEE 754 floating point number.|
|**string**|Text values, comprised of Unicode characters.|
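
Because `dateTime` values must be in ISO 8601 format, event producers need to serialize timestamps accordingly. The sketch below shows one way to do that in Python; the device ID and property names are illustrative, not a required schema.

```python
from datetime import datetime, timezone

# Build an event payload whose timestamp is ISO 8601 formatted, as required
# for dateTime values. Property names here are illustrative only.
event = {
    "deviceId": "sensor-001",
    "timestamp": datetime(2020, 2, 7, 12, 30, 0, tzinfo=timezone.utc).isoformat(),
    "temperature": 21.5,
}

print(event["timestamp"])  # 2020-02-07T12:30:00+00:00
```

Using a timezone-aware UTC timestamp avoids ambiguity when events from devices in different regions are queried together.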
#### Objects and arrays
You can send complex types such as objects and arrays as part of your event payload, but your data will undergo a flattening process when stored.
Detailed information about how to shape your JSON events, how to send complex types, and nested object flattening is available in [How to shape JSON for ingress and query](./time-series-insights-update-how-to-shape-events.md).
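
To give an intuition for flattening, here is a simplified sketch of how a nested object might be collapsed into single-level properties. The underscore separator and the recursive approach are illustrative assumptions; see the linked article for the exact rules Time Series Insights applies.

```python
# Simplified sketch of nested-object flattening: nested JSON properties are
# combined into single top-level keys. The underscore separator is an
# illustrative choice, not necessarily the service's actual naming scheme.
def flatten(event, prefix=""):
    flat = {}
    for key, value in event.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{name}_"))
        else:
            flat[name] = value
    return flat

event = {"deviceId": "sensor-001", "data": {"temperature": 21.5, "humidity": 60}}
print(flatten(event))
# {'deviceId': 'sensor-001', 'data_temperature': 21.5, 'data_humidity': 60}
```

Keeping payloads shallow reduces the amount of flattening the ingestion engine must do and keeps resulting property names short.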
### Ingress best practices
We recommend that you employ the following best practices:
* Configure Azure Time Series Insights and any IoT Hub or Event Hub in the same region to reduce potential latency.
* [Plan for your scale needs](time-series-insights-update-plan.md) by calculating your anticipated ingestion rate and verifying that it falls within the supported rate listed below.
* Understand how to optimize and shape your JSON data, as well as the current limitations in preview, by reading [how to shape JSON for ingress and query](./time-series-insights-update-how-to-shape-events.md).
### Ingress scale and Preview limitations
Azure Time Series Insights Preview ingress limitations are described below.
#### Per environment limitations
In general, ingress rates are determined by the number of devices in your organization, the event emission frequency, and the size of each event:
**Number of devices** × **Event emission frequency** × **Size of each event**
By default, Time Series Insights preview can ingest incoming data at a rate of **up to 1 megabyte per second (MBps) per Time Series Insights environment**.
> [!TIP]
> * Environment support for ingestion speeds up to 16 MBps can be provided by request.
> * If you require higher throughput, contact us by submitting a support ticket through the Azure portal.
**Example 1:**
Contoso Shipping has 100,000 devices that emit an event three times per minute. The size of an event is 200 bytes. They’re using an Event Hub with four partitions as the Time Series Insights event source.
* The ingestion rate for their Time Series Insights environment would be: 100,000 devices * 200 bytes/event * (3/60 event/sec) = 1 MBps.
* The ingestion rate per partition would be 0.25 MBps.
* Contoso Shipping’s ingestion rate would be within the preview scale limitation.
**Example 2:**
Contoso Fleet Analytics has 20,000 devices that emit an event every second. They're using an IoT Hub with a partition count of four as the Time Series Insights event source. The size of an event is 200 bytes.
* The environment ingestion rate would be: 20,000 devices * 200 bytes/event * 1 event/sec = 4 MBps.
* The per partition rate would be 1 MBps.
* Contoso Fleet Analytics would need to submit a request to Time Series Insights via the Azure portal for a dedicated environment to achieve this scale.
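
The arithmetic in the two examples above can be sketched as a small helper. The 200-byte event size and partition counts come from the examples; the function names are illustrative.

```python
# Sketch of the ingestion-rate arithmetic from the examples above.
MBPS = 1_000_000  # bytes per second in one megabyte per second

def environment_rate_mbps(devices, bytes_per_event, events_per_sec):
    # Number of devices x event size x emission frequency
    return devices * bytes_per_event * events_per_sec / MBPS

def partition_rate_mbps(env_rate_mbps, partition_count):
    # Assumes events are spread evenly across hub partitions
    return env_rate_mbps / partition_count

# Example 1: Contoso Shipping - 100,000 devices, 3 events/minute, 4 partitions
rate1 = environment_rate_mbps(100_000, 200, 3 / 60)
print(rate1, partition_rate_mbps(rate1, 4))  # 1.0 0.25

# Example 2: Contoso Fleet Analytics - 20,000 devices, 1 event/second, 4 partitions
rate2 = environment_rate_mbps(20_000, 200, 1)
print(rate2, partition_rate_mbps(rate2, 4))  # 4.0 1.0
```

Comparing the results against the 1 MBps default environment limit shows why Example 1 fits the preview scale while Example 2 requires a dedicated environment.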
#### Hub partitions and per partition limits
When planning your Time Series Insights environment, it's important to consider the configuration of the event source(s) that you'll be connecting to Time Series Insights. Both Azure IoT Hub and Event Hubs utilize partitions to enable horizontal scale for event processing.
A *partition* is an ordered sequence of events held in a hub. The partition count is set during the hub creation phase and cannot be changed.
113
+
114
+
For Event Hubs partitioning best practices, review [How many partitions do I need?](../event-hubs/event-hubs-faq.md#how-many-partitions-do-i-need).
> [!NOTE]
> Most IoT Hubs used with Azure Time Series Insights only need four partitions.
Whether you're creating a new hub for your Time Series Insights environment or using an existing one, you'll need to calculate your per partition ingestion rate to determine if it's within the preview limits.
Azure Time Series Insights Preview currently has a general **per partition limit of 0.5 MBps**.
#### IoT Hub-specific considerations
When a device is created in IoT Hub, it is permanently assigned to a partition. In doing so, IoT Hub is able to guarantee event ordering (since the assignment never changes).
This has implications for Time Series Insights instances that are ingesting data sent from IoT Hub downstream.
When messages from multiple devices are forwarded to the hub using the same gateway device ID, they may arrive in the same partition at the same time, potentially exceeding the per partition scale limits.
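
A simplified model makes this concrete: if partition assignment is a stable function of the device ID, every event carrying the same gateway device ID lands in the same partition. The modulo hash below is an illustrative stand-in; IoT Hub's actual assignment is an internal implementation detail.

```python
# Toy model of partition assignment. A stable hash of the device ID means a
# single gateway device ID funnels all of its traffic into one partition.
PARTITION_COUNT = 4

def assign_partition(device_id: str) -> int:
    # Illustrative stable hash, not IoT Hub's real algorithm
    return sum(device_id.encode()) % PARTITION_COUNT

# 1,000 events from leaf devices, all forwarded under one gateway device ID:
events = ["gateway-7"] * 1000
partitions = {assign_partition(d) for d in events}
print(len(partitions))  # 1 -- every event lands in the same partition
```

With all gateway traffic concentrated on one partition, that partition's ingestion rate can exceed the 0.5 MBps per partition limit even when the environment-wide rate is well under 1 MBps.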
**Impact**:
* If a single partition experiences a sustained rate of ingestion over the Preview limit, there is the potential that the Time Series Insights reader will not ever catch up before the IoT Hub data retention period has been exceeded. This would cause a loss of data.
We recommend the following:
* Calculate your per environment and per partition ingestion rates before deploying your solution.
* Ensure that your IoT Hub devices (and thus partitions) are load-balanced to the greatest extent possible.
> [!IMPORTANT]
> For environments using IoT Hub as an event source, calculate the ingestion rate using the number of hub devices in use to be sure that the rate falls below the 0.5 MBps per partition limitation in preview.
> * Even if several events arrive simultaneously, the Preview limit will not be exceeded.

## Data storage

When you create a Time Series Insights Preview pay-as-you-go SKU environment, you create two Azure resources:
* An Azure Time Series Insights Preview environment that can be configured for warm storage.
* An Azure Storage general-purpose V1 blob account for cold data storage.
Data in your warm store is available only via [Time Series Query](./time-series-insights-update-tsq.md) and the [Azure Time Series Insights Preview explorer](./time-series-insights-update-explorer.md).

Time Series Insights Preview saves your cold store data to Azure Blob storage in the Parquet file format.
### Data availability
Azure Time Series Insights Preview partitions and indexes data for optimum query performance. Data becomes available to query after it’s indexed. The amount of data that's being ingested can affect this availability.
> [!IMPORTANT]
> During the preview, you might experience a period of up to 60 seconds before data becomes available. If you experience significant latency beyond 60 seconds, please submit a support ticket through the Azure portal.

For a thorough description of Azure Blob storage, read [Storage blobs introduction](../storage/blobs/storage-blobs-introduction.md).
### Your storage account
When you create an Azure Time Series Insights Preview *pay-as-you-go* (PAYG) environment, an Azure Storage general-purpose V1 blob account is created as your long-term cold store.
Azure Time Series Insights Preview publishes up to two copies of each event in your Azure Storage account. The initial copy has events ordered by ingestion time. That event order is **always preserved** so other services can access your events without sequencing issues.
> [!NOTE]
> You can also use Spark, Hadoop, and other familiar tools to process the raw Parquet files.
Time Series Insights Preview also repartitions the Parquet files to optimize for Time Series Insights query performance. This repartitioned copy of the data is also saved. During public preview, data is stored indefinitely in your Azure Storage account.
#### Writing and editing Time Series Insights blobs