articles/azure-monitor/logs/logs-data-export.md
16 additions & 16 deletions
@@ -34,7 +34,7 @@ Log Analytics workspace data export continuously exports data from a Log Analyti
- The existing custom log tables won’t be supported in export. A new custom log version available in March 2022 will be supported.
- If the data export rule includes an unsupported table, the operation will succeed, but no data will be exported for that table until the table becomes supported.
- If the data export rule includes a table that doesn't exist, it will fail with error `Table <tableName> does not exist in the workspace`.
-- You can define up to 10 enabled rules in your workspace. Additional rules are allowed when disabled.
+- You can define up to 10 enabled rules in your workspace. More rules are allowed when disabled.
- Destinations must be unique across all export rules in your workspace.
- Destinations must be in the same region as the Log Analytics workspace.
- Table names can be no longer than 60 characters when exporting to a storage account and 47 characters to an event hub. Tables with longer names won't be exported.
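The name-length limits above can be sketched with a short check. This is a hypothetical helper for illustration, not part of any Azure SDK; only the 60- and 47-character limits come from the text.

```python
# Hypothetical helper, not an Azure SDK call: it only illustrates the
# name-length limits stated above (60 chars for storage, 47 for event hub).
LIMITS = {"storage": 60, "eventhub": 47}

def exportable(table_name: str, destination: str) -> bool:
    """True if the table name fits the destination's length limit."""
    return len(table_name) <= LIMITS[destination]

# A 50-character name fits storage export but is too long for event hub.
name = "X" * 50
print(exportable(name, "storage"), exportable(name, "eventhub"))  # True False
```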
@@ -69,7 +69,7 @@ Log Analytics workspace data export continuously exports data from a Log Analyti
- West US 2
## Data completeness
-Data export will continue to retry sending data for up to 30 minutes in the event that the destination is unavailable. If it's still unavailable after 30 minutes then data will be discarded until the destination becomes available.
+Data export will continue to retry sending data for up to 30 minutes if the destination is unavailable. If it's still unavailable after 30 minutes, data will be discarded until the destination becomes available.
## Cost
Currently, there are no additional charges for the data export feature. Pricing for data export will be announced in the future and a notice period provided prior to the start of billing. If you choose to continue using data export after the notice period, you will be billed at the applicable rate.
@@ -80,7 +80,7 @@ Data export destination must be created before creating the export rule in your
### Storage account
-You need to have 'write' permissions to both workspace and destination to configure data export rule. You shouldn't use an existing storage account that has other, non-monitoring data stored in it so that you can better control access to the data and prevent reaching storage ingestion rate limit and throttling.
+You need to have 'write' permissions to both workspace and destination to configure a data export rule. Don't use an existing storage account that has other, non-monitoring data stored in it, so that you can better control access to the data and prevent reaching the storage ingestion rate limit and throttling.
To send data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Blob storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this article, including enabling protected append blob writes.
@@ -91,7 +91,7 @@ Data is sent to storage accounts as it reaches Azure Monitor and stored in hourl
> [!NOTE]
> It's recommended to use a separate storage account for proper ingress rate allocation and to reduce throttling, failures, and latency events.
-Starting 15-October 2021, blobs are stored in 5minutes folders in the following path structure: *WorkspaceResourceId=/subscriptions/subscription-id/resourcegroups/\<resource-group\>/providers/microsoft.operationalinsights/workspaces/\<workspace\>/y=\<four-digit numeric year\>/m=\<two-digit numeric month\>/d=\<two-digit numeric day\>/h=\<two-digit 24-hour clock hour\>/m=\<two-digit 60-minute clock minute\>/PT05M.json*. Since append blobs are limited to 50K writes in storage, the number of exported blobs may extend if the number of appends is high. The naming pattern for blobs in such case would be PT05M_#.json*, where # is the incremental blob count.
+Starting 15 October 2021, blobs are stored in 5-minute folders in the following path structure: *WorkspaceResourceId=/subscriptions/subscription-id/resourcegroups/\<resource-group\>/providers/microsoft.operationalinsights/workspaces/\<workspace\>/y=\<four-digit numeric year\>/m=\<two-digit numeric month\>/d=\<two-digit numeric day\>/h=\<two-digit 24-hour clock hour\>/m=\<two-digit 60-minute clock minute\>/PT05M.json*. Since append blobs are limited to 50,000 writes in storage, the number of exported blobs may increase if the number of appends is high. The naming pattern for blobs in such a case is *PT05M_#.json*, where # is the incremental blob count.
The storage account data format is in [JSON lines](../essentials/resource-logs-blob-format.md). This means that each record is delimited by a newline, with no outer records array and no commas between JSON records.
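The 5-minute folder layout and the JSON-lines blob format described above can be sketched as follows. The subscription, resource group, and workspace names are placeholders, not real resources.

```python
import datetime
import json

def blob_path(sub: str, rg: str, ws: str, t: datetime.datetime) -> str:
    """Build the 5-minute folder path described above; inputs are placeholders."""
    minute = t.minute - t.minute % 5  # folders align to 5-minute boundaries
    return (
        f"WorkspaceResourceId=/subscriptions/{sub}/resourcegroups/{rg}"
        f"/providers/microsoft.operationalinsights/workspaces/{ws}"
        f"/y={t.year:04d}/m={t.month:02d}/d={t.day:02d}"
        f"/h={t.hour:02d}/m={minute:02d}/PT05M.json"
    )

def parse_json_lines(body: str) -> list:
    """JSON lines: one record per line, no outer array, no commas between records."""
    return [json.loads(line) for line in body.splitlines() if line.strip()]

# A record written at 09:47 on 15 October 2021 lands in the m=45 folder.
print(blob_path("sub-id", "my-rg", "my-ws", datetime.datetime(2021, 10, 15, 9, 47)))
```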
@@ -140,38 +140,38 @@ If you have configured your Storage Account to allow access from selected networ
[](media/logs-data-export/storage-account-vnet.png#lightbox)
### Create or update data export rule
-Data export rule defines the tables for which data is exported and destination. You can have 10 enabled rules in your workspace, additional rules can be added, but in 'disable' state. Destinations must be unique across all export rules in workspace.
+A data export rule defines the tables for which data is exported and the destination. You can have 10 enabled rules in your workspace; more rules can be added, but in a 'disabled' state. Destinations must be unique across all export rules in the workspace.
-Data export destinations have limits and they should be monitored to minimize export throttling, failures and latency. See [storage accounts scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts) and [event hub namespace quota](../../event-hubs/event-hubs-quotas.md).
+Data export destinations have limits and should be monitored to minimize export throttling, failures, and latency. See [storage accounts scalability](../../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts) and [event hub namespace quota](../../event-hubs/event-hubs-quotas.md).
-#### Recommendations for storage account
+#### Monitoring storage account
1. Use a separate storage account for export
1. Configure alert on the metric below with the following settings:
-| storage-name | Account | Ingress | Sum | 80% of max storage ingress rate. For example: it's 60Gbps for general-purpose v2 in West US |
+| storage-name | Account | Ingress | Sum | 80% of max ingress per alert evaluation period. For example, the limit is 60 Gbps for general-purpose v2 in West US; the threshold is 14,400 Gb per 5-minute evaluation period. |
1. Alert remediation actions
- Use a separate storage account for export
- Azure Storage standard accounts support a higher ingress limit by request. To request an increase, contact [Azure Support](https://azure.microsoft.com/support/faq/)
- Split tables between additional storage accounts
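The storage alert threshold above is simple arithmetic: 80% of the destination's maximum ingress, accumulated over one alert evaluation period. A minimal sketch, assuming a 5-minute (300-second) evaluation period and the 60 Gbps general-purpose v2 example limit from the table:

```python
def ingress_threshold(limit_per_second: float, period_seconds: int = 300,
                      fraction: float = 0.8) -> float:
    """80% of the destination's max ingress over one alert evaluation period."""
    return limit_per_second * period_seconds * fraction

# 60 Gbps limit over a 5-minute evaluation period -> 14400.0 Gb threshold
print(ingress_threshold(60))
```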
-| namespaces-name | Event Hub standard metrics | Incoming bytes | Sum | 80% of max ingress per 5 minutes. For example, it's 1MB/s per unit (TU or PU) |
-| namespaces-name | Event Hub standard metrics | Incoming requests | Count | 80% of max events per 5 minutes. For example, it's 1000/s per unit (TU or PU) |
-| namespaces-name | Event Hub standard metrics | Quota Exceeded Errors | Count | Between 1% to 5% of request|
+| namespaces-name | Event Hub standard metrics | Incoming bytes | Sum | 80% of max ingress per alert evaluation period. For example, the limit is 1 MB/s per unit (TU or PU) and 5 units are used; the threshold is 1,200 MB per 5-minute evaluation period. |
+| namespaces-name | Event Hub standard metrics | Incoming requests | Count | 80% of max events per alert evaluation period. For example, the limit is 1,000/s per unit (TU or PU) and 5 units are used; the threshold is 1,200,000 per 5-minute evaluation period. |
+| namespaces-name | Event Hub standard metrics | Quota Exceeded Errors | Count | Between 1% and 5% of requests. For example, at 600,000 requests per 5 minutes, the threshold is 6,000 per 5-minute evaluation period. |
1. Alert remediation actions
- Configure [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) feature to automatically scale up and increase the number of throughput units to meet usage needs
- Verify the increase of throughput units to accommodate the load
-- Split tables between additional namespaces
+- Split tables between other namespaces
- Use 'Premium' or 'Dedicated' tiers for higher throughput
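The event hub thresholds in the table above follow the same pattern. A sketch of the arithmetic, assuming 5 throughput units and a 5-minute evaluation period as in the table's examples:

```python
UNITS = 5        # throughput units (TU or PU), as in the table's example
PERIOD = 300     # 5-minute alert evaluation period, in seconds
FRACTION = 0.8   # alert at 80% of the limit

incoming_bytes_mb = 1 * UNITS * PERIOD * FRACTION     # 1 MB/s per unit -> 1200.0 MB
incoming_requests = 1000 * UNITS * PERIOD * FRACTION  # 1000 events/s per unit -> 1200000.0
quota_exceeded = 0.01 * 600_000                       # 1% of example's 600,000 requests -> 6000.0
print(incoming_bytes_mb, incoming_requests, quota_exceeded)
```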
The export rule should include tables that you have in your workspace. Run this query for a list of available tables in your workspace.
@@ -629,7 +629,7 @@ If the data export rule includes a table that doesn't exist, it will fail with t
## Supported tables
-Supported tables are currently limited to those specified below. All data from the table will be exported unless limitations are specified. This list is updated as support for additional tables added.
+Supported tables are currently limited to those specified below. All data from the table will be exported unless limitations are specified. This list is updated as more tables are added.