articles/azure-monitor/logs/logs-data-export.md (2 additions & 3 deletions)
@@ -39,7 +39,7 @@ Log Analytics workspace data export continuously exports data that is sent to yo
- You can define up to 10 enabled rules in your workspace. More rules are allowed when disabled.
- Destinations must be in the same region as the Log Analytics workspace.
- Storage Account must be unique across rules in workspace.
- - Tables names can be no longer than 60 characters when exporting to Storage Account and 47 characters to Event Hubs. Tables with longer names will not be exported.
+ - Table names can be up to 60 characters when exporting to a Storage Account and up to 47 characters when exporting to Event Hubs. Tables with longer names won't be exported.
- Data export isn't supported in China currently.
## Data completeness
@@ -151,8 +151,7 @@ If you have configured your Storage Account to allow access from selected networ
A data export rule defines the destination and the tables whose data is exported. You can create up to 10 rules in the 'enable' state in your workspace; more rules are allowed in the 'disable' state. The Storage Account must be unique across rules in a workspace. Multiple rules can use the same Event Hubs namespace when sending to separate Event Hubs.
> [!NOTE]
- > - You can include tables that aren't yet supported in export, and no data will be exported for these until the tables are supported.
- > - The legacy custom log won’t be supported in export. The next generation of custom log available in preview early 2022 can be exported.
+ > - You can include tables that aren't yet supported in rules, but no data will be exported for them until the tables become supported.
> - Export to Storage Account - a separate container is created in Storage Account for each table.
> - Export to Event Hubs - if Event Hubs name isn't provided, a separate Event Hubs is created for each table. The [number of supported Event Hubs in 'Basic' and 'Standard' namespaces tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When exporting more than 10 tables to these tiers, either split the tables between several export rules to different Event Hubs namespaces, or provide an Event Hubs name in the rule to export all tables to it.
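To make the rule limits above concrete, here is an illustrative sketch of an export rule that sends several tables to one named event hub, avoiding the 10-Event-Hubs limit of the 'Basic' and 'Standard' tiers. This is an ARM-style fragment under stated assumptions: the rule name, table list, and placeholder IDs are invented, and the `apiVersion` and property names should be verified against the current `Microsoft.OperationalInsights/workspaces/dataExports` REST reference before use.

```json
{
  "type": "Microsoft.OperationalInsights/workspaces/dataExports",
  "apiVersion": "2020-08-01",
  "name": "myworkspace/export-security-tables",
  "properties": {
    "destination": {
      "resourceId": "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.EventHub/namespaces/<namespace>",
      "metaData": {
        "eventHubName": "la-export"
      }
    },
    "tableNames": [ "Heartbeat", "SecurityEvent", "Syslog" ],
    "enable": true
  }
}
```

Because `eventHubName` is provided, all listed tables are sent to that single event hub instead of one event hub being created per table.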
articles/data-factory/connector-azure-blob-storage.md (5 additions & 9 deletions)
@@ -8,7 +8,7 @@ ms.service: data-factory
ms.subservice: data-movement
ms.topic: conceptual
ms.custom: synapse
- ms.date: 09/01/2022
+ ms.date: 10/23/2022
---
# Copy and transform data in Azure Blob Storage by using Azure Data Factory or Azure Synapse Analytics
@@ -684,7 +684,7 @@ In this case, all files that were sourced under `/data/sales` are moved to `/bac
**Filter by last modified:** You can filter which files you process by specifying a date range of when they were last modified. All datetimes are in UTC.
- **Enable change data capture:** If true, you will get new or changed files only from the last run. Initial load of full snapshot data will always be gotten in the first run, followed by capturing new or changed files only in next runs. For more details, see [Change data capture](#change-data-capture-preview).
+ **Enable change data capture:** If true, you get only files that are new or changed since the last run. A full snapshot is always loaded in the first run, and only new or changed files are captured in subsequent runs.
:::image type="content" source="media/data-flow/enable-change-data-capture.png" alt-text="Screenshot showing Enable change data capture.":::
@@ -847,15 +847,11 @@ To learn details about the properties, check [Delete activity](delete-activity.m
]
```
- ## Change data capture (preview)
+ ## Change data capture
- Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture (Preview)** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice.
+ Azure Data Factory can get new or changed files only from Azure Blob Storage by enabling **Enable change data capture** in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice. Refer to [Change data capture](https://learn.microsoft.com/azure/data-factory/concepts-change-data-capture) for details.
- Make sure you keep the pipeline and activity name unchanged, so that the checkpoint can always be recorded from the last run to get changes from there. If you change your pipeline name or activity name, the checkpoint will be reset, and you will start from the beginning in the next run.
-
- When you debug the pipeline, the **Enable change data capture (Preview)** works as well. Be aware that the checkpoint will be reset when you refresh your browser during the debug run. After you are satisfied with the result from debug run, you can publish and trigger the pipeline. It will always start from the beginning regardless of the previous checkpoint recorded by debug run.
-
- In the monitoring section, you always have the chance to rerun a pipeline. When you are doing so, the changes are always gotten from the checkpoint record in your selected pipeline run.
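To ground the connector option this section describes, the following data flow script fragment sketches a Blob Storage source with change data capture turned on. It is an illustrative assumption, not copied from the product: the flag name (`enableCdc` here) and the source name are invented for the example, so check the actual script that the data flow designer generates when you toggle the option.

```
source(
    allowSchemaDrift: true,
    validateSchema: false,
    enableCdc: true) ~> BlobSource
```

With the option on, `BlobSource` would emit only files added or modified since the checkpoint recorded by the previous triggered run of the same pipeline and activity.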
articles/defender-for-cloud/protect-network-resources.md (2 additions & 2 deletions)
@@ -2,7 +2,7 @@
title: Protecting your network resources in Microsoft Defender for Cloud
description: This document addresses recommendations in Microsoft Defender for Cloud that help you protect your Azure network resources and stay in compliance with security policies.
ms.topic: conceptual
- ms.date: 11/09/2021
+ ms.date: 10/23/2022
---
# Protect your network resources
@@ -35,7 +35,7 @@ To open the Network map:
1. Select **Network map**.
- :::image type="content" source="./media/protect-network-resources/opening-network-map.png" alt-text="Opening the network map from the Workload protections." lightbox="./media/protect-network-resources/opening-network-map.png":::
+ :::image type="content" source="media/protect-network-resources/workload-protection-network-map.png" alt-text="Screenshot showing selection of network map from workload protections." lightbox="media/protect-network-resources/workload-protection-network-map.png":::
1. Select the **Layers** menu and choose **Topology**.
articles/defender-for-cloud/quickstart-onboard-gcp.md (0 additions & 6 deletions)
@@ -100,12 +100,6 @@ To locate the unique numeric ID in the GCP portal, navigate to **IAM & Admin** >
1. (Optional) If you changed any of the names of any of the resources, update the names in the appropriate fields.
- 1. (**Servers/SQL only**) Select **Azure-Arc for servers onboarding**
-
-     :::image type="content" source="media/quickstart-onboard-gcp/unique-numeric-id.png" alt-text="Screenshot showing the Azure-Arc for servers onboarding section of the screen." lightbox="media/quickstart-onboard-gcp/unique-numeric-id.png":::
-
-     Enter the service account unique ID, which is generated automatically after running the GCP Cloud Shell.
articles/defender-for-cloud/upcoming-changes.md (8 additions & 2 deletions)
@@ -2,7 +2,7 @@
title: Important changes coming to Microsoft Defender for Cloud
description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan
ms.topic: overview
- ms.date: 10/20/2022
+ ms.date: 10/23/2022
---
# Important upcoming changes to Microsoft Defender for Cloud
@@ -18,7 +18,13 @@ If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change |
|--|--|
- | None | None |
+ |[Deprecation of AWS Lambda recommendation](#deprecation-of-aws-lambda-recommendation)| November 2023 |
+
+ ### Deprecation of AWS Lambda recommendation
+
+ **Estimated date for change: November 2023**
+
+ The following recommendation is set to be deprecated: [`Lambda functions should have a dead-letter queue configured`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/AwsRecommendationDetailsBlade/assessmentKey/dcf10b98-798f-4734-9afd-800916bf1e65/showSecurityCenterCommandBar~/false).
0 commit comments