articles/storage/blobs/immutable-container-level-worm-policies.md (+1, -1)
@@ -50,7 +50,7 @@ Append blobs are composed of blocks of data and optimized for data append operat
The **allowProtectedAppendWrites** property setting allows for writing new blocks to an append blob while maintaining immutability protection and compliance. If this setting is enabled, you can create an append blob directly in the policy-protected container and then continue to add new blocks of data to the end of the append blob with the Append Block operation. Only new blocks can be added; any existing blocks can't be modified or deleted. Enabling this setting doesn't affect the immutability behavior of block blobs or page blobs.
- The **AllowProtectedAppendWritesAll** property setting provides the same permissions as the **allowProtectedAppendWrites** property and adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs.
+ The **AllowProtectedAppendWritesAll** property setting provides the same permissions as the **allowProtectedAppendWrites** property and adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs.
Append blobs remain in the immutable state during the effective retention period. Since new data can be appended beyond the initial creation of the append blob, there's a slight difference in how the retention period is determined: the effective retention period is calculated by adding the user-specified retention interval to the append blob's last modification time. Similarly, when the retention interval is extended, immutable storage uses the most recent value of the user-specified retention interval to calculate the effective retention period.
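To make these two settings concrete, here's a minimal sketch using the Python SDKs (`azure-storage-blob` and `azure-storage-file-datalake`). The connection string, container, and blob names are hypothetical placeholders, and it assumes a policy-protected container with the relevant append setting already enabled:

```python
# pip install azure-storage-blob azure-storage-file-datalake
from azure.storage.blob import BlobServiceClient
from azure.storage.filedatalake import DataLakeServiceClient

conn = "<storage-connection-string>"  # hypothetical placeholder

# allowProtectedAppendWrites: create an append blob in the policy-protected
# container, then add new blocks to the end with the Append Block operation.
blob_service = BlobServiceClient.from_connection_string(conn)
append_blob = blob_service.get_blob_client("worm-container", "audit.log")
append_blob.create_append_blob()
append_blob.append_block(b"first entry\n")   # appends to the end only;
append_blob.append_block(b"second entry\n")  # existing blocks can't change

# AllowProtectedAppendWritesAll: append to a *block* blob indirectly, via the
# Data Lake Storage API's append and flush methods (requires an account with
# a hierarchical namespace).
dl_service = DataLakeServiceClient.from_connection_string(conn)
file_client = dl_service.get_file_system_client("worm-container").get_file_client("data.csv")
file_client.create_file()
chunk = b"row1,value1\n"
file_client.append_data(chunk, offset=0, length=len(chunk))
file_client.flush_data(len(chunk))  # flush commits the appended bytes
```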
articles/storage/blobs/immutable-policy-configure-container-scope.md (+4, -4)
@@ -45,7 +45,7 @@ To configure a time-based retention policy on a container with the Azure portal,
The **Append blobs** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation.
- The **Block and append blobs** option provides you with the same permissions as the **Append blobs** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob.
+ The **Block and append blobs** option provides you with the same permissions as the **Append blobs** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob.
To learn more about these options, see [Allow protected append blobs writes](immutable-container-level-worm-policies.md#allow-protected-append-blobs-writes).
@@ -70,7 +70,7 @@ To allow protected append writes, set the `-AllowProtectedAppendWrite` or `-All
The **AllowProtectedAppendWrite** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation.
- The **AllowProtectedAppendWriteAll** option provides you with the same permissions as the **AllowProtectedAppendWrite** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob.
+ The **AllowProtectedAppendWriteAll** option provides you with the same permissions as the **AllowProtectedAppendWrite** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob.
To learn more about these options, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
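As a rough code-level parallel to these PowerShell parameters (not taken from this article), the following sketch uses the Python management SDK `azure-mgmt-storage`. The resource names are hypothetical, and the snake_case parameter names assume an SDK version recent enough to expose `allow_protected_append_writes_all`:

```python
# pip install azure-mgmt-storage azure-identity
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create or update an unlocked time-based retention policy on the container,
# allowing protected append writes to both block and append blobs.
policy = client.blob_containers.create_or_update_immutability_policy(
    resource_group_name="my-rg",        # hypothetical resource names
    account_name="mystorageacct",
    container_name="worm-container",
    parameters={
        "immutability_period_since_creation_in_days": 180,
        "allow_protected_append_writes_all": True,
    },
)
print(policy.state)  # remains "Unlocked" until the policy is explicitly locked
```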
@@ -90,7 +90,7 @@ To allow protected append writes, set the `--allow-protected-append-writes` or
The **--allow-protected-append-writes** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation.
- The **--allow-protected-append-writes-all** option provides you with the same permissions as the **--allow-protected-append-writes** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob.
+ The **--allow-protected-append-writes-all** option provides you with the same permissions as the **--allow-protected-append-writes** option but adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, some Microsoft applications use internal APIs to create block blobs and then append to them. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append blocks to a block blob.
To learn more about these options, see [Allow protected append blobs writes](immutable-time-based-retention-policy-overview.md#allow-protected-append-blobs-writes).
@@ -240,7 +240,7 @@ To configure a legal hold on a container with the Azure portal, follow these ste
The **Append blobs** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation.
- This setting also adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs.
+ This setting also adds the ability to write new blocks to a block blob. The Blob Storage API doesn't provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs.
To learn more about these options, see [Allow protected append blobs writes](immutable-legal-hold-overview.md#allow-protected-append-blobs-writes).
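For illustration only, here's a similar hedged sketch of configuring a legal hold with protected append writes through `azure-mgmt-storage`, assuming the `LegalHold` payload accepts `allow_protected_append_writes_all` (all resource names hypothetical):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Place a legal hold with one or more tags; the extra flag permits protected
# append writes while the hold is in effect.
hold = client.blob_containers.set_legal_hold(
    resource_group_name="my-rg",        # hypothetical resource names
    account_name="mystorageacct",
    container_name="worm-container",
    legal_hold={
        "tags": ["case-001"],
        "allow_protected_append_writes_all": True,
    },
)
print(hold.tags)
```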
articles/storage/blobs/immutable-policy-configure-version-scope.md (+1, -1)
@@ -287,7 +287,7 @@ To configure a default version-level immutability policy for a container in the
The **Append blobs** option enables your workloads to add new blocks of data to the end of an append blob by using the [Append Block](/rest/api/storageservices/append-block) operation.
- The **Block and append blobs** option extends this support by adding the ability to write new blocks to a block blob. The Blob Storage API does not provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage Gen2 API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs.
+ The **Block and append blobs** option extends this support by adding the ability to write new blocks to a block blob. The Blob Storage API does not provide a way for applications to do this directly. However, applications can accomplish this by using append and flush methods that are available in the Data Lake Storage API. Also, this property enables Microsoft applications such as Azure Data Factory to append blocks of data by using internal APIs. If your workloads depend on any of these tools, then you can use this property to avoid errors that can appear when those tools attempt to append data to blobs.
To learn more about these options, see [Allow protected append blobs writes](immutable-container-level-worm-policies.md#allow-protected-append-blobs-writes).
articles/storage/blobs/immutable-storage-overview.md (+2, -2)
@@ -118,7 +118,7 @@ The following table shows a breakdown of the differences between container-level
| Feature dependencies | No other features are a prerequisite or requirement for this feature to function. | Versioning is a prerequisite for this feature to be used. |
| Enablement for existing accounts/containers | This feature can be enabled at any time for existing containers. | Depending on the level of granularity, this feature might not be enabled for all existing accounts/containers. |
| Account/container deletion | Once a time-based retention policy is locked on a container, the container may only be deleted if it's empty. | Once version-level WORM is enabled at the account or container level, the account or container may only be deleted if it's empty. |
- | Support for Azure Data Lake Storage Gen2 (storage accounts that have a hierarchical namespace enabled)| Container-level WORM policies are supported in accounts that have a hierarchical namespace. | Version-level WORM policies are not yet supported in accounts that have a hierarchical namespace. |
+ | Support for Azure Data Lake Storage (storage accounts that have a hierarchical namespace enabled)| Container-level WORM policies are supported in accounts that have a hierarchical namespace. | Version-level WORM policies are not yet supported in accounts that have a hierarchical namespace. |
To learn more about container-level WORM, see [Container-level WORM policies](immutable-container-level-worm-policies.md). To learn more about version-level WORM, see [Version-level WORM policies](immutable-version-level-worm-policies.md).
@@ -131,7 +131,7 @@ The following table helps you decide which type of WORM policy to use.
| Organization of data | You want to set policies for specific data sets, which can be categorized by container. All the data in that container needs to be kept in a WORM state for the same amount of time. | You can't group objects by retention periods. All blobs must be stored with an individual retention time based on that blob's scenario, or you have a mixed workload in which some groups of data can be clustered into containers while other blobs can't. You might also want to set container-level policies and blob-level policies within the same account. |
| Amount of data that requires an immutable policy | You don't need to set policies on more than 10,000 containers per account. | You want to set policies on all data, or on large amounts of data that can be delineated by account. You know that using container-level WORM would force you to exceed the 10,000-container limit. |
| Interest in enabling versioning | You don't want to deal with enabling versioning, either because of the cost or because the workload would create numerous extra versions to deal with. | You either want to use versioning or don't mind using it. You know that if you don't enable versioning, you can't keep edits or overwrites to immutable blobs as separate versions. |
- | Storage location (Blob Storage vs Data Lake Storage Gen2) | Your workload is entirely focused on Azure Data Lake Storage Gen2. You have no immediate interest or plan to switch to using an account that doesn't have the hierarchical namespace feature enabled. | Your workload is either on Blob Storage in an account that doesn't have the hierarchical namespace feature enabled, and can use version-level WORM now, or you're willing to wait for versioning to be available for accounts that do have a hierarchical namespace enabled (Azure Data Lake Storage Gen2).|
+ | Storage location (Blob Storage vs Data Lake Storage) | Your workload is entirely focused on Azure Data Lake Storage. You have no immediate interest or plan to switch to using an account that doesn't have the hierarchical namespace feature enabled. | Your workload is on Blob Storage in an account that doesn't have the hierarchical namespace feature enabled and can use version-level WORM now, or you're willing to wait for versioning to become available for accounts that have a hierarchical namespace enabled (Azure Data Lake Storage).|
articles/storage/blobs/migrate-gen2-wandisco-live-data-platform.md (+4, -4)
@@ -1,7 +1,7 @@
---
title: Data Lake Storage and WANdisco LiveData Platform for Azure
titleSuffix: Azure Storage
- description: Learn how to migrate petabytes of on-premises Hadoop data to Azure Data Lake Storage Gen2 file systems without interrupting data operations or requiring downtime.
+ description: Learn how to migrate petabytes of on-premises Hadoop data to Azure Data Lake Storage file systems without interrupting data operations or requiring downtime.
- # Migrate on-premises Hadoop data to Azure Data Lake Storage Gen2 with WANdisco LiveData Platform for Azure
+ # Migrate on-premises Hadoop data to Azure Data Lake Storage with WANdisco LiveData Platform for Azure
- [WANdisco LiveData Platform for Azure](https://docs.wandisco.com/live-data-platform/docs/landing/) migrates petabytes of on-premises Hadoop data to Azure Data Lake Storage Gen2 file systems without interrupting data operations or requiring downtime. The platform's continuous checks prevent data from being lost while keeping it consistent at both ends of transference even while it undergoes modification.
+ [WANdisco LiveData Platform for Azure](https://docs.wandisco.com/live-data-platform/docs/landing/) migrates petabytes of on-premises Hadoop data to Azure Data Lake Storage file systems without interrupting data operations or requiring downtime. The platform's continuous checks prevent data loss and keep the data consistent at both ends of the transfer, even while it's being modified.
The platform consists of two services. [LiveData Migrator for Azure](https://cirata.com/products/data-integration) migrates actively used data from on-premises environments to Azure storage, and [LiveData Plane for Azure](https://cirata.com/products/data-integration) ensures that all modified or ingested data is replicated consistently.
20
20
@@ -93,7 +93,7 @@ To perform a migration:
3. Configure Kerberos details, if applicable.
- 4. Define the target Azure Data Lake Storage Gen2-enabled storage account.
+ 4. Define the target Azure Data Lake Storage-enabled storage account.
> [!div class="mx-imgBorder"]
> 
For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Azure Blob Storage, see [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md#resource-logs).
> [!NOTE]
- > Data Lake Storage Gen2 doesn't appear as a storage type because Data Lake Storage Gen2 is a set of capabilities available to Blob storage.
+ > Data Lake Storage doesn't appear as a storage type because Data Lake Storage is a set of capabilities available to Blob storage.
articles/storage/blobs/network-file-system-protocol-known-issues.md (+1, -1)
@@ -15,7 +15,7 @@ ms.author: normesta
This article describes limitations and known issues of Network File System (NFS) 3.0 protocol support for Azure Blob Storage.
> [!IMPORTANT]
- > Because you must enable the hierarchical namespace feature of your account to use NFS 3.0, all of the known issues that are described in the [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md) article also apply to your account.
+ > Because you must enable the hierarchical namespace feature of your account to use NFS 3.0, all of the known issues that are described in the [Known issues with Azure Data Lake Storage](data-lake-storage-known-issues.md) article also apply to your account.