
Commit 2d45fbf

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-stack-docs-pr into azure-arc-vm-management
2 parents: 294763b + 1e5c225

40 files changed, with 533 additions and 647 deletions

azure-local/assurance/azure-stack-fedramp-guidance.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: FedRAMP guidance for Azure Local
 description: Learn about FedRAMP compliance using Azure Local.
 ms.date: 12/27/2024
 ms.topic: conceptual
-ms.service: azure-stack-hci
+ms.service: azure-local
 ms.author: nguyenhung
 author: dv00000
 ms.reviewer: alkohli

azure-local/assurance/azure-stack-hipaa-guidance.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: HIPAA guidance for Azure Local
 description: Learn about HIPAA compliance using Azure Local.
 ms.date: 12/27/2024
 ms.topic: conceptual
-ms.service: azure-stack-hci
+ms.service: azure-local
 ms.author: nguyenhung
 author: dv00000
 ms.reviewer: alkohli

azure-local/assurance/azure-stack-iso27001-guidance.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: ISO 27001 guidance for Azure Local
 description: Learn about ISO 27001 compliance using Azure Local.
 ms.date: 12/27/2024
 ms.topic: conceptual
-ms.service: azure-stack-hci
+ms.service: azure-local
 ms.author: nguyenhung
 author: dv00000
 ms.reviewer: alkohli

azure-local/assurance/azure-stack-pci-dss-guidance.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: PCI DSS guidance for Azure Local
 description: Learn about PCI DSS compliance using Azure Local.
 ms.date: 12/27/2024
 ms.topic: conceptual
-ms.service: azure-stack-hci
+ms.service: azure-local
 ms.author: nguyenhung
 author: dv00000
 ms.reviewer: alkohli

azure-local/assurance/azure-stack-security-standards.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@ title: Azure Local and security standards
 description: Learn about Azure Local, security standards, and security assurance.
 ms.date: 12/27/2024
 ms.topic: conceptual
-ms.service: azure-stack-hci
+ms.service: azure-local
 ms.author: nguyenhung
 author: dv00000
 ms.reviewer: alkohli

azure-local/manage/refs-deduplication-and-compression.md

Lines changed: 8 additions & 5 deletions
@@ -4,7 +4,7 @@ description: Learn how to use ReFS deduplication and compression in Azure Local
 author: alkohli
 ms.author: alkohli
 ms.topic: how-to
-ms.date: 12/10/2024
+ms.date: 01/16/2025
 ---
 
 # Optimize storage with ReFS deduplication and compression in Azure Local
@@ -15,7 +15,7 @@ This article describes the Resilient File System (ReFS) deduplication and compre
 
 ## What is ReFS deduplication and compression?
 
-ReFS deduplication and compression is a storage optimization feature designed specifically for active workloads, such as [Azure virtual desktop infrastructure (VDI) on Azure Local](../deploy/virtual-desktop-infrastructure.md). This feature helps optimize storage usage and reduce storage cost.
+ReFS deduplication and compression is a storage optimization feature that helps optimize storage usage and reduce storage cost. Use deduplication specifically for active, performance-sensitive, or read-heavy workloads, such as [Azure virtual desktop infrastructure (VDI) on Azure Local](../deploy/virtual-desktop-infrastructure.md). For less performance-intensive workloads, you can use a combination of deduplication and compression or only compression.
 
 This feature uses [ReFS block cloning](/windows-server/storage/refs/block-cloning) to reduce data movement and enable metadata only operations. The feature operates at the data block level and uses fixed block size depending on the system size. The compression engine generates a heatmap to identify if a block should be eligible for compression, optimizing for CPU usage.
@@ -26,7 +26,7 @@ You can run ReFS deduplication and compression as a one-time job or automate it
 Here are the benefits of using ReFS deduplication and compression:
 
 - **Storage savings for active workloads.** Designed for active workloads, such as VDI, ensuring efficient performance in demanding environments.
-- **Multiple modes.** Operates in three modes: deduplication only, compression only, and deduplication and compression (default mode), allowing optimization based on your needs.
+- **Multiple modes.** Operates in three modes: deduplication only (default mode), compression only, and deduplication and compression, allowing optimization based on your needs.
 - **Incremental deduplication.** Deduplicates only new or changed data as opposed to scanning the entire volume every time, optimizing job duration and reducing impact on system performance.
 
 ## Prerequisites
@@ -41,6 +41,9 @@ Before you begin, make sure that the following prerequisites are completed:
 
 You can use ReFS deduplication and compression via Windows Admin Center or PowerShell. PowerShell allows both manual and automated jobs, whereas Windows Admin Center supports only scheduled jobs. Regardless of the method, you can customize job settings and utilize file change tracking for quicker subsequent runs.
 
+> [!NOTE]
+> We recommend using only deduplication for workloads where performance is a consideration, rather than using compression or a combination of both.
+
 ### Enable and run ReFS deduplication and compression
 
 # [Windows Admin Center](#tab/windowsadmincenter)
@@ -97,8 +100,8 @@ Follow these steps to enable ReFS deduplication and compression via PowerShell:
 
 where:
 `Type` is a required parameter and can take one of the following values:
-- **Dedup**: Enables deduplication only.
-- **DedupAndCompress**: Enables both deduplication and compression. This is the default option.
+- **Dedup**: Enables deduplication only. This is the default option.
+- **DedupAndCompress**: Enables both deduplication and compression.
 - **Compress**: Enables compression only.
 
 If you want to change the `Type` parameter, you must first [disable ReFS deduplication and compression](#disable-refs-deduplication-and-compression-on-a-volume) and then enable it again with the new `Type` parameter.
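
Since `Dedup` is now the default and switching modes requires a disable/re-enable cycle, the flow looks like this — a minimal PowerShell sketch, assuming the `Disable-ReFSDedup`, `Enable-ReFSDedup`, and `Start-ReFSDedupJob` cmdlets covered in this article, with a placeholder volume path:

```powershell
# Placeholder path; substitute your cluster shared volume.
$volume = "C:\ClusterStorage\UserStorage_1"

# Changing Type requires disabling the feature on the volume first.
Disable-ReFSDedup -Volume $volume

# Re-enable with the new mode: Dedup (default), DedupAndCompress, or Compress.
Enable-ReFSDedup -Volume $volume -Type Dedup

# Optionally run a one-time job right away instead of waiting for a schedule.
Start-ReFSDedupJob -Volume $volume
```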

azure-managed-lustre/amlfs-overview.md

Lines changed: 5 additions & 5 deletions
@@ -4,7 +4,7 @@ description: Use Azure Managed Lustre to quickly create an Azure-based Lustre fi
 ms.topic: overview
 author: pauljewellmsft
 ms.author: pauljewell
-ms.date: 11/11/2024
+ms.date: 01/17/2025
 ms.reviewer: mayabishop
 ms.custom: references_regions
 
@@ -29,7 +29,7 @@ You can also use your Azure Managed Lustre file system with your Azure Kubernete
 
 All data stored in Azure is encrypted at rest using Azure managed keys by default. If you want to manage the keys used to encrypt the data stored in your Azure Managed Lustre cluster, follow the instructions in [Server-side encryption of Azure disk storage](/azure/virtual-machines/disk-encryption).
 
-All information in an Azure Managed Lustre file system also is protected by VM host encryption on the managed disks that hold your data, even if you add a customer-managed key for the Lustre disks. Adding a customer-managed key gives an extra level of security for customers with high security needs. For more information, see [Server-side encryption of Azure disk storage](/azure/virtual-machines/disk-encryption).
+All information in an Azure Managed Lustre file system is protected by virtual machine (VM) host encryption on the managed disks that hold your data, even if you add a customer-managed key for the Lustre disks. Adding a customer-managed key gives an extra level of security for customers with high security needs. For more information, see [Server-side encryption of Azure disk storage](/azure/virtual-machines/disk-encryption).
 
 > [!NOTE]
 > Azure Managed Lustre doesn't store customer data outside the region in which you deploy the service instance.
@@ -58,9 +58,9 @@ If you want to use an Azure Managed Lustre storage system with your Kubernetes c
 
 Kubernetes can simplify configuring and deploying virtual client endpoints for your Azure Managed Lustre workload, automating setup tasks such as:
 
-* Creating Azure Virtual Machine Scale Sets used by Azure Kubernetes Service (AKS) to run the pods.
-* Loading the correct Lustre client software on VM instances.
-* Specifying the Azure Managed Lustre mount point, and propagating that information to the client pods.
+- Creating Azure Virtual Machine Scale Sets used by Azure Kubernetes Service (AKS) to run the pods.
+- Loading the correct Lustre client software on VM instances.
+- Specifying the Azure Managed Lustre mount point, and propagating that information to the client pods.
 
 The Azure Lustre CSI driver for Kubernetes can automate installing the client software and mounting drives. The driver provides a CSI controller plugin as a deployment with two replicas by default, and a CSI node plugin, as a DaemonSet. You can change the number of replicas.
 
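On the replica note in the last hunk — a minimal sketch of changing the controller replica count, assuming kubectl access to the AKS cluster; the deployment name `csi-azurelustre-controller` and the `kube-system` namespace are assumptions, so verify them first:

```powershell
# Find the actual CSI controller deployment (the name used below is an assumption).
kubectl get deployments --all-namespaces | Select-String "lustre"

# Scale the controller plugin from the default two replicas to three.
kubectl scale deployment csi-azurelustre-controller --namespace kube-system --replicas=3
```
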
azure-managed-lustre/amlfs-prerequisites.md

Lines changed: 3 additions & 3 deletions
@@ -2,7 +2,7 @@
 title: Prerequisites for Azure Managed Lustre file systems
 description: Learn about network and storage prerequisites to complete before you create an Azure Managed Lustre file system.
 ms.topic: overview
-ms.date: 05/14/2024
+ms.date: 01/17/2025
 author: pauljewellmsft
 ms.author: pauljewell
 ms.reviewer: mayabishop
@@ -56,7 +56,7 @@ By default, no specific changes need to be made to enable Azure Managed Lustre.
 | DNS access | Use the default Azure-based DNS server. |
 | Access between hosts on the Azure Managed Lustre subnet | Allow inbound and outbound access between hosts within the Azure Managed Lustre subnet. As an example, access to TCP port 22 (SSH) is necessary for cluster deployment. |
 | Azure cloud service access | Configure your network security group to permit the Azure Managed Lustre file system to access Azure cloud services from within the Azure Managed Lustre subnet.<br><br>Add an outbound security rule with the following properties:<br>- **Port**: Any<br>- **Protocol**: Any<br>- **Source**: Virtual Network<br>- **Destination**: "AzureCloud" service tag<br>- **Action**: Allow<br><br>Note: Configuring the Azure cloud service also enables the necessary configuration of the Azure Queue service.<br><br>For more information, see [Virtual network service tags](/azure/virtual-network/service-tags-overview). |
-| Lustre access<br>(TCP ports 988, 1019-1023) | Your network security group must allow inbound and outbound traffic for TCP port 988 and TCP port range 1019-1023. These rules need to be allowed between hosts on the Azure Managed Lustre subnet, as well as between any client subnets and the Azure Managed Lustre subnet. No other services can reserve or use these ports on your Lustre clients. The default rules `65000 AllowVnetInBound` and `65000 AllowVnetOutBound` meet this requirement. |
+| Lustre access<br>(TCP ports 988, 1019-1023) | Your network security group must allow inbound and outbound traffic for TCP port 988 and TCP port range 1019-1023. These rules need to be allowed between hosts on the Azure Managed Lustre subnet, and between any client subnets and the Azure Managed Lustre subnet. No other services can reserve or use these ports on your Lustre clients. The default rules `65000 AllowVnetInBound` and `65000 AllowVnetOutBound` meet this requirement. |
 
 
 For detailed guidance about configuring a network security group for Azure Managed Lustre file systems, see [Create and configure a network security group](configure-network-security-group.md#create-and-configure-a-network-security-group).
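
The "Azure cloud service access" row above corresponds to a single outbound rule. A minimal Az PowerShell sketch, assuming an existing network security group; the resource names and priority are placeholders:

```powershell
# Fetch the existing NSG (placeholder names).
$nsg = Get-AzNetworkSecurityGroup -Name "amlfs-nsg" -ResourceGroupName "amlfs-rg"

# Outbound rule from the table: any port and protocol, source Virtual Network,
# destination the AzureCloud service tag, action Allow.
$nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-AzureCloud-Outbound" `
    -Direction Outbound -Priority 200 -Access Allow -Protocol "*" `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange "*" `
    -DestinationAddressPrefix AzureCloud -DestinationPortRange "*" |
    Set-AzNetworkSecurityGroup
```
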
@@ -138,7 +138,7 @@ You must have two separate blob containers in the same storage account, which ar
 - **Logging container**: A second container for import/export logs in the storage account. You must store the logs in a different container from the data container.
 
 > [!NOTE]
-> You can add files to the file system later from clients. However, files added to the original blob container after you create the file system won't be imported to the Azure Managed Lustre file system unless you [create an import job](create-import-job.md).
+> You can add files to the file system later from clients. However, files added to the original blob container after you create the file system aren't imported to the Azure Managed Lustre file system unless you [create an import job](create-import-job.md).
 
 ### Private endpoints (optional)
 

azure-managed-lustre/blob-integration.md

Lines changed: 6 additions & 6 deletions
@@ -26,7 +26,7 @@ When you import data from a blob container to an Azure Managed Lustre file syste
 
 You can prefetch the contents of blobs using Lustre's `lfs hsm_restore` command from a mounted client with sudo capabilities. To learn more, see [Restore data from Blob Storage](#restore-data-from-blob-storage).
 
-Azure Managed Lustre works with storage accounts that have hierarchical namespace enabled and storage accounts with a non-hierarchical, or flat, namespace. The following minor differences apply:
+Azure Managed Lustre works with storage accounts that have hierarchical namespace enabled and storage accounts with a nonhierarchical, or flat, namespace. The following minor differences apply:
 
 - For a storage account with hierarchical namespace enabled, Azure Managed Lustre reads POSIX attributes from the blob header.
 - For a storage account that *does not* have hierarchical namespace enabled, Azure Managed Lustre reads POSIX attributes from the blob metadata. A separate, empty file with the same name as your blob container contents is created to hold the metadata. This file is a sibling to the actual data directory in the Azure Managed Lustre file system.
@@ -50,9 +50,9 @@ For an import job, you can specify import prefixes when you create the job. From
 Keep the following considerations in mind when specifying import prefixes:
 
 - The default import prefix is `/`. This default behavior imports the contents of the entire blob container.
-- If you specify multiple prefixes, the prefixes must be non-overlapping. For example, if you specify `/data` and `/data2`, the import job fails because the prefixes overlap.
+- If you specify multiple prefixes, the prefixes must not overlap. For example, if you specify `/data` and `/data2`, the import job fails because the prefixes overlap.
 - If the blob container is in a storage account with hierarchical namespace enabled, you can think of the prefix as a file path. Items under the path are included in the Azure Managed Lustre file system.
-- If the blob container is in a storage account with a non-hierarchical (or flat) namespace, you can think of the import prefix as a search string that is compared with the beginning of the blob name. If the name of a blob in the container starts with the string you specified as the import prefix, that file is made accessible in the file system. Lustre is a hierarchical file system, and `/` characters in blob names become directory delimiters when stored in Lustre.
+- If the blob container is in a storage account with a nonhierarchical (or flat) namespace, you can think of the import prefix as a search string that is compared with the beginning of the blob name. If the name of a blob in the container starts with the string you specified as the import prefix, that file is made accessible in the file system. Lustre is a hierarchical file system, and `/` characters in blob names become directory delimiters when stored in Lustre.
 
 ### Conflict resolution mode
 
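As a worked illustration of the flat-namespace rule (the blob name here is hypothetical): a blob named `data/2024/results.csv` matches the import prefix `/data`, so it surfaces in the Lustre file system as `/data/2024/results.csv`, with `data` and `2024` created as directories.
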
@@ -81,7 +81,7 @@ When importing data from a blob container, you can specify the error tolerance.
 
 The following error tolerance options are available for import jobs:
 
-- **Do not allow errors** (default): The import job fails immediately if any error occurs during the import. This is the default behavior.
+- **Do not allow errors** (default): The import job fails immediately if any error occurs during the import. This behavior is the default.
 - **Allow errors**: The import job continues if an error occurs, and the error is logged. After the import job completes, you can view errors in the logging container.
 
 ### Considerations for blob import jobs
@@ -106,7 +106,7 @@ nohup find local/directory -type f -print0 | xargs -0 -n 1 sudo lfs hsm_restore &
 
 This command tells the metadata server to asynchronously process a restoration request. The command line doesn't wait for the restore to complete, which means that the command line has the potential to queue up a large number of entries for restore at the metadata server. This approach can overwhelm the metadata server and degrade performance for restores.
 
-To avoid this potential performance issue, you can create a basic script that attempts to walk the path and issues restore requests in batches of a specified size. To achieve reasonable performance and avoid overwhelming the metadata server, it's recommended to use batch sizes anywhere from 1,000 to 5,000 requests.
+To avoid this potential performance issue, you can create a basic script that attempts to walk the path and issues restore requests in batches of a specified size. To achieve reasonable performance and avoid overwhelming the metadata server, we recommend using batch sizes anywhere from 1,000 to 5,000 requests.
 
 ### Example: Create a batch restore script
 
@@ -202,7 +202,7 @@ When you export files from your Azure Managed Lustre system, not all files are c
 
 In active file systems, changes to files during the export job can result in failure status. This failure status lets you know that not all data in the file system could be exported to Blob Storage. In this situation, you can retry the export by [creating a new export job](export-with-archive-jobs.md#create-an-export-job). The new job copies only the files that weren't copied in the previous job.
 
-In file systems with a lot of activity, retries might fail multiple times because files are frequently changing. To verify that a file has been successfully exported to Blob Storage, check the timestamp on the corresponding blob. After the job completes, you can also view the logging container configured at deployment time to see detailed information about the export job. The logging container provides diagnostic information about which files failed, and why they failed.
+In file systems with a lot of activity, retries might fail multiple times because files are frequently changing. To verify that a file was successfully exported to Blob Storage, check the timestamp on the corresponding blob. After the job completes, you can also view the logging container configured at deployment time to see detailed information about the export job. The logging container provides diagnostic information about which files failed, and why they failed.
 
 If you're preparing to decommission a cluster and want to perform a final export to Blob Storage, you should make sure that all I/O activities are halted before initiating the export job. This approach helps to guarantee that all data is exported by avoiding errors due to file system activity.
 
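
To check that timestamp programmatically — a minimal Az PowerShell sketch, with placeholder storage account, container, and blob names:

```powershell
# Placeholder names; substitute your storage account, container, and blob path.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -UseConnectedAccount
Get-AzStorageBlob -Container "lustre-export" -Blob "data/results.csv" -Context $ctx |
    Select-Object Name, LastModified
```

A `LastModified` value at or after the export job started indicates the blob was written during that job.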

azure-managed-lustre/client-install.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ ms.topic: how-to
 author: pauljewellmsft
 ms.author: pauljewell
 ms.reviewer: dsundarraj
-ms.date: 10/18/2024
+ms.date: 01/10/2025
 zone_pivot_groups: select-os
 
 ---
