Commit 297e831

Merge pull request #100991 from jithubhaijabs/master
Update file sync planning with common sync scenarios and solutions
2 parents 8583ee0 + e736123

File tree

2 files changed: +11 −1 lines changed

articles/storage/file-sync/file-sync-planning.md

Lines changed: 1 addition & 1 deletion
@@ -242,7 +242,7 @@ Azure File Sync does not support Data Deduplication and cloud tiering on the sam
 ### Distributed File System (DFS)
 Azure File Sync supports interop with DFS Namespaces (DFS-N) and DFS Replication (DFS-R).

-**DFS Namespaces (DFS-N)**: Azure File Sync is fully supported on DFS-N servers. You can install the Azure File Sync agent on one or more DFS-N members to sync data between the server endpoints and the cloud endpoint. For more information, see [DFS Namespaces overview](/windows-server/storage/dfs-namespaces/dfs-overview).
+**DFS Namespaces (DFS-N)**: Azure File Sync is fully supported with a DFS-N deployment. You can install the Azure File Sync agent on one or more file servers to sync data between the server endpoints and the cloud endpoint, and then use DFS-N to provide the namespace service. For more information, see [DFS Namespaces overview](/windows-server/storage/dfs-namespaces/dfs-overview) and [DFS Namespaces with Azure Files](../files/files-manage-namespaces.md).

 **DFS Replication (DFS-R)**: Since DFS-R and Azure File Sync are both replication solutions, in most cases we recommend replacing DFS-R with Azure File Sync. There are, however, several scenarios where you would want to use DFS-R and Azure File Sync together:

includes/storage-files-migration-namespace-mapping.md

Lines changed: 10 additions & 0 deletions
@@ -60,6 +60,16 @@ It's a best practice to keep the number of items per sync scope low. That's an i

 It's possible that, in your situation, a set of folders can logically sync to the same Azure file share (by using the new common root folder approach mentioned earlier). But it might still be better to regroup folders so they sync to two instead of one Azure file share. You can use this approach to keep the number of files and folders per file share balanced across the server. You can also split your on-premises shares and sync across more on-premises servers, adding the ability to sync with 30 more Azure file shares per extra server.

+#### Common file sync scenarios and considerations
+
+| # | Sync scenario | Supported | Considerations (or limitations) | Solution (or workaround) |
+|---|---|:---:|---|---|
+| 1 | File server with multiple disks/volumes and multiple shares to the same target Azure file share (consolidation) | No | A target Azure file share (cloud endpoint) only supports syncing with one sync group.<br/><br/> A sync group only supports one server endpoint per registered server. | 1) Start by syncing one disk (its root volume) to the target Azure file share. Starting with the largest disk/volume helps with on-premises storage requirements. Configure cloud tiering to tier all data to the cloud, freeing up space on the file server disk. Move data from the other volumes/shares into the volume that is currently syncing. Repeat the steps one by one until all data is tiered to the cloud or migrated.<br/> 2) Target one root volume (disk) at a time. Use cloud tiering to tier all data to the target Azure file share. Remove the server endpoint from the sync group, re-create the endpoint with the next root volume/disk, sync, and repeat the process. Note: An agent re-install might be required.<br/> 3) Consider using multiple target Azure file shares (in the same or a different storage account, based on performance requirements). |
+| 2 | File server with a single volume and multiple shares to the same target Azure file share (consolidation) | Yes | A registered server can't have multiple server endpoints syncing to the same target Azure file share (same as above). | Sync the root of the volume holding the multiple shares or top-level folders. For more information, see [Share grouping concept](#share-grouping) and [Volume sync](#volume-sync). |
+| 3 | File server with multiple shares and/or volumes to multiple Azure file shares under a single storage account (1:1 share mapping) | Yes | A single Windows Server instance (or cluster) can sync up to 30 Azure file shares.<br/><br/> A storage account is a scale target for performance. IOPS and throughput are shared across its file shares.<br/><br/> Keep the number of items per sync group within 100 million items (files and folders) per share. Ideally, stay below 20 or 30 million per share. | 1) Use multiple sync groups (number of sync groups = number of Azure file shares to sync to).<br/> 2) Only 30 shares can be synced in this scenario at a time. If you have more than 30 shares on that file server, use [Share grouping concept](#share-grouping) and [Volume sync](#volume-sync) to reduce the number of root or top-level folders at the source.<br/> 3) Use additional File Sync servers on-premises and split/move data to these servers to work around limitations on the source Windows Server instance. |
+| 4 | File server with multiple shares and/or volumes to multiple Azure file shares under different storage accounts (1:1 share mapping) | Yes | A single Windows Server instance (or cluster) can sync up to 30 Azure file shares (same or different storage accounts).<br/><br/> Keep the number of items per sync group within 100 million items (files and folders) per share. Ideally, stay below 20 or 30 million per share. | Same approach as above. |
+| 5 | Multiple file servers, each with a single source (root volume or share), to the same target Azure file share (consolidation) | No | A sync group can't use a cloud endpoint (Azure file share) that is already configured in another sync group.<br/><br/> Although a sync group can have server endpoints on different file servers, the files can't be distinct. | Follow the guidance in scenario #1 above, with the additional consideration of targeting one file server at a time. |
+
 #### Create a mapping table

 :::row:::
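The scale limits quoted in the new table (up to 30 Azure file shares per registered server, and at most 100 million items per sync group, ideally 20-30 million) lend themselves to a quick planning check before you build the mapping table. The sketch below is illustrative only and not part of the commit; the function and server names are hypothetical, and the thresholds are taken from the table above.

```python
# Sanity-check a planned server-to-share mapping against the Azure File Sync
# limits described in the table above. Illustrative sketch; names are hypothetical.

MAX_SHARES_PER_SERVER = 30           # a Windows Server instance (or cluster) can sync up to 30 shares
HARD_ITEM_LIMIT = 100_000_000        # keep items (files + folders) per sync group within 100 million
RECOMMENDED_ITEM_LIMIT = 30_000_000  # ideally stay below 20-30 million items per share

def check_mapping(server_shares):
    """server_shares: dict mapping server name -> list of per-share item counts."""
    problems = []
    for server, item_counts in server_shares.items():
        if len(item_counts) > MAX_SHARES_PER_SERVER:
            problems.append(f"{server}: {len(item_counts)} shares exceeds {MAX_SHARES_PER_SERVER} per server")
        for i, items in enumerate(item_counts, start=1):
            if items > HARD_ITEM_LIMIT:
                problems.append(f"{server} share {i}: {items} items exceeds hard limit")
            elif items > RECOMMENDED_ITEM_LIMIT:
                problems.append(f"{server} share {i}: {items} items above recommended limit")
    return problems

# Hypothetical plan: FS01 maps two shares, FS02 maps one very large share.
issues = check_mapping({"FS01": [5_000_000, 45_000_000], "FS02": [150_000_000]})
for issue in issues:
    print(issue)
```

A plan that trips the recommended limit may still sync, but (per the table) splitting it across more shares or more File Sync servers keeps each sync group comfortably within scope.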
