Commit f553259

Merge pull request #294288 from xinlaoda/xinlaoda-patch-1
freshness review
2 parents 0d7fc06 + df48ab3 commit f553259


3 files changed: +20 -20 lines changed


articles/batch/batch-rendering-architectures.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
---
title: Azure rendering reference architectures
description: Architectures for using Azure Batch and other Azure services to extend an on-premises render farm by bursting to the cloud
- ms.date: 02/07/2019
+ ms.date: 02/07/2025
ms.topic: how-to
---

articles/batch/batch-rendering-storage-data-movement.md

Lines changed: 18 additions & 18 deletions
@@ -4,7 +4,7 @@ description: Learn about the various storage and data movement options for rende
services: batch
ms.service: azure-batch
ms.custom: linux-related-content
- ms.date: 08/02/2018
+ ms.date: 02/07/2025
ms.topic: how-to
---

@@ -13,20 +13,20 @@ ms.topic: how-to
There are multiple options for making the scene and asset files available to the rendering applications on the pool VMs:

* [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md):
- * Scene and asset files are uploaded to blob storage from a local file system. When the application is run by a task, then the required files are copied from blob storage onto the VM so they can be accessed by the rendering application. The output files are written by the rendering application to the VM disk and then copied to blob storage. If necessary, the output files can be downloaded from blob storage to a local file system.
- * Azure Blob Storage is a simple and cost-effective option for smaller projects. As all asset files are required on each pool VM, then once the number and size of asset files increases care needs to be taken to ensure the file transfers are as efficient as possible.
+ * Scene and asset files are uploaded to blob storage from a local file system. When the application is run by a task, the required files are copied from blob storage onto the VM so they can be accessed by the rendering application. The output files are written by the rendering application to the VM disk and then copied to blob storage. If necessary, the output files can be downloaded from blob storage to a local file system.
+ * Azure Blob Storage is a simple and cost-effective option for smaller projects. As all asset files are required on each pool VM, once the number and size of asset files increases, care needs to be taken to ensure the file transfers are as efficient as possible.
* Azure storage as a file system using [blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md):
* For Linux VMs, a storage account can be exposed and used as a file system when the blobfuse virtual file system driver is used.
- * This option has the advantage that it is very cost-effective, as no VMs are required for the file system, plus blobfuse caching on the VMs avoids repeated downloads of the same files for multiple jobs and tasks. Data movement is also simple as the files are simply blobs and standard APIs and tools, such as azcopy, can be used to copy file between an on-premises file system and Azure storage.
+ * This option has the advantage that it is cost-effective, as no VMs are required for the file system, plus blobfuse caching on the VMs avoids repeated downloads of the same files for multiple jobs and tasks. Data movement is also simple, as the files are simply blobs and standard APIs and tools, such as azcopy, can be used to copy files between an on-premises file system and Azure storage.
* File system or file share:
* Depending on VM operating system and performance/scale requirements, then options include [Azure Files](../storage/files/storage-files-introduction.md), using a VM with attached disks for NFS, using multiple VMs with attached disks for a distributed file system like GlusterFS, or using a third-party offering.
- * Avere Systems is now part of Microsoft and will have solutions in the near future that are ideal for large-scale, high-performance rendering. The Avere solution will enable an Azure-based NFS or SMB cache to be created that works in conjunction with blob storage or with on-premises NAS devices.
+ * Avere Systems is now part of Microsoft and will have solutions soon that are ideal for large-scale, high-performance rendering. The Avere solution enables an Azure-based NFS or SMB cache to be created that works with blob storage or with on-premises NAS devices.
* With a file system, files can be read or written directly to the file system or can be copied between file system and the pool VMs.
* A shared file system allows a large number of assets shared between projects and jobs to be utilized, with rendering tasks only accessing what is required.

## Using Azure Blob Storage

- A blob storage account or a general-purpose v2 storage account should be used. These two storage account types can be configured with significantly higher limits compared to a general-purpose v1 storage account, as detailed in [this blog post](https://azure.microsoft.com/blog/announcing-larger-higher-scale-storage-accounts/). When configured, the higher limits will enable much better performance and scalability, especially when there are many pool VMs accessing the storage account.
+ A blob storage account or a general-purpose v2 storage account should be used. These two storage account types can be configured with higher limits compared to a general-purpose v1 storage account, as detailed in [this blog post](https://azure.microsoft.com/blog/announcing-larger-higher-scale-storage-accounts/). When configured, the higher limits enable better performance and scalability, especially when there are many pool VMs accessing the storage account.

### Copying files between client and blob storage

@@ -46,26 +46,26 @@ To copy only modified files, the /XO parameter can be used:
There are a couple of different approaches to copy files with the best approach determined by the size of the job assets.
The simplest approach is to copy all the asset files to the pool VMs for each job:

- * When there are files unique to a job, but are required for all the tasks of a job, then a [job preparation task](/rest/api/batchservice/job/add#jobpreparationtask) can be specified to copy all the files. The job preparation task is run once when the first job task is executed on a VM but is not run again for subsequent job tasks.
- * A [job release task](/rest/api/batchservice/job/add#jobreleasetask) should be specified to remove the per-job files once the job has completed; this will avoid the VM disk getting filled by all the job asset files.
- * When there are multiple jobs using the same assets, with only incremental changes to the assets for each job, then all asset files are still copied, even if only a subset were updated. This would be inefficient when there are lots of large asset files.
+ * When there are files that are unique to a job but required for all the tasks of the job, a [job preparation task](/rest/api/batchservice/job/add#jobpreparationtask) can be specified to copy all the files, as sketched after this list. The job preparation task is run once when the first job task is executed on a VM but is not run again for subsequent job tasks.
+ * A [job release task](/rest/api/batchservice/job/add#jobreleasetask) should be specified to remove the per-job files once the job has completed; this avoids the VM disk getting filled by all the job asset files.
+ * When there are multiple jobs using the same assets, with only incremental changes to the assets for each job, all asset files are still copied, even if only a subset was updated. This would be inefficient when there are lots of large asset files.
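For illustration only, a minimal sketch of what the two tasks described above might use as command lines on a Windows pool, assuming AzCopy v8-style syntax; the storage account, container, key, and per-job folder naming are placeholders, not values from this article:

```
rem Job preparation task (sketch): copy every asset blob for the job into a per-job folder
rem under the node's shared directory.
azcopy /Source:https://<storageaccount>.blob.core.windows.net/<assets-container> /Dest:"%AZ_BATCH_NODE_SHARED_DIR%\%AZ_BATCH_JOB_ID%" /SourceKey:<storage-account-key> /S

rem Job release task (sketch): remove the per-job files once the job has completed,
rem so the VM disk doesn't fill up with job asset files.
cmd /c rmdir /S /Q "%AZ_BATCH_NODE_SHARED_DIR%\%AZ_BATCH_JOB_ID%"
```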

When asset files are reused between jobs, with only incremental changes between jobs, then a more efficient but slightly more involved approach is to store assets in the shared folder on the VM and sync changed files.

- * The job preparation task would perform the copy using azcopy with the /XO parameter to the VM shared folder specified by AZ_BATCH_NODE_SHARED_DIR environment variable. This will only copy changed files to each VM.
- * Thought will have to be given to the size of all assets to ensure they will fit on the temporary drive of the pool VMs.
+ * The job preparation task would perform the copy using azcopy with the /XO parameter to the VM shared folder specified by the AZ_BATCH_NODE_SHARED_DIR environment variable, as sketched below. This copies only the changed files to each VM.
+ * Give thought to the total size of all assets to ensure they'll fit on the temporary drive of the pool VMs.
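A minimal sketch of that job preparation command line on a Windows pool (AzCopy v8-style syntax; the account, container, key, and destination folder are placeholders):

```
rem Sync only new or changed asset blobs into the node's shared folder.
rem /S recurses into subfolders; /XO skips source files that aren't newer than the destination copies.
azcopy /Source:https://<storageaccount>.blob.core.windows.net/<assets-container> /Dest:"%AZ_BATCH_NODE_SHARED_DIR%\assets" /SourceKey:<storage-account-key> /S /XO
```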

- Azure Batch has built-in support to copy files between a storage account and Batch pool VMs. Task [resource files](/rest/api/batchservice/job/add#resourcefile) copy files from storage to pool VMs and could be specified for the job preparation task. Unfortunately, when there are hundreds of files it is possible to hit a limit and tasks to fail. When there are large numbers of assets it is recommended to use the azcopy command line in the job preparation task, which can use wildcards and has no limit.
+ Azure Batch has built-in support to copy files between a storage account and Batch pool VMs. Task [resource files](/rest/api/batchservice/job/add#resourcefile) copy files from storage to pool VMs and could be specified for the job preparation task. Unfortunately, when there are hundreds of files it's possible to hit a limit, causing tasks to fail. When there are large numbers of assets, it's recommended to use the azcopy command line in the job preparation task, which can use wildcards and has no limit.

### Copying output files to blob storage from Batch pool VMs

- [Output files](/rest/api/batchservice/task/add#outputfile) can be used copy files from a pool VM to storage. One or more files can be copied from the VM to a specified storage account once the task has completed. The rendered output should be copied, but it also may be desirable to store log files.
+ [Output files](/rest/api/batchservice/task/add#outputfile) can be used to copy files from a pool VM to storage. One or more files can be copied from the VM to a specified storage account once the task has completed. The rendered output should be copied, but it may also be desirable to store log files.

## Using a blobfuse virtual file system for Linux VM pools

Blobfuse is a virtual file system driver for Azure Blob Storage, which allows you to access files stored as blobs in a Storage account through the Linux file system.

- Pool nodes can mount the file system when started or the mount can happen as part of a job preparation task – a task that is only run when the first task in a job runs on a node. Blobfuse can be configured to leverage both a ramdisk and the VMs local SSD for caching of files, which will increase performance significantly if multiple tasks on a node access some of the same files.
+ Pool nodes can mount the file system when started, or the mount can happen as part of a job preparation task – a task that is only run when the first task in a job runs on a node. Blobfuse can be configured to leverage both a ramdisk and the VM's local SSD for caching of files, which will increase performance significantly if multiple tasks on a node access some of the same files.
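As a rough sketch of such a mount (not taken from this article), a blobfuse v1 invocation in a start task or job preparation script might look like the following; the account, key, container, mount point, and cache path are placeholders, and it assumes the script runs with sufficient privileges:

```
# Write the blobfuse connection settings (placeholder values).
cat > "$AZ_BATCH_NODE_SHARED_DIR/blobfuse.cfg" <<'EOF'
accountName <storageaccount>
accountKey <storage-account-key>
containerName <assets-container>
EOF

# Mount the container; --tmp-path points the blobfuse file cache at the VM's local SSD
# (the resource-disk path varies by image, so treat /mnt/resource as a placeholder).
mkdir -p /mnt/assets /mnt/resource/blobfusetmp
blobfuse /mnt/assets --tmp-path=/mnt/resource/blobfusetmp \
    --config-file="$AZ_BATCH_NODE_SHARED_DIR/blobfuse.cfg" \
    -o attr_timeout=240 -o entry_timeout=240 -o negative_timeout=120
```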

[Sample templates are available](https://github.com/Azure/BatchExplorer-data/tree/master/ncj/vray/render-linux-with-blobfuse-mount) to run standalone V-Ray renders using a blobfuse file system and can be used as the basis for templates for other applications.

@@ -79,13 +79,13 @@ As files are simply blobs in Azure Storage, then standard blob APIs, tools, and

## Using Azure Files with Windows VMs

- [Azure Files](../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the SMB protocol. Azure Files is based on Azure Blob Storage; it is [cost-efficient](https://azure.microsoft.com/pricing/details/storage/files/) and can be configured with data replication to another region so globally redundant. [Scale targets](../storage/files/storage-files-scale-targets.md#azure-files-scale-targets) should be reviewed to determine if Azure Files should be used given the forecast pool size and number of asset files.
+ [Azure Files](../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the SMB protocol. Azure Files is based on Azure Blob Storage; it's [cost-efficient](https://azure.microsoft.com/pricing/details/storage/files/) and can be configured with data replication to another region, making it globally redundant. [Scale targets](../storage/files/storage-files-scale-targets.md#azure-files-scale-targets) should be reviewed to determine if Azure Files should be used given the forecast pool size and number of asset files.

- There is [documentation](../storage/files/storage-how-to-use-files-windows.md) covering how to mount an Azure File share.
+ There's [documentation](../storage/files/storage-how-to-use-files-windows.md) covering how to mount an Azure File share.

### Mounting an Azure Files share

- To use in Batch, a mount operation needs to be performed each time a task in run as it is not possible to persist the connection between tasks. The easiest way to do this is to use cmdkey to persist credentials using the start task in the pool configuration, then mount the share before each task.
+ To use in Batch, a mount operation needs to be performed each time a task is run, as it isn't possible to persist the connection between tasks. The easiest way to do this is to use cmdkey to persist credentials using the start task in the pool configuration, then mount the share before each task.
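A rough sketch of those two steps in plain (non-JSON-escaped) form, with placeholder storage account and share names:

```
rem Pool start task (sketch): persist the storage account credentials on the node.
cmdkey /add:<storageaccount>.file.core.windows.net /user:AZURE\<storageaccount> /pass:<storage-account-key>

rem Prefixed to each task command line (sketch): mount the share before running the renderer.
net use S: \\<storageaccount>.file.core.windows.net\<share-name>
```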

Example use of cmdkey in a pool template (escaped for use in JSON file) – note that when separating the cmdkey call from the net use call, the user context for the start task must be the same as that used for running the tasks:

@@ -124,7 +124,7 @@ Azure Files are supported by all the main APIs and tools that have Azure Storage

## Next steps

- For more information about the storage options see the in-depth documentation:
+ For more information about the storage options, see the in-depth documentation:

* [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md)
* [Blobfuse](../storage/blobs/storage-how-to-mount-container-linux.md)

articles/batch/resource-files.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
---
title: Creating and using resource files
description: Learn how to create Batch resource files from various input sources. This article covers a few common methods on how to create and place them on a VM.
- ms.date: 08/18/2021
+ ms.date: 02/07/2025
ms.topic: how-to
---
