Commit d85fe14

Merge pull request #89732 from ekpgh/hpc-cache-updatepreview-2

make non-blocking updates from PR 89614

2 parents: 54b395b + 589a22d

File tree

5 files changed: +15 -15 lines changed
articles/hpc-cache/hpc-cache-add-storage.md

3 additions, 3 deletions

@@ -34,7 +34,7 @@ From the Azure portal, open your cache instance and click **Storage targets** on
 
 ## Add a new Azure Blob storage target
 
-A new Blob storage target needs an empty Blob container or a container that is populated with data in the Azure HPC Cache cloud filesystem format. Read more about pre-loading a Blob container in [Move data to Azure Blob storage](hpc-cache-ingest.md).
+A new Blob storage target needs an empty Blob container or a container that is populated with data in the Azure HPC Cache cloud file system format. Read more about pre-loading a Blob container in [Move data to Azure Blob storage](hpc-cache-ingest.md).
 
 To define an Azure Blob container, enter this information.
 

@@ -49,7 +49,7 @@ To define an Azure Blob container, enter this information.
 You will need to authorize the cache instance to access the storage account as described in [Add the access roles](#add-the-access-control-roles-to-your-account).
 * **Storage container** - Select the Blob container for this target.
 
-* **Virtual namespace path** - Set the client-facing filepath for this storage target. Read [Configure aggregated namespace](hpc-cache-namespace.md) to learn more about the virtual namespace feature.
+* **Virtual namespace path** - Set the client-facing file path for this storage target. Read [Configure aggregated namespace](hpc-cache-namespace.md) to learn more about the virtual namespace feature.
 
 When finished, click **OK** to add the storage target.
 

@@ -104,7 +104,7 @@ Create all of the paths from one storage target.
 
 Fill in these values for each namespace path:
 
-* **Virtual namespace path** - Set the client-facing filepath for this storage target. Read [Configure aggregated namespace](hpc-cache-namespace.md) to learn more about the virtual namespace feature.
+* **Virtual namespace path** - Set the client-facing file path for this storage target. Read [Configure aggregated namespace](hpc-cache-namespace.md) to learn more about the virtual namespace feature.
 
 <!-- The virtual path should start with a slash ``/``. -->
 
articles/hpc-cache/hpc-cache-ingest-manual.md

4 additions, 4 deletions

@@ -18,7 +18,7 @@ To learn more about moving data to Blob storage for your Azure HPC Cache, read [
 
 You can manually create a multi-threaded copy on a client by running more than one copy command at once in the background against predefined sets of files or paths.
 
-The Linux/UNIX ``cp`` command includes the argument ``-p`` to preserve ownership and mtime metadata. Adding this argument to the commands below is optional. (Adding the argument increases the number of filesystem calls sent from the client to the destination filesystem for metadata modification.)
+The Linux/UNIX ``cp`` command includes the argument ``-p`` to preserve ownership and mtime metadata. Adding this argument to the commands below is optional. (Adding the argument increases the number of file system calls sent from the client to the destination file system for metadata modification.)
 
 This simple example copies two files in parallel:
 
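The multi-threaded copy pattern this hunk describes can be sketched as a runnable snippet. The local paths here are placeholders standing in for the article's ``/mnt/source`` and ``/mnt/destination`` NFS mounts:

```shell
# Sketch of the parallel-copy pattern; placeholder local paths stand in
# for the article's /mnt/source and /mnt/destination mounts.
mkdir -p source/dir1 destination
echo "data1" > source/dir1/file1
echo "data2" > source/dir1/file2

# Launch each copy in the background, then wait for all of them to finish.
# -p preserves ownership and mtime metadata, as the article notes.
cp -p source/dir1/file1 destination/ &
cp -p source/dir1/file2 destination/ &
wait
ls destination    # file1  file2
```

In practice each background ``cp`` would target a different predefined set of files or paths, so the threads do not contend for the same data.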
@@ -76,9 +76,9 @@ cp -R /mnt/source/dir1/dir1d /mnt/destination/dir1/ &
 
 ## When to add mount points
 
-After you have enough parallel threads going against a single destination filesystem mount point, there will be a point where adding more threads does not give more throughput. (Throughput will be measured in files/second or bytes/second, depending on your type of data.) Or worse, over-threading can sometimes cause a throughput degradation.
+After you have enough parallel threads going against a single destination file system mount point, there will be a point where adding more threads does not give more throughput. (Throughput will be measured in files/second or bytes/second, depending on your type of data.) Or worse, over-threading can sometimes cause a throughput degradation.
 
-When this happens, you can add client-side mount points to other Azure HPC Cache mount addresses, using the same remote filesystem mount path:
+When this happens, you can add client-side mount points to other Azure HPC Cache mount addresses, using the same remote file system mount path:
 
 ```bash
 10.1.0.100:/nfs on /mnt/source type nfs (rw,vers=3,proto=tcp,addr=10.1.0.100)

@@ -131,7 +131,7 @@ Client4: cp -R /mnt/source/dir3/dir3d /mnt/destination/dir3/ &
 
 ## Create file manifests
 
-After understanding the approaches above (multiple copy-threads per destination, multiple destinations per client, multiple clients per network-accessible source filesystem), consider this recommendation: Build file manifests and then use them with copy commands across multiple clients.
+After understanding the approaches above (multiple copy-threads per destination, multiple destinations per client, multiple clients per network-accessible source file system), consider this recommendation: Build file manifests and then use them with copy commands across multiple clients.
 
 This scenario uses the UNIX ``find`` command to create manifests of files or directories:
 
articles/hpc-cache/hpc-cache-ingest.md

1 addition, 1 deletion

@@ -16,7 +16,7 @@ This article explains the best ways to move data to Blob storage for use with Az
 
 Keep these facts in mind:
 
-* Azure HPC Cache uses a specialized storage format to organize data in Blob storage. This is why a Blob storage target must either be a new, empty container, or a Blob container that was previously used for Azure HPC Cache data. ([Avere vFXT for Azure](https://azure.microsoft.com/services/storage/avere-vfxt/) also uses this cloud filesystem.)
+* Azure HPC Cache uses a specialized storage format to organize data in Blob storage. This is why a Blob storage target must either be a new, empty container, or a Blob container that was previously used for Azure HPC Cache data. ([Avere vFXT for Azure](https://azure.microsoft.com/services/storage/avere-vfxt/) also uses this cloud file system.)
 
 * Copying data through the Azure HPC Cache to a back-end storage target is more efficient when you use multiple clients and parallel operations. A simple copy command from one client will move data slowly.
 

articles/hpc-cache/hpc-cache-namespace.md

6 additions, 6 deletions

@@ -12,9 +12,9 @@ ms.author: v-erkell
 
 Azure HPC Cache (preview) allows clients to access a variety of storage systems through a virtual namespace that hides the details of the back-end storage system.
 
-When you add a storage target, you set the client-facing filepath. Client machines mount this filepath and can make file read requests to the cache instead of mounting the storage system directly.
+When you add a storage target, you set the client-facing file path. Client machines mount this file path and can make file read requests to the cache instead of mounting the storage system directly.
 
-Because Azure HPC Cache manages this virtual filesystem, you can change the storage target without changing the client-facing path. For example, you could replace a hardware storage system with cloud storage without needing to rewrite client-facing procedures.
+Because Azure HPC Cache manages this virtual file system, you can change the storage target without changing the client-facing path. For example, you could replace a hardware storage system with cloud storage without needing to rewrite client-facing procedures.
 
 ## Aggregated namespace example
 

@@ -37,23 +37,23 @@ The data to be analyzed has been copied to an Azure Blob storage container named
 
 To allow easy access through the cache, consider creating storage targets with these virtual namespace paths:
 
-| Back-end storage system <br/> (NFS filepath or Blob container) | Virtual namespace path |
+| Back-end storage system <br/> (NFS file path or Blob container) | Virtual namespace path |
 |-----------------------------------------|------------------------|
 | /goldline/templates/acme2017/sku798 | /templates/sku798 |
 | /goldline/templates/acme2017/sku980 | /templates/sku980 |
 | sourcecollection | /source/ |
 
 An NFS storage target can have multiple virtual namespace paths, as long as each one references a unique export path.
 
-Since the NFS source paths are subdirectories of the same export, you will need to define multiple namespace paths from the same storage target.
+Because the NFS source paths are subdirectories of the same export, you will need to define multiple namespace paths from the same storage target.
 
 | Storage target hostname | NFS export path | Subdirectory path | Namespace path |
 |--------------------------|----------------------|-------------------|-------------------|
 | *IP address or hostname* | /goldline/templates | acme2017/sku798 | /templates/sku798 |
 | *IP address or hostname* | /goldline/templates | acme2017/sku980 | /templates/sku980 |
 
-A client application can mount the cache and easily access the aggregated namespace filepaths ``/source``, ``/templates/sku798``, and ``/templates/sku980``.
+A client application can mount the cache and easily access the aggregated namespace file paths ``/source``, ``/templates/sku798``, and ``/templates/sku980``.
 
 ## Next steps
 
-After you have decided how to set up your virtual filesystem, [create storage targets](hpc-cache-add-storage.md) to map your back-end storage to your client-facing virtual filepaths.
+After you have decided how to set up your virtual file system, [create storage targets](hpc-cache-add-storage.md) to map your back-end storage to your client-facing virtual file paths.
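The path mapping in the tables of this file can be mimicked locally with symlinks. This is only an illustration of what the cache's aggregated namespace presents to clients; the cache implements the mapping virtually and creates nothing like this on the back-end storage:

```shell
# Local illustration only: mimic the aggregated-namespace tables with
# symlinks. The real mapping is done virtually by the cache.
mkdir -p backend/goldline/templates/acme2017/sku798 \
         backend/goldline/templates/acme2017/sku980 \
         backend/sourcecollection \
         namespace/templates

# Two namespace paths from the same NFS export, plus the Blob container.
ln -s "$PWD/backend/goldline/templates/acme2017/sku798" namespace/templates/sku798
ln -s "$PWD/backend/goldline/templates/acme2017/sku980" namespace/templates/sku980
ln -s "$PWD/backend/sourcecollection" namespace/source

ls namespace/templates    # sku798  sku980
```

A client sees only ``/source``, ``/templates/sku798``, and ``/templates/sku980``, never the ``/goldline/templates/acme2017`` layout behind them.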

articles/hpc-cache/index.yml

1 addition, 1 deletion

@@ -5,7 +5,7 @@ summary: Use Azure HPC Cache to expedite file access for read-intensive high-per
 
 metadata:
   title: Microsoft Azure HPC Cache
-  description: Learn how to create and use Azure HPC Cache to solve the latency problem between your compute resources and storage. Azure HPC Cache is for file-based read-heavy workloads, and can create an aggregated virtual filesystem that incorporates Azure Blob storage and on-premises hardware filers.
+  description: Learn how to create and use Azure HPC Cache to solve the latency problem between your compute resources and storage. Azure HPC Cache is for file-based read-heavy workloads, and can create an aggregated virtual file system that incorporates Azure Blob storage and on-premises hardware filers.
   ms.service: hpc-cache
   ms.topic: landing-page
   ms.collection: collection
