---
description: How to add data to a new storage volume for use with the Avere vFXT
author: ekpgh
ms.service: avere-vfxt
ms.topic: conceptual
ms.date: 11/21/2019
ms.author: rohogue
---

# Moving data to the vFXT cluster - Parallel data ingest
After you've created a new vFXT cluster, your first task might be to move data onto its new storage volume. However, if your usual method of moving data is issuing a simple copy command from one client, you will likely see slow copy performance. Single-threaded copying is not a good option for copying data to the Avere vFXT cluster's backend storage.

Because the Avere vFXT cluster is a scalable multi-client cache, the fastest and most efficient way to copy data to it is with multiple clients. This technique parallelizes ingestion of the files and objects.

The ``cp`` or ``copy`` commands that are commonly used to transfer data from one storage system to another are single-threaded processes that copy only one file at a time. This means that the file server is ingesting only one file at a time - which is a waste of the cluster’s resources.
This article explains strategies for creating a multi-client, multi-threaded file copying system to move data to the Avere vFXT cluster. It explains file transfer concepts and decision points that can be used for efficient data copying using multiple clients and simple copy commands.

It also explains some utilities that can help. The ``msrsync`` utility can be used to partially automate the process of dividing a dataset into buckets and using ``rsync`` commands. The ``parallelcp`` script is another utility that reads the source directory and issues copy commands automatically. Also, the ``rsync`` tool can be used in two phases to provide a quicker copy that still provides data consistency.

Click a link to jump to a section:
* [Manual copy example](#manual-copy-example) - A thorough explanation using copy commands

A Resource Manager template is available on GitHub to automatically create a VM with the parallel data ingestion tools mentioned in this article.
When building a strategy to copy data in parallel, you should understand the tradeoffs.

Each copy process has a throughput rate and a files-transferred rate, which can be measured by timing the length of the copy command and factoring the file size and file count. Explaining how to measure the rates is outside the scope of this document, but it is imperative to understand whether you’ll be dealing with small or large files.
## Manual copy example

You can manually create a multi-threaded copy on a client by running more than one copy command at once in the background against predefined sets of files or paths.
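As a sketch (the directory names here are hypothetical), two copy threads can be launched in the background like this:

```shell
# Launch two copy threads in the background against predefined paths
cp -r /mnt/source/dir1 /mnt/destination1/ &
cp -r /mnt/source/dir2 /mnt/destination1/ &
jobs   # lists the two running background jobs
```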
After issuing this command, the `jobs` command will show that two threads are running.

### Predictable filename structure

If your filenames are predictable, you can use expressions to create parallel copy threads.

For example, if your directory contains 1000 files that are numbered sequentially from `0001` to `1000`, you can use the following expressions to create ten parallel threads that each copy 100 files:
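One way to sketch this (the `parallel_copy` helper name and file naming `file0001`–`file1000` are illustrative assumptions) is to compute each thread's block of 100 filenames and copy it in the background:

```shell
# parallel_copy SRC DST: copy file0001..file1000 from SRC to DST using
# ten background cp threads of 100 files each (a sketch; names assumed)
parallel_copy() {
  local src=$1 dst=$2 i start n files
  for i in 0 1 2 3 4 5 6 7 8 9; do
    start=$((i * 100 + 1))
    files=""
    for n in $(seq "$start" $((start + 99))); do
      files="$files $src/$(printf 'file%04d' "$n")"
    done
    # word splitting of $files is intentional: one cp per 100-file block
    cp $files "$dst" &
  done
  wait   # block until all ten copy threads finish
}
# Usage: parallel_copy /mnt/source /mnt/destination1
```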
After you have enough parallel threads going against a single destination filesystem mount point, there will be a point where adding more threads does not give more throughput. (Throughput will be measured in files/second or bytes/second, depending on your type of data.) Or worse, over-threading can sometimes cause a throughput degradation.

When this happens, you can add client-side mount points to other vFXT cluster IP addresses, using the same remote filesystem mount path:
```
10.1.1.101:/nfs on /mnt/destination1 type nfs (rw,vers=3,proto=tcp,addr=10.1.1.101)
10.1.1.102:/nfs on /mnt/destination2 type nfs (rw,vers=3,proto=tcp,addr=10.1.1.102)
10.1.1.103:/nfs on /mnt/destination3 type nfs (rw,vers=3,proto=tcp,addr=10.1.1.103)
```
Adding client-side mount points lets you fork off additional copy commands to the additional `/mnt/destination[1-3]` mount points, achieving further parallelism.

For example, if your files are very large, you might define the copy commands to use distinct destination paths, sending out more commands in parallel from the client performing the copy.
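For instance (a sketch; the filename prefixes `file0*`, `file1*`, and `file2*` are illustrative assumptions), three commands can each target a different mount point:

```shell
# Fan large-file copies out across the three destination mount points
cp /mnt/source/file0* /mnt/destination1/ &
cp /mnt/source/file1* /mnt/destination2/ &
cp /mnt/source/file2* /mnt/destination3/ &
wait   # block until all three copies finish
```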
In the example above, all three destination mount points are being targeted by these commands.

### When to add clients

Lastly, when you have reached the client's capabilities, adding more copy threads or additional mount points will not yield any additional files/sec or bytes/sec increases. In that situation, you can deploy another client with the same set of mount points that will be running its own sets of file copy processes.

Example:
Redirect this result to a file: `find . -mindepth 4 -maxdepth 4 -type d > /tmp/f`

Then you can iterate through the manifest, using BASH commands to count files and determine the sizes of the subdirectories:
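A sketch of that iteration (the `count_dirs` helper name is an illustrative assumption; the manifest is one directory path per line, as produced by the `find` command above):

```shell
# count_dirs MANIFEST: for each directory listed in MANIFEST, print the
# file count and total size
count_dirs() {
  while read -r dir; do
    files=$(find "$dir" -type f | wc -l)   # number of files beneath $dir
    size=$(du -sk "$dir" | cut -f1)        # total size in KiB
    echo "$dir: $files files, ${size}K"
  done < "$1"
}
# Usage: count_dirs /tmp/f
```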
You will get *N* resulting files, one for each of your *N* clients, containing the path names to the level-four directories obtained from the output of the `find` command.

The above will give you *N* files, each with a copy command per line, that can be run as a BASH script on the client.

The goal is to run multiple threads of these scripts concurrently on each client, in parallel across multiple clients.
## Use a two-phase rsync process

The standard ``rsync`` utility does not work well for populating cloud storage through the Avere vFXT for Azure system, because it generates a large number of file create and rename operations to guarantee data integrity. However, you can safely use the ``--inplace`` option with ``rsync`` to skip the more careful copying procedure if you follow that with a second run that checks file integrity.

A standard ``rsync`` copy operation creates a temporary file and fills it with data. If the data transfer completes successfully, the temporary file is renamed to the original filename. This method guarantees consistency even if the files are accessed during the copy. But it generates more write operations, which slows file movement through the cache.

The option ``--inplace`` writes the new file directly in its final location. Files are not guaranteed to be consistent during transfer, but that is not important if you are priming a storage system for later use.

The second ``rsync`` operation serves as a consistency check on the first. Because the files have already been copied, the second phase is a quick scan that ensures the files on the destination match the files on the source. Any files that don't match are recopied.

You can issue both phases together in one command:
This is a simple and time-effective method for datasets up to the number of files the internal directory manager can handle. (This is typically 200 million files for a three-node cluster, 500 million files for a six-node cluster, and so on.)

## Use the msrsync utility

The ``msrsync`` tool can also be used to move data to a backend core filer for the Avere cluster. This tool is designed to optimize bandwidth usage by running multiple parallel ``rsync`` processes. It is available from GitHub at <https://github.com/jbd/msrsync>.
``msrsync`` breaks up the source directory into separate “buckets” and then runs individual ``rsync`` processes on each bucket.

Preliminary testing using a four-core VM showed best efficiency when using 64 processes. Use the ``msrsync`` option ``-p`` to set the number of processes to 64.
You can also use the ``--inplace`` argument with ``msrsync`` commands. If you use this option, consider running a second command (as with the two-phase [rsync](#use-a-two-phase-rsync-process) process described above) to ensure data integrity.

``msrsync`` can only write to and from local volumes. The source and destination must be accessible as local mounts in the cluster’s virtual network.

To use ``msrsync`` to populate an Azure cloud volume with an Avere cluster, follow these instructions:

1. Install ``msrsync`` and its prerequisites (rsync and Python 2.6 or later).

1. Determine the total number of files and directories to be copied.
   For example, use the Avere utility ``prime.py`` with the command ``prime.py --directory /path/to/some/directory`` (available from <https://github.com/Azure/Avere/blob/master/src/clientapps/dataingestor/prime.py>).

   If not using ``prime.py``, you can calculate the number of items with the GNU ``find`` tool as follows:

   ```bash
   find <path> -type f | wc -l    # counts files
   find <path> -type d | wc -l    # counts directories
   ```

1. Divide the number of items by 64 to determine the number of items per process. Use this number with the ``-f`` option to set the size of the buckets when you run the command.
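As a worked example with a hypothetical item count, 11,000,000 items divided across 64 processes gives 171,875 items per bucket, which becomes the ``-f`` value:

```shell
total_items=11000000   # hypothetical count from the previous step
procs=64
bucket=$(( (total_items + procs - 1) / procs ))   # round up so no items are missed
echo "$bucket"   # → 171875
# A hypothetical invocation might then look like:
#   msrsync -p 64 -f "$bucket" --rsync "-ahv" /mnt/source /mnt/destination1
```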
The ``parallelcp`` script also can be useful for moving data to your vFXT cluster's backend storage.

The script below will add the executable `parallelcp`. (This script is designed for Ubuntu; if using another distribution, you must install ``parallel`` separately.)