---
title: Import data into Azure Modeling and Simulation Workbench
description: Learn how to import data into a chamber in Azure Modeling and Simulation Workbench.
author: becha8
ms.author: becha
ms.reviewer: becha
ms.service: modeling-simulation-workbench
ms.topic: how-to
ms.date: 08/05/2024
# Customer intent: As a Chamber User in Azure Modeling and Simulation Workbench, I want to import data into my chamber.
---

Open your web browser and go to the [Azure portal](https://portal.azure.com/).

1. Use the AzCopy command to upload your file. For example, use `azcopy copy <sourceFilePath> "<uploadURL>"`.

> [!NOTE]
> Supported characters for the file name are alphanumeric characters, underscores, periods, and hyphens. Make sure that the file name doesn't contain any spaces; a file name with spaces causes the data import to fail.
>
> The data pipeline processes only files at the root. It doesn't process subfolders. If you're importing multiple smaller files, we recommend that you zip or tarball them into a single file.
>
> Gigabyte-sized tarballs and zipped files are supported, up to a maximum of 200 GB per file. Make sure that each individual file is under this limit.

1. Confirm that the uploaded file resource with the source file name appears under **Chamber** > **Data Pipeline** > **File**.

A Chamber Admin or Chamber User can access the uploaded file from inside the chamber at the following path: */mount/datapipeline/datain*.

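As a concrete sketch of the upload step and the in-chamber check, with placeholder names (`design_data.tar.gz` is hypothetical, and `<uploadURL>` is the upload URL that you generate from the chamber's data pipeline in the portal):

```bash
# On your local machine: upload a single archive through the data pipeline.
# The file name may contain only alphanumerics, underscores, periods, and
# hyphens; spaces cause the import to fail.
azcopy copy ./design_data.tar.gz "<uploadURL>"

# Inside the chamber: imported files appear in the data-in folder.
ls -lh /mount/datapipeline/datain
```
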
> [!IMPORTANT]
> If you're importing multiple smaller files, we recommend that you zip or tarball them into a single file. Gigabyte-sized tarballs and zipped files are supported, depending on your connection type and network speed.
>
> The */mount/datapipeline/datain* directory has a capacity of 1 TB. If your imported dataset is larger than that, free up space by moving files to */mount/chamberstorages/"Workbench chamber storage"*.
>
> The */datapipeline* directory is backed by Azure Files, whereas the */chamberstorages* directory is backed by high-performance Azure NetApp Files. Always copy tools, binaries, and IP from the */mount/datapipeline/datain* folder to the */chamberstorages* directory under the specific chamber's private storage.
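
For example, a minimal sketch of both recommendations, with hypothetical file and path names (substitute your chamber's actual storage volume for `<chamberStorageName>`):

```bash
# Before uploading: bundle many small files into one tarball.
tar -czf design_data.tar.gz ./design_files/

# Inside the chamber, after import: move the archive off the 1 TB
# Azure Files pipeline share into high-performance chamber storage,
# then extract it there.
mv /mount/datapipeline/datain/design_data.tar.gz /mount/chamberstorages/<chamberStorageName>/
cd /mount/chamberstorages/<chamberStorageName>/
tar -xzf design_data.tar.gz
```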

## Next steps