In this article, you learn where to save input files and where to write output files for your experiments, to prevent storage limit errors and experiment latency.
Training jobs launched on a [compute target](../concept-compute-target.md) are isolated from outside environments. This design ensures reproducibility and portability of the experiment: if you run the same script twice, on the same or a different compute target, you receive the same results. With this design, you can treat compute targets as stateless compute resources, each having no affinity to jobs after they finish.
## Where to save input files
The storage limit for experiment snapshots is 300 MB and/or 2000 files.
For this reason, we recommend:
* **Storing your files in an Azure Machine Learning [dataset](/python/api/azureml-core/azureml.data).** This prevents experiment latency issues and has the advantage of accessing data from a remote compute target; authentication and mounting are managed by Azure Machine Learning. Learn more about how to specify a dataset as your input data source in your training script in [Train with datasets](how-to-train-with-datasets.md). A minimal sketch follows this list.
* **If you only need a couple of data files and dependency scripts and can't use a datastore,** place the files in the same directory as your training script. Specify this folder as your `source_directory` directly in your training script, or in the code that calls your training script.
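As a sketch of the first recommendation, the following passes a registered dataset to a training script with the v1 Python SDK. The dataset name `my-training-data`, the compute target `cpu-cluster`, the experiment name, and the `./src` folder are all hypothetical placeholders, not values from this article.

```python
# A minimal sketch, assuming an existing workspace config and a registered
# dataset named 'my-training-data' (both hypothetical).
from azureml.core import Dataset, Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()
dataset = Dataset.get_by_name(ws, name='my-training-data')

config = ScriptRunConfig(
    source_directory='./src',   # keep this folder small; it's snapshotted
    script='train.py',
    # Azure ML handles authentication and mounts the dataset on the compute.
    arguments=['--data', dataset.as_named_input('training_data').as_mount()],
    compute_target='cpu-cluster',  # hypothetical compute target name
)
run = Experiment(ws, 'save-input-files-demo').submit(config)
```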
To resolve this error, store your experiment files on a datastore. If you can't use a datastore, the following table offers possible alternative solutions:

Experiment description|Storage limit solution
---|---
Less than 2000 files & can't use a datastore| Override the snapshot size limit with <br> `azureml._restclient.snapshots_client.SNAPSHOT_MAX_SIZE_BYTES = 'insert_desired_size'`<br> This may take several minutes depending on the number and size of files. A sketch follows this table.
Must use specific script directory| [!INCLUDE [amlinclude-info](../../../includes/machine-learning-amlignore-gitignore.md)]
Pipeline|Use a different subdirectory for each step
Jupyter notebooks| Create a `.amlignore` file, or move your notebook into a new, empty subdirectory and run your code again.
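A minimal sketch of the snapshot size override from the first row. The 1 GB value is an arbitrary example; run the override in the same session that submits the experiment, before calling `submit`:

```python
# A minimal sketch: raise the snapshot size limit before submitting the run.
# The value is in bytes; 1 GB here is an arbitrary example.
import azureml._restclient.snapshots_client

azureml._restclient.snapshots_client.SNAPSHOT_MAX_SIZE_BYTES = 1_000_000_000
```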
## Where to write files
Due to the isolation of training experiments, changes to files that happen during jobs aren't necessarily persisted outside of your environment. If your script modifies files local to the compute, the changes aren't persisted for your next experiment job, and they aren't propagated back to the client machine automatically. Therefore, the changes made during the first experiment job don't, and shouldn't, affect those in the second.
To persist changes, we recommend writing files to storage via an Azure Machine Learning dataset with an [OutputFileDatasetConfig object](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig). See [how to create an OutputFileDatasetConfig](how-to-train-with-datasets.md#where-to-write-training-output).
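A minimal sketch of that pattern with the v1 Python SDK. The datastore path `outputdataset`, the `./src` folder, the compute target, and the experiment name are hypothetical:

```python
# A minimal sketch, assuming an existing workspace config (names hypothetical).
from azureml.core import Experiment, ScriptRunConfig, Workspace
from azureml.data import OutputFileDatasetConfig

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Files the script writes under this output land on the datastore after the run.
output = OutputFileDatasetConfig(destination=(datastore, 'outputdataset/{run-id}'))

config = ScriptRunConfig(
    source_directory='./src',
    script='train.py',
    arguments=['--output_dir', output],  # resolves to a writable path on the compute
    compute_target='cpu-cluster',        # hypothetical compute target name
)
run = Experiment(ws, 'write-output-demo').submit(config)
```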
Otherwise, write files to the `./outputs` and/or `./logs` folder.
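For the `./outputs` route, a minimal `train.py` sketch; files written under `./outputs` are uploaded to the run record when the job completes, and the metric value shown is hypothetical:

```python
# train.py -- a minimal sketch: files written under ./outputs are uploaded
# to the run record automatically when the job completes.
import json
import os

os.makedirs('outputs', exist_ok=True)
with open(os.path.join('outputs', 'metrics.json'), 'w') as f:
    json.dump({'accuracy': 0.90}, f)  # hypothetical metric value
```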
* Learn more about [accessing data from storage](how-to-access-data.md).
* Learn how to [create compute targets for model training and deployment](../how-to-create-attach-compute-studio.md).