In this article, you learn how to configure and submit Azure Machine Learning jobs to train your models. Snippets of code explain the key parts of configuration and submission of a training script. Then use one of the [example notebooks](#notebook-examples) to find full, end-to-end working examples.

When training, it's common to start on your local computer and then later scale out to a cloud-based cluster. With Azure Machine Learning, you can run your script on various compute targets without having to change your training script.

All you need to do is define the environment for each compute target within a **script job configuration**. Then, when you want to run your training experiment on a different compute target, specify the job configuration for that compute.
## Prerequisites
* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
* The [Azure Machine Learning SDK for Python (v1)](/python/api/overview/azure/ml/install) (>= 1.13.0)
* An [Azure Machine Learning workspace](../how-to-manage-workspace.md), `ws`
* A compute target, `my_compute_target`. [Create a compute target](../how-to-create-attach-compute-studio.md)
## What's a script run configuration?
A [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) is used to configure the information necessary for submitting a training job as part of an experiment.

You submit your training experiment with a ScriptRunConfig object. This object includes the:

* **source_directory**: The source directory that contains your training script
* **script**: The training script to run

The code pattern to submit a training job is the same for all types of compute targets (see the sketch after the list):
1. Create an experiment to run
1. Create an environment where the script runs
1. Create a ScriptRunConfig, which specifies the compute target and environment
1. Submit the job
1. Wait for the job to complete
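
For instance, here's a minimal sketch of that pattern with the v1 SDK; the workspace config, experiment name, source directory, compute target name, and curated environment name are all illustrative placeholders:

```python
from azureml.core import Environment, Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()  # loads the workspace from a local config.json

# 1. Create an experiment to run (the name is illustrative)
experiment = Experiment(workspace=ws, name="train-on-remote-compute")

# 2. Create an environment where the script runs (the curated environment name is illustrative)
myenv = Environment.get(workspace=ws, name="AzureML-Minimal")

# 3. Create a ScriptRunConfig, which specifies the compute target and environment
src = ScriptRunConfig(source_directory="./src",
                      script="train.py",
                      compute_target="my_compute_target",
                      environment=myenv)

# 4. Submit the job
run = experiment.submit(config=src)

# 5. Wait for the job to complete
run.wait_for_completion(show_output=True)
```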
## Create an experiment
Create an [experiment](concept-azure-machine-learning-architecture.md#experiments) in your workspace. An experiment is a lightweight container that helps to organize job submissions and keep track of code.
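
For example, a short sketch (the experiment name is illustrative):

```python
from azureml.core import Experiment

experiment_name = 'my_experiment'  # illustrative name
exp = Experiment(workspace=ws, name=experiment_name)
```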
The example code in this article assumes that you have already created a compute target, `my_compute_target` (see [Prerequisites](#prerequisites)).
## Create an environment
Azure Machine Learning [environments](../concept-environments.md) are an encapsulation of the environment where your machine learning training happens. They specify the Python packages, Docker image, environment variables, and software settings around your training and scoring scripts. They also specify runtimes (Python, Spark, or Docker).

You can either define your own environment, or use an Azure Machine Learning curated environment. [Curated environments](../how-to-use-environments.md#use-a-curated-environment) are predefined environments that are available in your workspace by default. These environments are backed by cached Docker images, which reduce the job preparation cost. See [Azure Machine Learning Curated Environments](../resource-curated-environments.md) for the full list of available curated environments.

For a remote compute target, you can start with one of the popular curated environments.
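
For example, a minimal sketch that retrieves a curated environment by name; `AzureML-Minimal` is used here only as an illustration, so check which curated environments your workspace actually offers:

```python
from azureml.core import Environment

# List the environments registered in the workspace to see what's available.
envs = Environment.list(workspace=ws)

# Retrieve a curated environment by name (the name below is illustrative).
myenv = Environment.get(workspace=ws, name="AzureML-Minimal")
```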
For more information and details about environments, see [Create & use software environments in Azure Machine Learning](../how-to-use-environments.md).
### Local compute target
If your compute target is your **local machine**, you're responsible for ensuring that all the necessary packages are available in the Python environment where the script runs. Use `python.user_managed_dependencies` to use your current Python environment (or the Python on the path you specify).
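
For example, a minimal sketch of a user-managed environment for local runs (the environment name and interpreter path are illustrative):

```python
from azureml.core import Environment

myenv = Environment("user-managed-env")
# Use the packages already installed in the Python environment that runs the script.
myenv.python.user_managed_dependencies = True

# Optionally, point at a specific interpreter instead of the current one:
# myenv.python.interpreter_path = '/home/user/miniconda3/envs/myenv/bin/python'
```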
## Create the script job configuration

Now that you have a compute target (`my_compute_target`, see [Prerequisites](#prerequisites)) and an environment (`myenv`, see [Create an environment](#create-an-environment)), create a script job configuration that runs your training script (`train.py`) located in your `project_folder` directory.
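
A minimal sketch of that configuration, followed by submitting it as an experiment run; the experiment name is illustrative:

```python
from azureml.core import Experiment, ScriptRunConfig

src = ScriptRunConfig(source_directory=project_folder,
                      script='train.py',
                      compute_target=my_compute_target,
                      environment=myenv)

# Submit the configuration as a run within an experiment and stream the logs.
run = Experiment(workspace=ws, name='train-experiment').submit(config=src)
run.wait_for_completion(show_output=True)
```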
If you don't specify an environment, a default environment will be created for you.

If you have command-line arguments you want to pass to your training script, you can specify them via the **`arguments`** parameter of the ScriptRunConfig constructor, for example, `arguments=['--arg1', arg1_val, '--arg2', arg2_val]`.

If you want to override the default maximum time allowed for the job, you can do so via the **`max_run_duration_seconds`** parameter. The system attempts to automatically cancel the job if it takes longer than this value.
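
For instance, a sketch that combines both parameters; the argument names, values, and the 30-minute limit are illustrative:

```python
from azureml.core import ScriptRunConfig

src = ScriptRunConfig(source_directory=project_folder,
                      script='train.py',
                      arguments=['--learning-rate', '0.001', '--epochs', '10'],  # illustrative arguments
                      max_run_duration_seconds=1800,  # cancel automatically after ~30 minutes
                      compute_target=my_compute_target,
                      environment=myenv)
```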
### Specify a distributed job configuration
If you want to run a [distributed training](../how-to-train-distributed-gpu.md) job, provide the distributed job-specific config to the **`distributed_job_config`** parameter. Supported config types include [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration), [TensorflowConfiguration](/python/api/azureml-core/azureml.core.runconfig.tensorflowconfiguration), and [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration).
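
For example, a minimal MPI-based sketch (the node and process counts are illustrative):

```python
from azureml.core import ScriptRunConfig
from azureml.core.runconfig import MpiConfiguration

# Run on 2 nodes with 1 process per node (illustrative values).
distr_config = MpiConfiguration(process_count_per_node=1, node_count=2)

src = ScriptRunConfig(source_directory=project_folder,
                      script='train.py',
                      compute_target=my_compute_target,
                      environment=myenv,
                      distributed_job_config=distr_config)
```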
For more information and examples on running distributed Horovod, TensorFlow, and PyTorch jobs, see:

* [Distributed training of deep learning models on Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
> When you submit the training job, a snapshot of the directory that contains your training scripts is created and sent to the compute target. It's also stored as part of the experiment in your workspace. If you change files and submit the job again, only the changed files are uploaded.

> To create artifacts during training (such as model files, checkpoints, data files, or plotted images), write these to the `./outputs` folder.
>
> Similarly, you can write any logs from your training job to the `./logs` folder. To use Azure Machine Learning's [TensorBoard integration](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/tensorboard/export-run-history-to-tensorboard/export-run-history-to-tensorboard.ipynb), make sure you write your TensorBoard logs to this folder. While your job is in progress, you can launch TensorBoard and stream these logs. Later, you can also restore the logs from any of your previous jobs.
>
> For example, to download a file written to the *outputs* folder to your local machine after your remote training job:
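>
> A minimal sketch, assuming the job wrote a file named `outputs/model.pkl` (the file name is illustrative):
>
> ```python
> # Download an artifact the job wrote to ./outputs; adjust the name to your own file.
> run.download_file(name='outputs/model.pkl', output_file_path='model.pkl')
> ```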
## Troubleshooting
* **AttributeError: 'RoundTripLoader' object has no attribute 'comment_handling'**: This error comes from the new version (v0.17.5) of `ruamel-yaml`, an `azureml-core` dependency, which introduces a breaking change to `azureml-core`. To fix this error, uninstall `ruamel-yaml` by running `pip uninstall ruamel-yaml` and install a different version of `ruamel-yaml`; the supported versions are v0.15.35 to v0.17.4 (inclusive). You can do this by running `pip install "ruamel-yaml>=0.15.35,<0.17.5"`.
* **Job fails with `jwt.exceptions.DecodeError`**: Exact error message: `jwt.exceptions.DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode()`.

Consider upgrading to the latest version of azureml-core: `pip install -U azureml-core`.

If you're running into this issue for local jobs, check the version of PyJWT installed in the environment where you're starting jobs. The supported versions of PyJWT are < 2.0.0. Uninstall PyJWT from the environment if the version is >= 2.0.0. You can check the version of PyJWT, uninstall it, and install the right version as follows:

1. Start a command shell and activate the conda environment where azureml-core is installed.
2. Enter `pip freeze` and look for `PyJWT`. If found, the version listed should be < 2.0.0.
3. If the listed version isn't supported, run `pip uninstall PyJWT` in the command shell and enter `y` for confirmation.
4. Install using `pip install 'PyJWT<2.0.0'`.

If you're submitting a user-created environment with your job, consider using the latest version of azureml-core in that environment. Versions >= 1.18.0 of azureml-core already pin PyJWT < 2.0.0. If you need to use a version of azureml-core < 1.18.0 in the environment you submit, make sure to specify PyJWT < 2.0.0 in your pip dependencies.
* **ModuleErrors (No module named)**: If you're running into ModuleErrors while submitting experiments in Azure Machine Learning, the training script expects a package that hasn't been added to the environment. Once you provide the package name, Azure Machine Learning installs the package in the environment used for your training job.

If you're using Estimators to submit experiments, you can specify a package name via the `pip_packages` or `conda_packages` parameter of the estimator, depending on which source you want to install the package from. You can also specify a YAML file with all your dependencies using `conda_dependencies_file`, or list all your pip requirements in a text file using the `pip_requirements_file` parameter. If you have your own Azure Machine Learning Environment object with which you want to override the default image used by the estimator, specify that environment via the `environment` parameter of the estimator constructor.

Azure Machine Learning-maintained Docker images and their contents can be seen in [Azure Machine Learning Containers](https://github.com/Azure/AzureML-Containers).

Framework-specific dependencies are listed in the respective framework documentation.