articles/machine-learning/concept-compute-target.md
+39 −1 lines changed (39 additions & 1 deletion)
@@ -34,7 +34,45 @@ The compute resources you use for your compute targets are attached to a [worksp
As you scale up your training on larger datasets or perform [distributed training](how-to-train-distributed-gpu.md), use Azure Machine Learning compute to create a single- or multi-node cluster that autoscales each time you submit a job. You can also attach your own compute resource, although support for different scenarios might vary.
**Compute targets can be reused from one training job to the next.** For example, after you attach a remote VM to your workspace, you can reuse it for multiple jobs.
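A minimal sketch of creating such an autoscaling cluster with the v2 Python SDK (azure-ai-ml); this is not the article's own sample, and the subscription, resource group, workspace, and cluster names are placeholders:

```python
# Sketch: provision an autoscaling Azure Machine Learning compute cluster (SDK v2).
# All identifiers below are placeholders, not values from the article.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# min_instances=0 lets the cluster scale down to zero nodes between jobs,
# then scale back up each time you submit a job.
cluster = AmlCompute(
    name="cpu-cluster",
    size="Standard_DS3_v2",
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=120,
)
ml_client.begin_create_or_update(cluster).result()
```

Because the compute target is a named workspace resource, later jobs can reuse it simply by referring to `cpu-cluster`.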
:::moniker range="azureml-api-1"
For machine learning pipelines, use the appropriate [pipeline step](/python/api/azureml-pipeline-steps/azureml.pipeline.steps) for each compute target.
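For example, a minimal sketch with the v1 SDK (azureml-pipeline-steps), assuming an existing workspace and a compute target named "cpu-cluster"; the script and folder names are placeholders:

```python
# Sketch: a v1 pipeline step bound to a specific compute target.
# Names ("cpu-cluster", "./src", "train.py") are placeholders.
from azureml.core import Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

# Each step can name its own compute target, so different stages of a
# pipeline can run on different hardware.
train_step = PythonScriptStep(
    name="train",
    script_name="train.py",
    source_directory="./src",
    compute_target="cpu-cluster",
)

pipeline = Pipeline(workspace=ws, steps=[train_step])
```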
:::moniker-end
You can use any of the following resources for a training compute target for most jobs. Not all resources can be used for automated machine learning, machine learning pipelines, or designer. Azure Databricks can be used as a training resource for local runs and machine learning pipelines, but not as a remote target for other training.
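Whichever resource you choose, a training job targets it by name. A minimal sketch with the v2 SDK, where the environment and compute names are placeholders for resources already registered in your workspace:

```python
# Sketch: point a command job at an existing training compute target (SDK v2).
# "<environment-name>@<version>" and "cpu-cluster" are placeholders.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

job = command(
    code="./src",                       # local folder containing train.py
    command="python train.py",
    environment="<environment-name>@<version>",
    compute="cpu-cluster",              # the training compute target
)
ml_client.jobs.create_or_update(job)
```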
> The compute instance has a 120-GB OS disk. If you run out of disk space, [use the terminal](~/articles/machine-learning/how-to-access-terminal.md) to clear at least 1-2 GB before you [stop or restart](~/articles/machine-learning/how-to-manage-compute-instance.md#manage) the compute instance.
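If you'd rather check remaining space from a notebook than from the terminal, a quick standard-library sketch:

```python
# Sketch: report free space on the compute instance OS disk before cleanup.
import shutil

total, used, free = shutil.disk_usage("/")
print(f"OS disk: {free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB "
      f"({used / 2**30:.1f} GiB used)")
```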