articles/machine-learning/concept-automated-ml.md
With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. The ONNX runtime supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Try an example of this flow [in this Jupyter notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).

## <a name="local-remote"></a>Guidance on local vs. remote compute targets

With automated ML in the Azure Machine Learning Python SDK, you choose either a local compute target or a remote compute target to train your models.

More features are available when you use a remote compute target, as shown in the following table. However, when you use a remote compute target, such as an Azure Machine Learning compute cluster, factor in setup time: internal infrastructure preparation adds around 1.5 minutes per child run, plus additional minutes to provision the cluster if its VMs are not yet up and running.

If your scenario involves initial exploration or demos with small data and short training runs (that is, seconds or a couple of minutes per child run), training on your local computer might be the better choice.

However, if you are training on larger datasets, as in production training that produces models needing longer runs, a remote compute target gives much better end-to-end performance, because automated ML parallelizes training runs across the cluster's nodes.

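A back-of-envelope sketch makes this trade-off concrete. Only the ~1.5-minute per-child-run setup figure comes from the text above; the run counts, training times, cluster spin-up time, and node count below are purely illustrative assumptions:

```python
import math

# Illustrative wall-clock comparison for an automated ML experiment.
# Assumptions (made up for illustration, except the ~1.5 min setup figure):
# 20 child runs, a 4-node cluster, and ~5 min of cluster spin-up.

def local_minutes(n_runs: int, train_min: float) -> float:
    """Local target: no setup overhead, but child runs execute sequentially."""
    return n_runs * train_min

def remote_minutes(n_runs: int, train_min: float, nodes: int,
                   setup_per_run: float = 1.5, cluster_spinup: float = 5.0) -> float:
    """Remote target: per-run setup plus spin-up, but runs execute in parallel."""
    waves = math.ceil(n_runs / nodes)  # batches of child runs across the nodes
    return cluster_spinup + waves * (setup_per_run + train_min)

# Quick demo, short trains: local wins (10.0 vs 15.0 minutes).
short_local = local_minutes(20, 0.5)
short_remote = remote_minutes(20, 0.5, nodes=4)

# Production-style long trains: remote wins (600.0 vs 162.5 minutes).
long_local = local_minutes(20, 30)
long_remote = remote_minutes(20, 30, nodes=4)
```

With short runs the fixed overhead dominates, while with long runs the 4-way parallelism more than pays for it, which is the crossover the guidance above describes.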
| | Best for | Pros (Advantages) | Cons (Handicaps) |
|---------|---------|---------|---------|
| Local compute target | Small data, quick demos | No environment setup time; infrastructure resources (your PC or VM) are directly available | Subset of features <br/> Can't parallelize runs <br/> Not suited for large data; no data streaming while training <br/> No DNN-based featurization <br/> Python SDK only |
| Remote compute target | Larger data, production data, production training | Full set of features <br/> Parallelized child runs <br/> Large data support <br/> DNN-based featurization <br/> Dynamic scalability of the compute cluster on demand <br/> No-code experience (web UI) also available in Azure Machine Learning studio | Startup time for cluster nodes <br/> Startup time for each child run |

The following table shows a summary of features available on local and remote compute targets. Some of these features are available only in an Enterprise workspace.
| Feature engineering customization UI | ✓ |||
| Prophet or ARIMA models for forecasting | ✓ || ✓ |
| Multiple runs/iterations in parallel | ✓ || ✓ |
| Continue a run | ✓ |||
| Create and run experiments in studio web experience | ✓ || ✓ |