articles/machine-learning/concept-automated-ml.md
The ONNX runtime also supports C#, so you can use models built automatically in your C# apps without recoding or the network latency that REST endpoints introduce. Learn more about [inferencing ONNX models with the ONNX runtime C# API](https://github.com/Microsoft/onnxruntime/blob/master/docs/CSharp_API.md).

## <a name="local-remote"></a>Guidance on local vs. remote Azure ML compute targets

When using Azure Automated ML (with the Python SDK), you choose either a local compute or a remote compute target to train your models.

* **Local compute**: Training occurs on your local laptop or VM.
* **Remote compute**: Training occurs on Azure Machine Learning managed compute clusters.
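
In the Python SDK, the local vs. remote choice above comes down to whether you pass a `compute_target` to `AutoMLConfig`. The following is a minimal, hedged sketch rather than a complete recipe: the cluster name `cpu-cluster`, the example data URL, the `label` column name, and the experiment name are all placeholders for illustration, and running it requires an Azure ML workspace.

```python
from azureml.core import Workspace, Experiment, Dataset
from azureml.core.compute import ComputeTarget
from azureml.train.automl import AutoMLConfig

# Requires a workspace config.json downloaded from the Azure portal.
ws = Workspace.from_config()

# Remote target: an existing Azure ML managed compute cluster.
# "cpu-cluster" is a placeholder name for illustration.
compute_target = ComputeTarget(workspace=ws, name="cpu-cluster")

# Placeholder tabular dataset; substitute your own data source.
train_data = Dataset.Tabular.from_delimited_files(
    path="https://example.com/train.csv"
)

automl_config = AutoMLConfig(
    task="classification",
    training_data=train_data,
    label_column_name="label",       # placeholder label column
    compute_target=compute_target,   # omit this argument to train locally
    iterations=20,
)

run = Experiment(ws, "automl-demo").submit(automl_config)
run.wait_for_completion()
```

Omitting the `compute_target` argument keeps training on your local machine, with the feature limitations summarized in the table below.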

More features are available when you use a remote compute, as shown in the table below. However, when you use a remote compute, you'll need to factor in setup time: internal infrastructure preparation adds around 1.5 minutes per child run, plus additional minutes to bring up the cluster infrastructure if the VMs are not yet running.

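
The setup-time overhead described above is easy to estimate before you choose a target. This is a back-of-envelope sketch: the per-child-run figure comes from the paragraph above, the cluster spin-up time is an assumption you would measure for your own VM size and cluster settings, and the sketch ignores any wall-clock savings from parallelizing child runs across nodes.

```python
# Approximate infrastructure preparation time per child run on a
# remote compute target, per the guidance above (~1.5 minutes).
PREP_MINUTES_PER_CHILD_RUN = 1.5

def estimated_overhead_minutes(child_runs: int,
                               cluster_spinup_minutes: float = 0.0) -> float:
    """Extra minutes a remote target adds versus local compute.

    cluster_spinup_minutes is the time to bring cluster VMs up from
    zero nodes (an assumption; 0 if the cluster is already running).
    """
    return cluster_spinup_minutes + child_runs * PREP_MINUTES_PER_CHILD_RUN

# e.g. 20 child runs on a cold cluster that takes ~5 minutes to start:
print(estimated_overhead_minutes(20, cluster_spinup_minutes=5))  # 35.0
```

For short local-scale experiments this overhead can dominate total runtime, which is why small, quick runs favor local compute.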
If your scenario involves initial exploration or demos using small data and short training runs (that is, seconds or a couple of minutes per child run), training on your local computer might be the better choice.
But if you are training with larger datasets, as in production training, a remote compute target is likely the better choice.
| | Best for | Pros (advantages) | Cons (disadvantages) |
|---------|---------|---------|---------|
| Local compute target | Small data, quick demos | No environment setup time; infrastructure resources (your PC or VM) are directly available | Subset of features <br/> Can't parallelize runs <br/> Worse for large data; no data streaming while training <br/> No DNN-based featurization <br/> Python SDK only |
| Remote Azure ML compute target | Larger data, production data, production training | Full set of features <br/> Parallelize child runs <br/> Large data support <br/> DNN-based featurization <br/> Dynamic scalability of compute cluster on demand <br/> No-code experience (web UI) also available in Azure ML | Start-up time for cluster nodes <br/> Start-up time for each child run |
The following table shows a summary of features available on local and remote compute targets. Some of these features are available only in an Enterprise workspace.