
Commit b1e1792: Switch to table, add links

1 parent 761e373

File tree: 2 files changed, +36 -39 lines

articles/machine-learning/concept-automated-ml.md

Lines changed: 33 additions & 38 deletions
@@ -142,47 +142,40 @@ See the [how-to](how-to-configure-auto-train.md#ensemble) for changing default e
 
 With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. The ONNX runtime supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Try an example of this flow [in this Jupyter notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).
 
-## Local and remote compute targets
-
-With the Python SDK, you can train your models on either a local computer or a remote compute target. More features are available when you use the remote compute. If you have small data and want to get started quickly, training on your local computer is your best choice. When you use a remote compute, such as Azure Managed compute clusters, factor in setup time:
-* Around 1 minute for each node in the cluster
-* Around 1.5 minutes for each child-run environment setup
+## <a name="local-remote"></a>Local and remote compute targets
 
+With Azure automated ML, you choose either a local computer or a remote compute target to train your models. More features are available when you use remote compute. If you have small data and want to get started quickly, training on your local computer is your best choice. When you use a remote compute target, such as an Azure Machine Learning compute cluster, factor in setup time: around 2 minutes per child run, plus additional minutes to bring up the cluster infrastructure if it isn't already running.
 
 | | Best for | Pros (Advantages) |Cons (Handicaps) |
 |---------|---------|---------|---------|
-|Local compute target | Small data, getting started, quick demo | Infrastructure resources are directly available, no environment setup time <br/> | Subset of features available for remote compute targets <br/> Cannot parallelize |
-|[Azure Machine Learning compute cluster](../articles/machine-learning/how-to-set-up-training-targets.md#amlcompute) | Larger data, production data, production training | Full set of features <br/> Scale out to multiple child runs <br/> Dynamic scalability of compute cluster based on demand | Start-up time for cluster nodes, start up for each child run |
-
-
-Use either a **local or a remote target** to:
-* Create and run experiments in notebooks
-* Register and visualize experiment's info and metrics in UI
-* Forecasting support
-* Create ONNX models
-* Ensemble iterations
-* Use subsampling
-* Apply data guardrails
-* View model interpretability in notebooks
-
-Not sure where these go yet:
-* Prophet or ARIMA models for forecasting
-* Feature Sweeping (or advanced transformers)
-* Custom featurizers support
-* Feature engineering customization SDK
-* Many models training SDK
-
-Switch to a **remote target** to add more flexibility and features:
-
-* Multiple runs/iterations running in parallel
-* Cancel an iteration
-* Continue a run
-* Data streaming (Large data support, up to 100GB)
-* DNN-based text featurization
-* Feature engineering customization UI
-
-@@Not sure this goes here...?
-You also use a remote compute target in the studio to build a [no-code automated machine learning](how-to-use-automated-ml-for-ml-models.md) and view [model intrpretability](@@Do we have an article that shows this?).
+|Local compute target | Small data, getting started, quick demos | No environment setup time, infrastructure resources are directly available | Subset of features <br/> Can't parallelize <br/> Can't use large data <br/> No DNN-based featurization <br/> Python SDK only |
+|Remote compute target| Larger data, production data, production training | Full set of features <br/> Parallelize child runs <br/> Large data support <br/> DNN-based featurization <br/> Dynamic scalability of compute cluster on demand <br/> No-code experience also available in studio | Start-up time for cluster nodes <br/> Start-up time for each child run |
+
+The following table summarizes the features available on local and remote compute targets. Some of these features are available only in an Enterprise workspace.
+
+|Feature |Local |Remote |Requires Enterprise |
+|---------|---------|---------|---------|
+|Create and run experiments in notebooks |✓|✓| |
+|Register and visualize experiment's info and metrics in UI |✓|✓| |
+|Forecasting |✓|✓| |
+|ONNX models |✓|✓| |
+|Ensemble iterations |✓|✓| |
+|Subsampling |✓|✓| |
+|Data guardrails |✓|✓| |
+|Model interpretability (in notebooks) |✓|✓| |
+|Feature Sweeping (or advanced transformers) | ? | ? | |
+|Custom featurizers support | ? | ? | |
+|Feature engineering customization SDK | ? | ? | |
+|Many models training SDK | ? | ? | |
+|Prophet or ARIMA models for forecasting | |✓| |
+|Multiple runs/iterations in parallel | |✓| |
+|Cancel an iteration | |✓| |
+|Continue a run | |✓| |
+|Data streaming (Large data support, up to 100 GB) | |✓| |
+|DNN-based text featurization | |✓| |
+|Feature engineering customization UI | |✓| |
+|Create and run experiments in studio web experience | |✓| |
 
 ## Automated ML in Azure Machine Learning

@@ -249,7 +242,9 @@ Resume run| ✓|
 
 See examples and learn how to build models using automated machine learning:
 
-+ Follow the [Tutorial: Automatically train a regression model with Azure Machine Learning](tutorial-auto-train-models.md)
++ Follow the [Tutorial: Automatically train a regression model with Azure Machine Learning](tutorial-auto-train-models.md)
+
++ Learn how to use a [remote compute target](how-to-auto-train-remote.md)
 
 + Configure the settings for automatic training experiment:
 + In Azure Machine Learning studio, [use these steps](how-to-use-automated-ml-for-ml-models.md).
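The local-versus-remote choice described in the new table surfaces in code as the compute target handed to the automated ML configuration. The sketch below is illustrative only and not part of this commit; it assumes the v1 Azure ML Python SDK, and the workspace config, registered dataset name, label column, and cluster name `cpu-cluster` are hypothetical placeholders.

```python
# Illustrative sketch, not part of this commit: how the local-versus-remote
# choice shows up in the v1 Azure ML Python SDK. Dataset name, label column,
# and cluster name are hypothetical placeholders.
from azureml.core import Workspace, Experiment, Dataset
from azureml.core.compute import ComputeTarget
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
training_data = Dataset.get_by_name(ws, "bank-marketing")  # hypothetical registered dataset

# Local compute: no cluster setup time, but child runs execute one at a time.
local_config = AutoMLConfig(
    task="classification",
    training_data=training_data,
    label_column_name="y",
    primary_metric="AUC_weighted",
    iterations=10,
)

# Remote compute: the same experiment pointed at an existing compute cluster, so
# child runs can execute in parallel; ONNX-compatible models can also be requested.
remote_config = AutoMLConfig(
    task="classification",
    training_data=training_data,
    label_column_name="y",
    primary_metric="AUC_weighted",
    iterations=30,
    compute_target=ComputeTarget(workspace=ws, name="cpu-cluster"),
    max_concurrent_iterations=4,
    enable_onnx_compatible_models=True,
)

# Submit whichever configuration fits the scenario in the table above.
run = Experiment(ws, "automl-local-vs-remote").submit(remote_config, show_output=True)
```

With the remote configuration, `max_concurrent_iterations` is what lets several child runs execute in parallel across cluster nodes; on a local target the iterations run sequentially.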

articles/machine-learning/how-to-auto-train-remote.md

Lines changed: 3 additions & 1 deletion
@@ -26,7 +26,9 @@ In this article, you learn how to build a model using automated ML with AmlCompu
 
 ## How does remote differ from local?
 
-The tutorial "[Train a classification model with automated machine learning](tutorial-auto-train-models.md)" teaches you how to use a local computer to train a model with automated ML. The workflow when training locally also applies to remote targets as well. However, with remote compute, automated ML experiment iterations are executed asynchronously. This functionality allows you to cancel a particular iteration, watch the status of the execution, or continue to work on other cells in the Jupyter notebook. To train remotely, you first create a remote compute target such as AmlCompute. Then you configure the remote resource and submit your code there.
+More features are available when you use a remote compute target. For more details, see [Local and remote compute targets](concept-automated-ml.md#local-remote).
+
+The tutorial "[Train a classification model with automated machine learning](tutorial-auto-train-models.md)" teaches you how to use a local computer to train a model with automated ML. The workflow for training locally also applies to remote targets. To train remotely, you first create a remote compute target such as AmlCompute. Then you configure the remote resource and submit your code there.
 
 This article shows the extra steps needed to run an automated ML experiment on a remote AmlCompute target. The workspace object, `ws`, from the tutorial is used throughout the code here.
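Those extra steps amount to provisioning (or attaching to) an AmlCompute cluster and passing it to the automated ML configuration. The sketch below is illustrative only and not part of this commit; it assumes the v1 Azure ML Python SDK, the `ws` workspace object from the tutorial, and a hypothetical cluster name, VM size, dataset name, and label column.

```python
# Illustrative sketch, not part of this commit: provision (or reuse) an AmlCompute
# cluster and point an automated ML run at it. Cluster name, VM size, dataset
# name, and label column are hypothetical; `ws` is the workspace from the tutorial.
from azureml.core import Dataset, Experiment
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException
from azureml.train.automl import AutoMLConfig

cluster_name = "automl-compute"
try:
    # Reuse the cluster if it already exists in the workspace.
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
except ComputeTargetException:
    # Otherwise create it; it scales between 0 and 4 nodes on demand.
    provisioning_config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_DS3_V2", min_nodes=0, max_nodes=4
    )
    compute_target = ComputeTarget.create(ws, cluster_name, provisioning_config)
    compute_target.wait_for_completion(show_output=True)

training_data = Dataset.get_by_name(ws, "nyc-taxi-sample")  # hypothetical registered dataset

automl_config = AutoMLConfig(
    task="regression",
    training_data=training_data,
    label_column_name="totalAmount",  # hypothetical label column
    primary_metric="normalized_root_mean_squared_error",
    compute_target=compute_target,
    max_concurrent_iterations=4,
)

# Child runs execute asynchronously on the cluster; the notebook stays responsive.
remote_run = Experiment(ws, "automl-remote").submit(automl_config, show_output=False)
```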
