articles/machine-learning/azure-machine-learning-release-notes.md (16 additions, 2 deletions)
@@ -25,6 +25,20 @@ See [the list of known issues](resource-known-issues.md) to learn about known bugs and workarounds.
**azureml-automl-runtime**
+ AutoML Forecasting now lets customers forecast beyond the pre-specified maximum horizon without retraining the model. When the forecast destination is farther into the future than the specified maximum horizon, the forecast() function still makes point predictions out to the later date using a recursive operation mode. For an illustration of the new feature, see the "Forecasting farther than the maximum horizon" section of the "forecasting-forecast-function" notebook in [this folder](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning).
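  A minimal sketch of calling forecast() past the trained horizon, assuming `fitted_model` is the fitted model returned by an AutoML forecasting run; the destination date is illustrative and not taken from the release notes:

  ```python
  # Hedged sketch: forecast past the trained maximum horizon without retraining.
  # `fitted_model` is assumed to come from an AutoML forecasting run; the
  # destination timestamp below is illustrative.
  import pandas as pd

  # A destination later than training_end + max_horizon is reached through the
  # recursive operation mode internally; no retraining is needed.
  forecast_destination = pd.Timestamp("2020-06-30")

  # forecast() returns the point predictions and the transformed feature frame.
  y_pred, X_trans = fitted_model.forecast(forecast_destination=forecast_destination)
  ```
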
**azureml-pipeline-steps**
+ ParallelRunStep is now released and is part of azureml-pipeline-steps. The existing ParallelRunStep in the azureml-contrib-pipeline-steps package is deprecated. Changes from the public preview version (see the configuration sketch after this list):
  + Added an optional, configurable `run_max_try` parameter to control the maximum number of calls to the `run()` method for any given batch; the default value is 3.
  + PipelineParameters are no longer auto-generated. The following configurable values can be set explicitly as PipelineParameter:
    + mini_batch_size
    + node_count
    + process_count_per_node
    + logging_level
    + run_invocation_timeout
    + run_max_try
  + The default value for process_count_per_node has changed to 1. Users should tune this value for better performance; the best practice is to set it to the number of GPUs or CPUs the node has.
  + ParallelRunStep does not inject any packages; users need to include the **azureml-core** and **azureml-dataprep[pandas, fuse]** packages in the environment definition. If a custom Docker image is used with user_managed_dependencies, conda needs to be installed on the image.
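  The following is a minimal configuration sketch of these changes, not an official example: the compute target, entry script, dataset, and other names are illustrative assumptions.

  ```python
  # Hedged sketch of a GA ParallelRunStep configuration; compute target, entry
  # script, and dataset names are illustrative assumptions.
  from azureml.core import Environment
  from azureml.core.conda_dependencies import CondaDependencies
  from azureml.pipeline.core import PipelineData, PipelineParameter
  from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep

  # ParallelRunStep no longer injects packages, so azureml-core and
  # azureml-dataprep[pandas, fuse] must be declared in the environment.
  env = Environment(name="batch-env")
  env.python.conda_dependencies = CondaDependencies.create(
      pip_packages=["azureml-core", "azureml-dataprep[pandas, fuse]"]
  )

  parallel_run_config = ParallelRunConfig(
      source_directory="scripts",
      entry_script="batch_score.py",      # assumed user script
      environment=env,
      compute_target=compute_target,      # assumed existing AmlCompute target
      node_count=2,
      process_count_per_node=1,           # default is now 1; tune toward GPUs/CPUs per node
      # Configurable values can be exposed explicitly as PipelineParameter:
      mini_batch_size=PipelineParameter(name="mini_batch_size", default_value="5"),
      error_threshold=10,
      output_action="append_row",
      run_invocation_timeout=60,
      run_max_try=3,                      # max run() calls per mini batch
  )

  output_scores = PipelineData(name="scores")  # written to the default datastore

  parallelrun_step = ParallelRunStep(
      name="batch-scoring",
      parallel_run_config=parallel_run_config,
      inputs=[input_dataset.as_named_input("input_ds")],  # assumed registered FileDataset
      output=output_scores,
      allow_reuse=True,
  )
  ```

  Exposing mini_batch_size as a PipelineParameter means the value can be overridden at submission time without rebuilding or republishing the pipeline.
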
**Preview features**
+ [Contrib features below]
@@ -63,8 +77,8 @@ See [the list of known issues](resource-known-issues.md) to learn about known bugs and workarounds.
+ Added support for Windows services in ManagedInferencing
+ Removed old MIR workflows such as attaching MIR compute and the SingleModelMirWebservice class, and cleaned out the model profiling placed in the contrib-mir package

**azureml-contrib-pipeline-steps**
+ Quick fix for ParallelRunStep where loading from YAML was broken
+ ParallelRunStep is released to General Availability: azureml.contrib.pipeline.steps carries a deprecation notice and has moved to azureml.pipeline.steps. New features include: 1. Datasets as PipelineParameter, 2. a new run_max_try parameter, 3. a configurable append_row output file name (a sketch of these additions follows this list)
+ Minor fix for YAML support
+ ParallelRunStep is released to General Availability: azureml.contrib.pipeline.steps carries a deprecation notice and has moved to azureml.pipeline.steps
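  A hedged sketch of the new options, reusing `parallel_run_config` and `output_scores` from the earlier sketch; the dataset and experiment names are illustrative assumptions, and the append_row output file name is assumed to be the `append_row_file_name` setting on ParallelRunConfig.

  ```python
  # Hedged sketch of the GA additions; dataset and experiment names are
  # illustrative, and parallel_run_config / output_scores come from the sketch above.
  from azureml.core import Dataset, Experiment, Workspace
  from azureml.data.dataset_consumption_config import DatasetConsumptionConfig
  from azureml.pipeline.core import Pipeline, PipelineParameter
  from azureml.pipeline.steps import ParallelRunStep

  ws = Workspace.from_config()

  # 1. Datasets as PipelineParameter: the scoring input can be swapped per submission.
  default_ds = Dataset.get_by_name(ws, name="scoring-data")   # assumed registered FileDataset
  ds_param = PipelineParameter(name="input_dataset", default_value=default_ds)
  named_input = DatasetConsumptionConfig("input_ds", ds_param).as_mount()

  # 2. run_max_try and 3. the append_row output file name (assumed parameter:
  # append_row_file_name) are configured on ParallelRunConfig, as sketched earlier.

  step = ParallelRunStep(
      name="batch-scoring",
      parallel_run_config=parallel_run_config,
      inputs=[named_input],
      output=output_scores,
  )

  pipeline = Pipeline(workspace=ws, steps=[step])
  run = Experiment(ws, "parallelrun-ga").submit(
      pipeline, pipeline_parameters={"input_dataset": default_ds}
  )
  ```
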