Commit eb4ffca (parent 4aadc69)

Update azure-machine-learning-release-notes.md
1 file changed: +10 -10 lines

articles/machine-learning/v1/azure-machine-learning-release-notes.md

Lines changed: 10 additions & 10 deletions
@@ -50,7 +50,7 @@ __RSS feed__: Get notified when this page is updated by copying and pasting the
 
 ### Azure Machine Learning SDK for Python v1.49.0
 + **Breaking changes**
-+ Starting with v1.49.0 and above, the following AutoML algorithms will not be supported.
++ Starting with v1.49.0 and above, the following AutoML algorithms won't be supported.
 + Regression: FastLinearRegressor, OnlineGradientDescentRegressor
 + Classification: AveragedPerceptronClassifier.
 + Use v1.48.0 or below to continue using these algorithms.
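To keep those algorithms available, the environment can be pinned to the last release that still ships them, for example with a requirements pin (a config sketch; the exact set of `azureml-*` packages to pin depends on your environment):

```text
azureml-sdk==1.48.0
```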
@@ -60,7 +60,7 @@ __RSS feed__: Get notified when this page is updated by copying and pasting the
 + **azureml-contrib-automl-dnn-forecasting**
 + Nonscalar metrics for TCNForecaster will now reflect values from the last epoch.
 + Forecast horizon visuals for train-set and test-set are now available while running the TCN training experiment.
-+ Runs will not fail anymore because of "Failed to calculate TCN metrics" error. The warning message that says "Forecast Metric calculation resulted in error, reporting back worst scores" will still be logged. Instead we raise exception when we face inf/nan validation loss for more than two times consecutively with a message "Invalid Model, TCN training did not converge.". The customers need be aware of the fact that loaded models may return nan/inf values as predictions while inferencing after this change.
++ Runs will not fail anymore because of "Failed to calculate TCN metrics" error. The warning message that says "Forecast Metric calculation resulted in error, reporting back worst scores" will still be logged. Instead we raise exception when we face inf/nan validation loss for more than two times consecutively with a message "Invalid Model, TCN training didn't converge.". The customers need be aware of the fact that loaded models may return nan/inf values as predictions while inferencing after this change.
 + **azureml-core**
 + Azure Machine Learning workspace creation makes use of Log Analytics Based Application Insights in preparation for deprecation of Classic Application Insights. Users wishing to use Classic Application Insights resources can still specify their own to bring when creating an Azure Machine Learning workspace.
 + **azureml-interpret**
@@ -96,7 +96,7 @@ __RSS feed__: Get notified when this page is updated by copying and pasting the
 + Added model serializer and pyfunc model to azureml-responsibleai package for saving and retrieving models easily
 + **azureml-train-automl-runtime**
 + Added docstring for ManyModels Parameters and HierarchicalTimeSeries Parameters
-+ Fixed bug where generated code does not do train/test splits correctly.
++ Fixed bug where generated code doesn't do train/test splits correctly.
 + Fixed a bug that was causing forecasting generated code training jobs to fail.
 
 ## 2022-10-25
@@ -117,7 +117,7 @@ __RSS feed__: Get notified when this page is updated by copying and pasting the
 + **azureml-automl-dnn-nlp**
 + Customers will no longer be allowed to specify a line in CoNLL, which only comprises with a token. The line must always either be an empty newline or one with exactly one token followed by exactly one space followed by exactly one label.
 + **azureml-contrib-automl-dnn-forecasting**
-+ There is a corner case where samples are reduced to 1 after the cross validation split but sample_size still points to the count before the split and hence batch_size ends up being more than sample count in some cases. In this fix we initialize sample_size after the split
++ There's a corner case where samples are reduced to 1 after the cross validation split but sample_size still points to the count before the split and hence batch_size ends up being more than sample count in some cases. In this fix we initialize sample_size after the split
 + **azureml-core**
 + Added deprecation warning when inference customers use CLI/SDK v1 model deployment APIs to deploy models and also when Python version is 3.6 and less.
 + The following values of `AZUREML_LOG_DEPRECATION_WARNING_ENABLED` change the behavior as follows:
@@ -223,7 +223,7 @@ __RSS feed__: Get notified when this page is updated by copying and pasting the
 + Now OutputDatasetConfig is supported as the input of the MM/HTS pipeline builder. The mappings are: 1) OutputTabularDatasetConfig -> treated as unpartitioned tabular dataset. 2) OutputFileDatasetConfig -> treated as filed dataset.
 + **azureml-train-automl-runtime**
 + Added data validation that requires the number of minority class samples in the dataset to be at least as much as the number of CV folds requested.
-+ Automatic cross-validation parameter configuration is now available for AutoML forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and AutoML will provide those configurations base on your data. However, currently this feature is not supported when TCN is enabled.
++ Automatic cross-validation parameter configuration is now available for AutoML forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and AutoML will provide those configurations base on your data. However, currently this feature isn't supported when TCN is enabled.
 + Forecasting Parameters in Many Models and Hierarchical Time Series can now be passed via object rather than using individual parameters in dictionary.
 + Enabled forecasting model endpoints with quantiles support to be consumed in Power BI.
 + Updated AutoML scipy dependency upper bound to 1.5.3 from 1.5.2
@@ -245,7 +245,7 @@ This breaking change comes from the June release of `azureml-inference-server-ht
 + **azureml-interpret**
 + updated azureml-interpret package to interpret-community 0.25.0
 + **azureml-pipeline-core**
-+ Do not print run detail anymore if `pipeline_run.wait_for_completion` with `show_output=False`
++ Don't print run detail anymore if `pipeline_run.wait_for_completion` with `show_output=False`
 + **azureml-train-automl-runtime**
 + Fixes a bug that would cause code generation to fail when the azureml-contrib-automl-dnn-forecasting package is present in the training environment.
 + Fix error when using a test dataset without a label column with AutoML Model Testing.
@@ -2387,7 +2387,7 @@ Azure Machine Learning is now a resource provider for Event Grid, you can config
 + [**azureml-datadrift**](/python/api/azureml-datadrift)
 + Moved from `azureml-contrib-datadrift` into `azureml-datadrift`
 + Added support for monitoring time series datasets for drift and other statistical measures
-+ New methods `create_from_model()` and `create_from_dataset()` to the [`DataDriftDetector`](/python/api/azureml-datadrift/azureml.datadrift.datadriftdetector%28class%29) class. The `create()` method will be deprecated.
++ New methods `create_from_model()` and `create_from_dataset()` to the [`DataDriftDetector`](/python/api/azureml-datadrift/azureml.datadrift.datadriftdetector%28class%29) class. The `create()` method is deprecated.
 + Adjustments to the visualizations in Python and UI in the Azure Machine Learning studio.
 + Support weekly and monthly monitor scheduling, in addition to daily for dataset monitors.
 + Support backfill of data monitor metrics to analyze historical data for dataset monitors.
@@ -2867,7 +2867,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
 
 + **New features**
 + You can now request to execute specific inspectors (for example, histogram, scatter plot, etc.) on specific columns.
-+ Added a parallelize argument to `append_columns`. If True, data will be loaded into memory but execution will run in parallel; if False, execution is streaming but single-threaded.
++ Added a parallelize argument to `append_columns`. If True, data is loaded into memory but execution will run in parallel; if False, execution is streaming but single-threaded.
 
 ## 2019-07-23
 
@@ -2891,7 +2891,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
 + Forecasting now allows different frequencies in train and test sets if they can be aligned. For example, "quarterly starting in January" and at "quarterly starting in October" can be aligned.
 + The property "parameters" was added to the TimeSeriesTransformer.
 + Remove old exception classes.
-+ In forecasting tasks, the `target_lags` parameter now accepts a single integer value or a list of integers. If the integer was provided, only one lag will be created. If a list is provided, the unique values of lags will be taken. target_lags=[1, 2, 2, 4] will create lags of one, two and four periods.
++ In forecasting tasks, the `target_lags` parameter now accepts a single integer value or a list of integers. If the integer was provided, only one lag is created. If a list is provided, the unique values of lags will be taken. target_lags=[1, 2, 2, 4] will create lags of one, two and four periods.
 + Fix the bug about losing columns types after the transformation (bug linked);
 + In `model.forecast(X, y_query)`, allow y_query to be an object type containing None(s) at the begin (#459519).
 + Add expected values to `automl` output
@@ -2919,7 +2919,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
 + Add support for token authentication in AKS webservices.
 + Add `get_token()` method to `Webservice` objects.
 + Added CLI support to manage machine learning datasets.
-+ `Datastore.register_azure_blob_container` now optionally takes a `blob_cache_timeout` value (in seconds) which configures blobfuse's mount parameters to enable cache expiration for this datastore. The default is no timeout, such as when a blob is read, it stays in the local cache until the job is finished. Most jobs prefer this setting, but some jobs need to read more data from a large dataset than will fit on their nodes. For these jobs, tuning this parameter helps them succeed. Take care when tuning this parameter: setting the value too low can result in poor performance, as the data used in an epoch may expire before being used again. All reads will be done from blob storage/network rather than the local cache, which negatively impacts training times.
++ `Datastore.register_azure_blob_container` now optionally takes a `blob_cache_timeout` value (in seconds) which configures blobfuse's mount parameters to enable cache expiration for this datastore. The default is no timeout, such as when a blob is read, it stays in the local cache until the job is finished. Most jobs prefer this setting, but some jobs need to read more data from a large dataset than will fit on their nodes. For these jobs, tuning this parameter helps them succeed. Take care when tuning this parameter: setting the value too low can result in poor performance, as the data used in an epoch may expire before being used again. All reads are done from blob storage/network rather than the local cache, which negatively impacts training times.
 + Model description can now properly be updated after registration
 + Model and Image deletion now provides more information about upstream objects that depend on them, which causes the delete to fail
 + Improve resource utilization of remote runs using azureml.mlflow.
