+ Use v1.48.0 or below to continue using these algorithms.
@@ -60,7 +60,7 @@ __RSS feed__: Get notified when this page is updated by copying and pasting the
+ **azureml-contrib-automl-dnn-forecasting**
+ Nonscalar metrics for TCNForecaster will now reflect values from the last epoch.
+ Forecast horizon visuals for the train set and test set are now available while running the TCN training experiment.
+ Runs no longer fail with the "Failed to calculate TCN metrics" error. The warning "Forecast Metric calculation resulted in error, reporting back worst scores" is still logged. Instead, an exception with the message "Invalid Model, TCN training didn't converge." is raised when the validation loss is inf/nan more than two times consecutively. Be aware that, after this change, loaded models may return nan/inf values as predictions during inferencing.
+ **azureml-core**
+ Azure Machine Learning workspace creation now uses Log Analytics based Application Insights in preparation for the deprecation of Classic Application Insights. Users who want to keep using Classic Application Insights resources can still bring their own when creating a workspace (see the sketch below).
+ **azureml-interpret**
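For the azureml-core workspace-creation note above, here is a minimal sketch of bringing an existing Application Insights resource, assuming SDK v1; the subscription, resource group, and resource IDs are placeholders:

```python
from azureml.core import Workspace

# Placeholder resource ID for an existing (Classic) Application Insights instance.
existing_app_insights = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "microsoft.insights/components/<my-app-insights>"
)

# Omitting app_insights lets workspace creation provision the new
# Log Analytics based Application Insights resource by default.
ws = Workspace.create(
    name="my-workspace",
    subscription_id="<sub-id>",
    resource_group="<rg>",
    location="eastus",
    app_insights=existing_app_insights,
)
```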
@@ -96,7 +96,7 @@ __RSS feed__: Get notified when this page is updated by copying and pasting the
+ Added a model serializer and pyfunc model to the azureml-responsibleai package for saving and retrieving models easily.
+ **azureml-train-automl-runtime**
+ Added docstrings for ManyModels Parameters and HierarchicalTimeSeries Parameters.
+ Fixed a bug where generated code doesn't do train/test splits correctly.
+ Fixed a bug that was causing forecasting generated code training jobs to fail.
## 2022-10-25
@@ -117,7 +117,7 @@ __RSS feed__: Get notified when this page is updated by copying and pasting the
+ **azureml-automl-dnn-nlp**
+ Customers are no longer allowed to specify a CoNLL line that consists of only a token. Every line must be either an empty newline or a line with exactly one token, followed by exactly one space, followed by exactly one label.
+ **azureml-contrib-automl-dnn-forecasting**
+ Fixed a corner case where samples are reduced to 1 after the cross-validation split but sample_size still reflects the count before the split, so batch_size could end up larger than the sample count. sample_size is now initialized after the split.
+ **azureml-core**
+ Added a deprecation warning when inference customers use CLI/SDK v1 model deployment APIs to deploy models, and also when the Python version is 3.6 or lower.
+ The following values of `AZUREML_LOG_DEPRECATION_WARNING_ENABLED` change the behavior as follows:
@@ -223,7 +223,7 @@ __RSS feed__: Get notified when this page is updated by copying and pasting the
+ OutputDatasetConfig is now supported as the input of the MM/HTS pipeline builder. The mappings are: 1) OutputTabularDatasetConfig -> treated as an unpartitioned tabular dataset. 2) OutputFileDatasetConfig -> treated as a file dataset.
+ **azureml-train-automl-runtime**
+ Added data validation that requires the number of minority class samples in the dataset to be at least as large as the number of CV folds requested.
+ Automatic cross-validation parameter configuration is now available for AutoML forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and AutoML provides those configurations based on your data. However, this feature isn't currently supported when TCN is enabled (see the sketch below).
+ Forecasting parameters in Many Models and Hierarchical Time Series can now be passed via an object rather than as individual entries in a dictionary.
+ Enabled forecasting model endpoints with quantiles support to be consumed in Power BI.
+ Updated the AutoML scipy dependency upper bound to 1.5.3 from 1.5.2.
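A minimal sketch of the automatic cross-validation configuration mentioned above, assuming SDK v1; the dataset, compute target, and column names are placeholders, and the placement of `cv_step_size` on `ForecastingParameters` is an assumption:

```python
from azureml.automl.core.forecasting_parameters import ForecastingParameters
from azureml.train.automl import AutoMLConfig

forecasting_parameters = ForecastingParameters(
    time_column_name="date",
    forecast_horizon=14,
    cv_step_size="auto",  # let AutoML choose the step size between folds
)

automl_config = AutoMLConfig(
    task="forecasting",
    training_data=train_dataset,          # placeholder TabularDataset
    label_column_name="demand",
    primary_metric="normalized_root_mean_squared_error",
    n_cross_validations="auto",           # let AutoML choose the number of folds
    forecasting_parameters=forecasting_parameters,
    compute_target=compute_target,        # placeholder compute target
)
```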
@@ -245,7 +245,7 @@ This breaking change comes from the June release of `azureml-inference-server-ht
+ **azureml-interpret**
+ Updated the azureml-interpret package to interpret-community 0.25.0.
+ **azureml-pipeline-core**
+ `pipeline_run.wait_for_completion` no longer prints run details when called with `show_output=False` (see the sketch below).
+ **azureml-train-automl-runtime**
+ Fixed a bug that would cause code generation to fail when the azureml-contrib-automl-dnn-forecasting package is present in the training environment.
+ Fixed an error when using a test dataset without a label column with AutoML Model Testing.
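For reference, a minimal sketch of the quieter `wait_for_completion` behavior noted above; the experiment name and the `pipeline` object are placeholders:

```python
from azureml.core import Experiment, Workspace

ws = Workspace.from_config()
pipeline_run = Experiment(ws, "my-pipeline-experiment").submit(pipeline)

# With show_output=False, no run detail is printed while waiting;
# the call blocks and returns the final status.
status = pipeline_run.wait_for_completion(show_output=False)
print(status)
```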
@@ -2387,7 +2387,7 @@ Azure Machine Learning is now a resource provider for Event Grid, you can config
+ Moved from `azureml-contrib-datadrift` into `azureml-datadrift`.
+ Added support for monitoring time series datasets for drift and other statistical measures.
+ Added new methods `create_from_model()` and `create_from_dataset()` to the [`DataDriftDetector`](/python/api/azureml-datadrift/azureml.datadrift.datadriftdetector%28class%29) class. The `create()` method is deprecated (see the sketch below).
+ Adjustments to the visualizations in Python and UI in the Azure Machine Learning studio.
+ Support weekly and monthly monitor scheduling, in addition to daily for dataset monitors.
+ Support backfill of data monitor metrics to analyze historical data for dataset monitors.
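A rough sketch of the dataset-based entry point named above; the argument order, keyword names, and values are assumptions for illustration, not a verified signature:

```python
from azureml.core import Dataset, Workspace
from azureml.datadrift import DataDriftDetector

ws = Workspace.from_config()
baseline = Dataset.get_by_name(ws, "sales-baseline")  # placeholder dataset names
target = Dataset.get_by_name(ws, "sales-scoring")

# Assumed arguments: a baseline dataset to compare against, a target dataset
# to monitor, and a monitoring frequency.
monitor = DataDriftDetector.create_from_dataset(
    ws, "sales-drift-monitor", baseline, target, frequency="Week"
)
```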
@@ -2867,7 +2867,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
+ **New features**
+ You can now request to execute specific inspectors (for example, histogram or scatter plot) on specific columns.
+ Added a parallelize argument to `append_columns`. If True, data is loaded into memory but execution runs in parallel; if False, execution is streaming but single-threaded (see the sketch below).
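A minimal sketch of the new `parallelize` argument, assuming the Azure Machine Learning Data Prep SDK (`azureml-dataprep`) is installed; the file paths are placeholders:

```python
import azureml.dataprep as dprep

base = dprep.read_csv("data/base.csv")    # placeholder paths
extra = dprep.read_csv("data/extra.csv")

# parallelize=True: data is loaded into memory and execution runs in parallel.
# parallelize=False: execution streams the data on a single thread.
combined = base.append_columns([extra], parallelize=True)
profile = combined.get_profile()
```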
## 2019-07-23
@@ -2891,7 +2891,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
+ Forecasting now allows different frequencies in train and test sets if they can be aligned. For example, "quarterly starting in January" and "quarterly starting in October" can be aligned.
+ The property "parameters" was added to the TimeSeriesTransformer.
+ Removed old exception classes.
+ In forecasting tasks, the `target_lags` parameter now accepts a single integer value or a list of integers. If an integer is provided, only one lag is created. If a list is provided, the unique values of the lags are taken: target_lags=[1, 2, 2, 4] creates lags of one, two, and four periods.
+ Fixed a bug where column types were lost after the transformation (bug linked).
+ In `model.forecast(X, y_query)`, allow y_query to be an object type containing None(s) at the beginning (#459519) (see the sketch below).
+ Add expected values to `automl` output.
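A minimal sketch of the `y_query` behavior described above; `fitted_model` and `X_test` are placeholders for a trained AutoML forecasting model and its test features:

```python
import numpy as np

# Known actuals go at the beginning; None marks the periods to forecast.
# Using dtype=object allows None entries alongside floats.
y_query = np.array([145.0, 152.0, None, None], dtype=object)

y_pred, X_trans = fitted_model.forecast(X_test, y_query)
```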
@@ -2919,7 +2919,7 @@ At the time, of this release, the following browsers are supported: Chrome, Fire
+ Add support for token authentication in AKS webservices.
+ Add `get_token()` method to `Webservice` objects.
+ Added CLI support to manage machine learning datasets.
+ `Datastore.register_azure_blob_container` now optionally takes a `blob_cache_timeout` value (in seconds), which configures blobfuse's mount parameters to enable cache expiration for this datastore. The default is no timeout, meaning that when a blob is read, it stays in the local cache until the job is finished. Most jobs prefer this setting, but some jobs need to read more data from a large dataset than fits on their nodes. For these jobs, tuning this parameter helps them succeed. Take care when tuning this parameter: setting the value too low can result in poor performance, because the data used in an epoch may expire before being used again. In that case, all reads are done from blob storage/network rather than the local cache, which negatively impacts training times (see the sketch below).
+ Model description can now properly be updated after registration.
+ Model and Image deletion now provides more information about upstream objects that depend on them and cause the delete to fail.
+ Improve resource utilization of remote runs using azureml.mlflow.
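A minimal sketch of registering a blob datastore with the `blob_cache_timeout` option described above; the storage account, key, and container names are placeholders:

```python
from azureml.core import Datastore, Workspace

ws = Workspace.from_config()

# blob_cache_timeout is in seconds. Here, cached blobs expire after 30 minutes,
# so jobs that stream more data than fits on a node keep reclaiming cache space.
datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="training_blobs",
    container_name="training-data",
    account_name="<storage-account-name>",
    account_key="<account-key>",
    blob_cache_timeout=1800,
)
```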