- **Azure Resource Health** - Anyone with reader permissions or above can see Azure health alerts, as well as configure personalized alerts via email, SMS, etc. See [Create Service Health Alerts](/azure/service-health/alerts-activity-log-service-notifications-portal)
- **Email** - email notifications are automatically sent to subscription owners. Any individual with reader permissions may however configure their own alerts by following the guidance above.

## Model availability
@@ -73,14 +68,6 @@ Be aware of the following:

 1. For example if `gpt-35-turbo 0125` or `gpt-4o (2024-05-13)` is updated to a future version, or
 2. for model family changes beyond version updates, such as when moving from `gpt-4 1106-preview` to `gpt-4o (2024-05-13)`.

-### Who is notified of upcoming retirements
-
-Azure OpenAI notifies customers via two methods:
-
-- **Azure Resource Health** - Anyone with reader permissions or above can see Azure health alerts, as well as configure personalized alerts via email, SMS, etc. See [Create Service Health Alerts](/azure/service-health/alerts-activity-log-service-notifications-portal)
-- **Email** - email notifications are automatically sent to subscription owners. Any individual with reader permissions may however configure their own alerts by following the guidance above.
-
 ## How to get ready for model retirements and version upgrades

 To prepare for model retirements and version upgrades, we recommend that customers test their applications with the new models and versions and evaluate their behavior. We also recommend that customers update their applications to use the new models and versions before the retirement date.
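One low-effort way to do that evaluation is to run a representative set of prompts against both your current deployment and a test deployment of the newer model version, then compare the outputs. The following is a minimal sketch that assumes the `openai` Python package (v1 or later) and two existing Azure OpenAI deployments; the deployment names, API version, and prompt are placeholders to replace with your own.

```python
import os
from openai import AzureOpenAI  # pip install openai

# Hypothetical deployment names; replace with your current and candidate deployments.
DEPLOYMENTS = ["gpt-4o-current", "gpt-4o-candidate"]
PROMPTS = ["Summarize the benefits of model version upgrades in one sentence."]

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # assumption: use an API version your resource supports
)

for deployment in DEPLOYMENTS:
    for prompt in PROMPTS:
        response = client.chat.completions.create(
            model=deployment,  # in Azure OpenAI, `model` is the deployment name
            messages=[{"role": "user", "content": prompt}],
        )
        # Compare these outputs (manually or with your own evaluation harness)
        # before switching production traffic to the newer version.
        print(f"[{deployment}] {response.choices[0].message.content}")
```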
@@ -91,6 +78,17 @@ For information on the model upgrade process, see [How to upgrade to a new model

 For more information on how to manage model upgrades and migrations for provisioned deployments, see [Managing models on provisioned deployment types](../how-to/working-with-models.md#managing-models-on-provisioned-deployment-types)

+## Current models
+
+> [!NOTE]
+> Not all models go through a deprecation period prior to retirement. Some models/versions only have a retirement date.
+>
+> **Fine-tuned models** are subject to a [different](#fine-tuned-models) deprecation and retirement schedule from their equivalent base model.
+
+These models are currently available for use in Azure OpenAI.
 To track individual updates to this article refer to the [Git History](https://github.com/MicrosoftDocs/azure-ai-docs/commits/main/articles/ai-services/openai/includes/retirement/models.md)
articles/ai-services/openai/how-to/batch.md (3 additions & 1 deletion)
@@ -5,7 +5,7 @@ description: Learn how to use global batch with Azure OpenAI
 author: mrbullwinkle
 ms.author: mbullwin
 manager: nitinme
-ms.date: 05/28/2025
+ms.date: 06/19/2025
 ms.service: azure-ai-openai
 ms.topic: how-to
 ms.custom:
@@ -232,6 +232,8 @@ When a job failure occurs, you'll find details about the failure in the `errors`
 |`empty_batch` | Please check your input file to ensure that the custom ID parameter is unique for each request in the batch.|
 |`model_mismatch`| The Azure OpenAI model deployment name that was specified in the `model` property of this request in the input file doesn't match the rest of the file.<br><br>Please ensure that all requests in the batch point to the same Azure OpenAI in Azure AI Foundry Models model deployment in the `model` property of the request.|
 |`invalid_request`| The schema of the input line is invalid or the deployment SKU is invalid. <br><br>Please ensure the properties of the request in your input file match the expected input properties, and that the Azure OpenAI deployment SKU is `globalbatch` for batch API requests.|
+| `input_modified` | Blob input has been modified after the batch job has been submitted. |
+| `input_no_permissions` | It's not possible to access the input blob. Please check [permissions](./role-based-access-control.md) and network access between the Azure OpenAI account and Azure Storage account. |
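Several of these failures, such as `model_mismatch`, duplicate custom IDs, and malformed lines, can be caught locally before you submit the job. The following sketch is not part of the Batch API; it only assumes your input follows the documented JSONL format, with a `custom_id` and a `body.model` deployment name on each line.

```python
import json

def validate_batch_file(path: str) -> list[str]:
    """Best-effort local checks for an Azure OpenAI batch input JSONL file."""
    problems = []
    custom_ids = set()
    models = set()

    with open(path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, start=1):
            if not line.strip():
                continue
            try:
                request = json.loads(line)
            except json.JSONDecodeError:
                problems.append(f"line {line_no}: not valid JSON")
                continue

            custom_id = request.get("custom_id")
            if custom_id in custom_ids:
                problems.append(f"line {line_no}: duplicate custom_id '{custom_id}'")
            custom_ids.add(custom_id)

            # The `model` property must name the same deployment on every line.
            models.add(request.get("body", {}).get("model"))

    if not custom_ids:
        problems.append("file contains no requests (would fail with empty_batch)")
    if len(models) > 1:
        problems.append(f"multiple deployments referenced {models} (would fail with model_mismatch)")
    return problems

issues = validate_batch_file("batch_input.jsonl")  # placeholder file name
print(issues or "No obvious problems found.")
```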
articles/ai-services/speech-service/includes/release-notes/release-notes-stt.md (8 additions & 2 deletions)
@@ -9,13 +9,19 @@ ms.custom: references_regions

 ### June 2025 release

+#### Improved pronunciation assessment model
+
+We've rolled out significant upgrades to the pronunciation assessment models for `ta-IN` and `ms-MY`. You'll see a noticeable jump in Pearson Correlation Coefficients (PCC), which means more precise and dependable evaluations.
+
+These updated models are ready to use through the API and the Azure AI Foundry playground, just like before.
+
 #### Improved speech to text models

 Accuracy of speech to text models in [fast transcription](../../fast-transcription-create.md) for `de-DE`, `en-US`, `en-GB`, `es-ES`, `es-MX`, `fr-FR`, `it-IT`, `ja-JP`, `ko-KR`, `pt-BR`, and `zh-CN` locales is improved by 10%-25%, particularly with improved readability and recognition of entities.

 ### May 2025 release

 #### Improved speech to text models
 Accuracy of speech to text models for `ta-IN`, `te-IN`, `en-IN`, and `hu-HU` locales is improved by 5-10 percent. We also approximate a 20x reduction in ghost words for the `ta-IN` and `te-IN` models.

 #### Fast transcription API - Multi-lingual speech transcription

@@ -30,7 +36,7 @@ Fast transcription now supports additional locales including fi-FI, he-IL, id-ID

 We are excited to announce substantial improvements to our pronunciation assessment models for these locales: `de-DE`, `es-MX`, `it-IT`, `ja-JP`, `ko-KR`, and `pt-BR`. These enhancements bring significant advancements in Pearson Correlation Coefficients (PCC), ensuring more accurate and reliable assessments.

 As before, the models are available through the API and Azure AI Foundry playground.
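As a hedged illustration of calling pronunciation assessment for one of the upgraded locales, the sketch below uses the Speech SDK for Python (`azure-cognitiveservices-speech`). The key, region, audio file, and reference text are placeholders; see the pronunciation assessment how-to for the authoritative walkthrough.

```python
import azure.cognitiveservices.speech as speechsdk  # pip install azure-cognitiveservices-speech

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SPEECH_KEY", region="YOUR_SPEECH_REGION")
speech_config.speech_recognition_language = "ta-IN"  # one of the upgraded locales

audio_config = speechsdk.audio.AudioConfig(filename="sample-ta-IN.wav")  # placeholder audio file

# Assess the spoken audio against a known reference text on the 0-100 grading scale.
pronunciation_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="உங்கள் குறிப்பு உரை இங்கே",  # placeholder Tamil reference text
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    enable_miscue=True,
)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pronunciation_config.apply_to(recognizer)

result = recognizer.recognize_once_async().get()
assessment = speechsdk.PronunciationAssessmentResult(result)
print("Accuracy:", assessment.accuracy_score,
      "Fluency:", assessment.fluency_score,
      "Completeness:", assessment.completeness_score)
```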
articles/machine-learning/concept-automated-ml.md (1 addition & 1 deletion)
@@ -171,7 +171,7 @@ Enable this setting with:
 Automated machine learning supports ensemble models, which are enabled by default. Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. The ensemble iterations appear as the final iterations of your job. Automated machine learning uses both voting and stacking ensemble methods for combining models:

 * **Voting**: Predicts based on the weighted average of predicted class probabilities (for classification tasks) or predicted regression targets (for regression tasks).
-* **Stacking**: Combines heterogenous models and trains a meta-model based on the output from the individual models. The current default meta-models are LogisticRegression for classification tasks and ElasticNet for regression/forecasting tasks.
+* **Stacking**: Combines heterogeneous models and trains a meta-model based on the output from the individual models. The current default meta-models are LogisticRegression for classification tasks and ElasticNet for regression/forecasting tasks.

 The [Caruana ensemble selection algorithm](http://www.niculescu-mizil.org/papers/shotgun.icml04.revised.rev2.pdf) with sorted ensemble initialization is used to decide which models to use within the ensemble. At a high level, this algorithm initializes the ensemble with up to five models with the best individual scores, and verifies that these models are within 5% threshold of the best score to avoid a poor initial ensemble. Then for each ensemble iteration, a new model is added to the existing ensemble and the resulting score is calculated. If a new model improved the existing ensemble score, the ensemble is updated to include the new model.
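As an illustration of the selection loop described above, here's a short, framework-free sketch of greedy ensemble selection with sorted initialization. It only mirrors the idea in the paragraph, not the AutoML implementation; the scoring callable is a stand-in you'd supply.

```python
def caruana_ensemble_selection(models, score_ensemble, max_iterations=15,
                               init_size=5, init_tolerance=0.05):
    """Greedy ensemble selection in the spirit of Caruana et al. (illustrative sketch).

    models: list of candidate models, each already fitted.
    score_ensemble: callable taking a list of models and returning a validation
        score on held-out data (higher is better).
    """
    # Sorted initialization: seed the ensemble with up to `init_size` of the best
    # individual models, keeping only those within the tolerance of the best score.
    ranked = sorted(models, key=lambda m: score_ensemble([m]), reverse=True)
    best_single = score_ensemble([ranked[0]])
    ensemble = [m for m in ranked[:init_size]
                if score_ensemble([m]) >= best_single * (1 - init_tolerance)]

    best_score = score_ensemble(ensemble)
    for _ in range(max_iterations):
        # Try adding each candidate (with replacement) and keep the best improvement.
        candidate_scores = [(score_ensemble(ensemble + [m]), m) for m in models]
        new_score, new_model = max(candidate_scores, key=lambda pair: pair[0])
        if new_score <= best_score:
            break  # no candidate improves the ensemble; stop
        ensemble.append(new_model)
        best_score = new_score

    return ensemble, best_score
```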
articles/machine-learning/how-to-responsible-ai-insights-ui.md (1 addition & 1 deletion)
@@ -111,7 +111,7 @@ Component parameters for real-life interventions use causal analysis. Do the fol
 1. **Target feature (required)**: Choose the outcome you want the causal effects to be calculated for.
 1. **Treatment features (required)**: Choose one or more features that you're interested in changing ("treating") to optimize the target outcome.
 1. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This field is pre-loaded for you based on your dataset metadata.
-1. **Advanced settings**: Specify additional parameters for your causal analysis, such as heterogenous features (that is, additional features to understand causal segmentation in your analysis, in addition to your treatment features) and which causal model you want to be used.
+1. **Advanced settings**: Specify additional parameters for your causal analysis, such as heterogeneous features (that is, additional features to understand causal segmentation in your analysis, in addition to your treatment features) and which causal model you want to be used.
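These UI settings correspond to the causal configuration in the open-source `responsibleai` package that the dashboard is built on. The following is a rough sketch only; the column names and model are placeholders, and keyword arguments can vary between package versions.

```python
from responsibleai import RAIInsights  # pip install responsibleai

# `model` is a fitted estimator; train_df/test_df are pandas DataFrames containing the target column.
rai_insights = RAIInsights(
    model=model,
    train=train_df,
    test=test_df,
    target_column="outcome",          # target feature (required)
    task_type="classification",
    categorical_features=["region"],  # categorical features
)

# Treatment features are what you consider changing; heterogeneity features
# segment the estimated causal effects (the "advanced settings" above).
rai_insights.causal.add(
    treatment_features=["discount"],
    heterogeneity_features=["region"],
)

rai_insights.compute()
causal_results = rai_insights.causal.get()  # inspect results, or view them in the dashboard
```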
articles/machine-learning/how-to-use-pipeline-component.md (1 addition & 1 deletion)
@@ -90,7 +90,7 @@ To access components in Azure Machine Learning studio, you need to register the
 You reference pipeline components as child jobs in a pipeline job just like you reference other types of components. You can provide runtime settings like `default_datastore` and `default_compute` at the pipeline job level.

-You need to promote any parameters you want to change during runtime as pipeline job inputs. Otherwise, they're hard-coded in the pipeline component. Promoting compute definition to a pipeline level input supports heterogenous pipelines that can use different compute targets in different steps.
+You need to promote any parameters you want to change during runtime as pipeline job inputs. Otherwise, they're hard-coded in the pipeline component. Promoting compute definition to a pipeline level input supports heterogeneous pipelines that can use different compute targets in different steps.

 To submit the pipeline job, edit the `cpu-cluster` in the `default_compute` section before you run the `az ml job create -f pipeline.yml` command.
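For comparison, a rough sketch of the same idea with the Python SDK (v2) is shown below. The component file, input names, and compute targets are assumptions rather than values from this article, and binding a plain string pipeline input to `node.compute` may depend on your `azure-ai-ml` version.

```python
from azure.ai.ml import Input, load_component
from azure.ai.ml.dsl import pipeline

# Hypothetical pipeline component file; replace with your own local or registered component.
train_pipeline_component = load_component(source="./components/train_pipeline_component.yml")

@pipeline()
def parent_pipeline(training_data: Input, step_compute: str = "cpu-cluster"):
    node = train_pipeline_component(input_data=training_data)
    # Compute is promoted to a pipeline-level input, so callers can retarget this
    # step (for example, to a GPU cluster) without editing the component itself.
    node.compute = step_compute
    return {"trained_model": node.outputs.model}

pipeline_job = parent_pipeline(
    training_data=Input(type="uri_folder", path="azureml:my-training-data:1"),  # placeholder data asset
    step_compute="gpu-cluster",  # placeholder compute target for this step
)
pipeline_job.settings.default_compute = "cpu-cluster"
# Submit with: ml_client.jobs.create_or_update(pipeline_job)
```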