articles/ai-services/document-intelligence/concept-add-on-capabilities.md (+1 −1)

@@ -298,7 +298,7 @@ Query fields are an add-on capability to extend the schema extracted from any pr
 > [!NOTE]
 >
-> Document Intelligence Studio query field extraction is currently available with the Layout and Prebuilt models starting with the `2023-10-31-preview` API and later releases.
+> Document Intelligence Studio query field extraction is currently available with the Layout and Prebuilt models starting with the `2023-10-31-preview` API and later releases, except for the `us.tax.*` models (W2, 1098, and 1099 models).
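The note above concerns the query-fields add-on, which is enabled per analyze request. As a rough illustration of how that request is parameterized, here is a stdlib-only sketch: the path segment and the `features`/`queryFields` query parameters follow the Document Intelligence REST API, but `build_analyze_url` itself is a hypothetical helper written for this example, not part of any SDK.

```python
from urllib.parse import urlencode

def build_analyze_url(endpoint, model_id, query_fields,
                      api_version="2023-10-31-preview"):
    """Compose an analyze URL with the queryFields add-on enabled.

    Hypothetical helper: illustrates the query-string shape only; it does
    not send the request or handle authentication.
    """
    params = {
        "api-version": api_version,
        "features": "queryFields",
        # Requested field names are passed as a comma-separated list.
        "queryFields": ",".join(query_fields),
    }
    return (f"{endpoint}/documentintelligence/documentModels/"
            f"{model_id}:analyze?{urlencode(params)}")

url = build_analyze_url("https://example.cognitiveservices.azure.com",
                        "prebuilt-layout", ["VendorName", "TotalDue"])
```

Per the note, a call like this against one of the `us.tax.*` prebuilt models would not be expected to work with query fields.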
articles/machine-learning/how-to-use-automated-ml-for-ml-models.md (+9 −20)

@@ -124,13 +124,14 @@ Otherwise, you see a list of your recent automated ML experiments, including th
 ------|------
 Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric).
 Enable ensemble stacking | Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. [Learn more about ensemble models](concept-automated-ml.md#ensemble).
-Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
+Blocked models| Select models you want to exclude from the training job. <br><br> Allowing models is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
 Explain best model| Automatically shows explainability on the best model created by Automated ML.
+Positive class label| Label that Automated ML will use to calculate binary metrics.
 
 1. (Optional) View featurization settings: if you choose to enable **Automatic featurization** in the **Additional configuration settings** form, default featurization techniques are applied. In the **View featurization settings**, you can change these defaults and customize accordingly. Learn how to [customize featurizations](#customize-featurization).
 
-
+
 
 1. The **[Optional] Limits** form allows you to do the following.
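The renamed **Blocked models** setting and the new **Positive class label** setting in the table above correspond to training options that can also be expressed in code. A hedged sketch of how those studio fields might map to AutoML SDK v2 argument names follows; the names `blocked_training_algorithms`, `enable_stack_ensemble`, and `positive_label` are assumptions drawn from the v2 API rather than from this article, and plain dicts stand in for the SDK objects so the snippet runs without `azure-ai-ml` installed.

```python
# Hedged sketch, not a definitive implementation: maps the studio form
# fields onto assumed AutoML SDK v2 keyword-argument names.
classification_job_settings = {
    "primary_metric": "accuracy",      # "Primary metric"
    "positive_label": "yes",           # "Positive class label": label used
                                       # to calculate binary metrics
}
training_settings = {
    "enable_stack_ensemble": True,     # "Enable ensemble stacking"
    "blocked_training_algorithms": [   # "Blocked models": excluded from
        "logistic_regression",         # the training job
        "extreme_random_trees",
    ],
}
```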
@@ -163,7 +164,7 @@ b. Provide a test dataset (preview) to evaluate the recommended model that autom
 * The test dataset shouldn't be the same as the training dataset or the validation dataset.
 * Forecasting jobs don't support train/test split.
 
-
+
 
 ## Customize featurization
@@ -173,11 +174,10 @@ The following table summarizes the customizations currently available via the st
 
 Column| Customization
 ---|---
-Included | Specifies which columns to include for training.
 Feature type| Change the value type for the selected column.
 Impute with| Select what value to impute missing values with in your data.
 
-
+
 
 ## Run experiment and view results
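The **Impute with** customization in the featurization table above fills in missing values before training. As a rough illustration of the idea only (not Azure ML's implementation), here is a small stdlib sketch; the strategy names `mean`, `most_frequent`, and `constant` are assumptions chosen for this example.

```python
from statistics import mean

def impute(values, strategy="mean", fill_value=None):
    """Replace None entries in a column, per the chosen strategy.

    Illustrative only: mimics the "Impute with" featurization setting
    with plain Python lists.
    """
    present = [v for v in values if v is not None]
    if strategy == "mean":
        fill = mean(present)               # average of observed values
    elif strategy == "most_frequent":
        fill = max(set(present), key=present.count)  # mode
    elif strategy == "constant":
        fill = fill_value                  # caller-supplied constant
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return [fill if v is None else v for v in values]
```

For example, `impute([1, None, 3])` fills the gap with the mean of the observed values, while `impute(["a", None, "b", "a"], "most_frequent")` uses the mode.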
@@ -192,24 +192,13 @@ The **Job Detail** screen opens to the **Details** tab. This screen shows you a
 The **Models** tab contains a list of the models created ordered by the metric score. By default, the model that scores the highest based on the chosen metric is at the top of the list. As the training job tries out more models, they're added to the list. Use this to get a quick comparison of the metrics for the models produced so far.
 Drill down on any of the completed models to see training job details. In the **Model** tab, you can view details like a model summary and the hyperparameters used for the selected model.
 On the Data transformation tab, you can see a diagram of what data preprocessing, feature engineering, scaling techniques and the machine learning algorithm that were applied to generate this model.
-
->[!IMPORTANT]
-> The Data transformation tab is in preview. This capability should be considered [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) and may change at any time.
+You can see model specific performance metric charts on the **Metrics** tab. [Learn more about charts](how-to-understand-automated-ml.md).
 This is also where you can find details on all the properties of the model along with associated code, child jobs, and images.
 
 ## View remote test job results (preview)
@@ -280,7 +269,7 @@ To generate a Responsible AI dashboard for a particular model,
 
-
+
 
 3. Proceed to the **Compute** page of the setup form and choose the **Serverless** option for your compute.