
Commit eaa190f

Merge branches 'patch-10' and 'patch-10' of github.com:shohei1029/azure-docs-pr into patch-10
2 parents: d5e5fbe + b9e55c9

File tree: 4 files changed (+4, -4 lines)


articles/machine-learning/how-to-migrate-from-v1.md

Lines changed: 1 addition & 1 deletion
@@ -181,7 +181,7 @@ If your team is only using Azure Machine Learning, you may consider provisioning

 ### Prototyping models

-We recommend v2 for prototyping models. You may consider using the CLI for an interactive use of Azur ML, while your model training code is Python or any other programming language. Alternatively, you may adopt a full-stack approach with Python solely using the Azure Machine Learning SDK or a mixed approach with the Azure Machine Learning Python SDK and YAML files.
+We recommend v2 for prototyping models. You may consider using the CLI for an interactive use of Azure Machine Learning, while your model training code is Python or any other programming language. Alternatively, you may adopt a full-stack approach with Python solely using the Azure Machine Learning SDK or a mixed approach with the Azure Machine Learning Python SDK and YAML files.

 ### Production model training
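The "mixed approach" the changed paragraph mentions pairs a YAML job specification with the v2 CLI while the training logic stays in Python. A minimal sketch of a command job spec — the file names, curated environment, and `cpu-cluster` compute name below are illustrative, not taken from this diff:

```yaml
# job.yml - minimal v2 command job (all names below are illustrative)
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
code: ./src                 # folder containing train.py
command: python train.py
environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
compute: azureml:cpu-cluster
```

A spec like this is typically submitted with `az ml job create --file job.yml`, which is the interactive CLI use the paragraph describes.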

articles/machine-learning/how-to-mltable.md

Lines changed: 1 addition & 1 deletion
@@ -250,7 +250,7 @@ In this scenario, Azure Machine Learning Tables, instead of Files or Folders, of

 |---------|---------|---------|---------|---------|
 |**File**<br>Reference a single file | `uri_file` | `FileDataset` | Read/write a single file - the file can have any format. | A type new to V2 APIs. In V1 APIs, files always mapped to a folder on the compute target filesystem; this mapping required an `os.path.join`. In V2 APIs, the single file is mapped. This way, you can refer to that location in your code. |
 |**Folder**<br> Reference a single folder | `uri_folder` | `FileDataset` | You must read/write a folder of parquet/CSV files into Pandas/Spark.<br><br>Deep-learning with images, text, audio, video files located in a folder. | In V1 APIs, `FileDataset` had an associated engine that could take a file sample from a folder. In V2 APIs, a Folder is a simple mapping to the compute target filesystem. |
-|**Table**<br> Reference a data table | `mltable` | `TabularDataset` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. | In V1 APIs, the Azure Machine Learning back-end stored the data materialization blueprint. This storage location meant that `TabularDataset` only worked if you had an Azure Machine Learning workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected to AzureML* - for example, local, on-premises. In V2 APIs, you'll find it easier to transition from local to remote jobs. |
+|**Table**<br> Reference a data table | `mltable` | `TabularDataset` | You have a complex schema subject to frequent changes, or you need a subset of large tabular data.<br><br>AutoML with Tables. | In V1 APIs, the Azure Machine Learning back-end stored the data materialization blueprint. This storage location meant that `TabularDataset` only worked if you had an Azure Machine Learning workspace. `mltable` stores the data materialization blueprint in *your* storage. This storage location means you can use it *disconnected to Azure Machine Learning* - for example, local, on-premises. In V2 APIs, you'll find it easier to transition from local to remote jobs. |

 ## Installing the `mltable` library

 MLTable is pre-installed on Compute Instance, Azure Machine Learning Spark, and DSVM. You can install `mltable` Python library with this code:
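The hunk ends before the install command itself (typically `pip install mltable` in the current docs). The "blueprint in *your* storage" that the changed table row describes is an `MLTable` file placed next to the data; a minimal sketch, with an illustrative file name and delimiter settings:

```yaml
# MLTable file stored in the same folder as the data (path is illustrative)
paths:
  - file: ./sample.csv
transformations:
  - read_delimited:
      delimiter: ','
      header: all_files_same_headers
```

The folder containing this file can then be loaded with `mltable.load(...)` and materialized with `to_pandas_dataframe()` — locally or in a remote job, without a workspace connection, which is the disconnected use the row highlights.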

articles/machine-learning/how-to-setup-mlops-azureml.md

Lines changed: 1 addition & 1 deletion
@@ -329,7 +329,7 @@ This step deploys the training pipeline to the Azure Machine Learning workspace

 * [Install and set up Python SDK v2](https://aka.ms/sdk-v2-install)
 * [Install and set up Python CLI v2](how-to-configure-cli.md)
-* [AzureMLOps (v2) solution accelerator](https://github.com/Azure/mlops-v2) on GitHub
+* [Azure MLOps (v2) solution accelerator](https://github.com/Azure/mlops-v2) on GitHub
 * Learn more about [Azure Pipelines with Azure Machine Learning](how-to-devops-machine-learning.md)
 * Learn more about [GitHub Actions with Azure Machine Learning](how-to-github-actions-machine-learning.md)
 * Deploy MLOps on Azure in Less Than an Hour - [Community MLOps V2 Accelerator video](https://www.youtube.com/watch?v=5yPDkWCMmtk)

articles/machine-learning/how-to-troubleshoot-online-endpoints.md

Lines changed: 1 addition & 1 deletion
@@ -681,7 +681,7 @@ These are common error codes when consuming managed online endpoints with REST r

 | 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](../azure-monitor/essentials/metrics-getting-started.md). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. |
 | 429 | Too many pending requests | Your model is getting more requests than it can handle. Azure Machine Learning allows maximum 2 * `max_concurrent_requests_per_instance` * `instance_count` requests in parallel at any time and rejects extra requests. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`, respectively. If you're using auto-scaling, this error means that your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff). Doing so can give the system time to adjust. Apart from enabling auto-scaling, you could also increase the number of instances by using the [code to calculate instance count](#how-to-calculate-instance-count). |
 | 429 | Rate-limiting | The number of requests per second reached the [limit](./how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) of managed online endpoints. |
-| 500 | Internal server error | AzureML-provisioned infrastructure is failing. |
+| 500 | Internal server error | Azure Machine Learning-provisioned infrastructure is failing. |

 #### Common error codes for kubernetes online endpoints
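The 429 row above recommends resending requests with exponential backoff so the system has time to scale. A minimal generic sketch — the `send_request` callable and its `status_code` attribute stand in for whatever HTTP client you use; nothing here is Azure-specific API:

```python
import random
import time

def call_with_backoff(send_request, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry a request on 429 responses with exponential backoff and jitter.

    `send_request` is any zero-argument callable returning an object with a
    `status_code` attribute (e.g. a wrapped `requests.post` call).
    """
    for attempt in range(max_retries + 1):
        response = send_request()
        if response.status_code != 429:
            return response
        if attempt == max_retries:
            break  # out of retries; surface the last 429 to the caller
        # Double the delay each attempt (1s, 2s, 4s, ...), cap it, add jitter
        # so many clients don't retry in lockstep.
        delay = min(base_delay * (2 ** attempt), max_delay)
        time.sleep(delay * random.uniform(0.5, 1.0))
    return response
```

Pairing a retry loop like this with auto-scaling, rather than retrying immediately, avoids amplifying the very burst that triggered the 429.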
