
Commit b192392

Author: Larry Franks
Commit message: writing/deleting
1 parent b19c3e4 commit b192392

File tree: 1 file changed, +1 -26 lines


articles/machine-learning/concept-model-management-and-deployment.md

Lines changed: 1 addition & 26 deletions
```diff
@@ -79,7 +79,7 @@ Registered models are identified by name and version. Each time you register a m
 > * When you use the **Filter by** `Tags` option on the **Models** page of Azure Machine Learning Studio, instead of using `TagName : TagValue`, use `TagName=TagValue` without spaces.
 > * You can't delete a registered model that's being used in an active deployment.
 
-For more information, see [Work with models in Azure Machine Learning](how-to-manage-models.md).
+For more information, see [Work with models in Azure Machine Learning](how-to-manage-model-cli.md).
 
 ### Package and debug models
 
@@ -160,31 +160,6 @@ Machine Learning gives you the capability to track the end-to-end audit trail of
 
 Machine Learning publishes key events to Azure Event Grid, which can be used to notify and automate on events in the machine learning lifecycle. For more information, see [Use Event Grid](how-to-use-event-grid.md).
 
-## Monitor for operational and machine learning issues
-
-Monitoring enables you to understand what data is being sent to your model, and the predictions that it returns.
-
-This information helps you understand how your model is being used. The collected input data might also be useful in training future versions of the model.
-
-For more information, see [Enable model data collection](v1/how-to-enable-data-collection.md). (Note: this feature is only available in v1.)
-
-## Retrain your model on new data
-
-Often, you'll want to validate your model, update it, or even retrain it from scratch as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](v1/how-to-monitor-datasets.md) (note: this feature is only available in v1), model performance can degrade because of:
-
-- Changes to a particular sensor.
-- Natural data changes such as seasonal effects.
-- Features shifting in their relation to other features.
-
-There's no universal answer to "How do I know if I should retrain?" The Machine Learning event and monitoring tools previously discussed are good starting points for automation. After you've decided to retrain, you should:
-
-- Preprocess your data by using a repeatable, automated process.
-- Train your new model.
-- Compare the outputs of your new model to the outputs of your old model.
-- Use predefined criteria to choose whether to replace your old model.
-
-A theme of the preceding steps is that your retraining should be automated, not improvised. [Machine Learning pipelines](concept-ml-pipelines.md) are a good answer for creating workflows that relate to data preparation, training, validation, and deployment. Read [Retrain models with Machine Learning designer](how-to-retrain-designer.md) to see how pipelines and the Machine Learning designer fit into a retraining scenario.
-
 ## Automate the machine learning lifecycle
 
 You can use GitHub and Azure Pipelines to create a continuous integration process that trains a model. In a typical scenario, when a data scientist checks a change into the Git repo for a project, Azure Pipelines starts a training job. The results of the job can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
```
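The CI scenario in the article's closing paragraph could be sketched as a minimal Azure Pipelines definition. This is only an illustrative sketch, not part of the commit: the service connection (`aml-connection`), job file (`jobs/train.yml`), resource group, and workspace names are hypothetical placeholders, and it assumes the Azure ML CLI v2 extension (`az ml job create`).

```yaml
# Hypothetical azure-pipelines.yml: submit a training job when main changes.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    displayName: Submit Azure Machine Learning training job
    inputs:
      # "aml-connection" is a placeholder service connection name.
      azureSubscription: aml-connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # Install the Azure ML CLI v2 extension, then submit the job
        # defined in jobs/train.yml (placeholder path).
        az extension add --name ml
        az ml job create --file jobs/train.yml \
          --resource-group my-rg --workspace-name my-workspace
```

After the job completes, its metrics can be inspected in the workspace to judge the trained model before any deployment step runs.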

0 commit comments