If you have been following the steps in this guide, you'll have a set of scripts that correlate to the train/score/test scripts available in the MLOpsPython repository. According to the structure mentioned above, the following steps will walk through what is needed to use these files for your own machine learning project:

1. Follow the MLOpsPython [Getting Started](https://github.com/microsoft/MLOpsPython/blob/master/docs/getting_started.md) guide
2. Follow the MLOpsPython [bootstrap instructions](https://github.com/microsoft/MLOpsPython/blob/master/bootstrap/README.md) to create your project starting point
3. Replace the Training Code
4. Replace the Score Code
5. Update the Evaluation Code

### Follow the Getting Started Guide

Following the [Getting Started](https://github.com/microsoft/MLOpsPython/blob/master/docs/getting_started.md) guide is necessary to have the supporting infrastructure and pipelines to execute MLOpsPython.

### Follow the Bootstrap Instructions

The [Bootstrap from MLOpsPython repository](https://github.com/microsoft/MLOpsPython/blob/master/bootstrap/README.md) guide will help you quickly prepare the repository for your project.

**Note:** Since the bootstrap script renames the diabetes_regression folder to the project name of your choice, we'll refer to your project as `[project name]` when paths are involved.

### Replace Training Code

Replacing the code used to train the model, and removing or replacing the corresponding unit tests, is required for the solution to work with your own code. Follow these steps:

1. Replace `[project name]/training/train.py`. This script trains your model locally or on Azure ML compute (see the sketch after this list).
1. Remove or replace the training unit tests found in `[project name]/training/test_train.py`.
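As a reference, here's a minimal sketch of the shape `train.py` takes in the diabetes_regression sample, with the work split into plain `split_data`, `train_model`, and `get_model_metrics` functions so each piece can be unit tested. The Ridge model, the `Y` label column, and the MSE metric are the sample's choices, not requirements; replace them with whatever your experiment uses.

```python
# Minimal train.py sketch following the split/train/evaluate structure used
# earlier in this guide. Column name, model, and metric are illustrative.
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split


def split_data(df):
    """Split a dataframe into train and test sets of features and labels."""
    X = df.drop("Y", axis=1).values
    y = df["Y"].values
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    return {"train": {"X": X_train, "y": y_train},
            "test": {"X": X_test, "y": y_test}}


def train_model(data, args):
    """Train the model on the training split; args holds the hyperparameters."""
    model = Ridge(**args)
    model.fit(data["train"]["X"], data["train"]["y"])
    return model


def get_model_metrics(model, data):
    """Score the model on the test split and return a dictionary of metrics."""
    predictions = model.predict(data["test"]["X"])
    return {"mse": mean_squared_error(data["test"]["y"], predictions)}
```

Keeping these functions free of Azure ML SDK calls is what makes them straightforward to cover with the unit tests in `test_train.py`.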

### Replace Score Code

For the model to provide real-time inference capabilities, the score code needs to be replaced. The MLOpsPython template uses the score code to deploy the model for real-time scoring on ACI, AKS, or Web Apps. If you want to keep the scoring capability, replace `[project name]/scoring/score.py`.
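The scoring script follows the Azure ML real-time inference contract: an `init()` function that loads the model once when the service starts, and a `run()` function that handles each request. Here's a minimal sketch; the model file name and the JSON payload shape are assumptions to adjust to your own model.

```python
# Minimal score.py sketch using the Azure ML init()/run() scoring contract.
# The model file name and the expected JSON payload shape are assumptions;
# match them to whatever your training pipeline registers.
import json
import os

import joblib
import numpy as np

model = None


def init():
    # Runs once when the scoring service starts. Azure ML mounts the
    # registered model under the AZUREML_MODEL_DIR environment variable.
    global model
    model_path = os.path.join(
        os.getenv("AZUREML_MODEL_DIR", "."), "sklearn_regression_model.pkl")
    model = joblib.load(model_path)


def run(raw_data):
    # Runs for every request: parse the JSON payload, predict, return results.
    try:
        data = np.array(json.loads(raw_data)["data"])
        return model.predict(data).tolist()
    except Exception as e:
        return {"error": str(e)}
```

The same script is used whether the template deploys to ACI or AKS; only the deployment configuration differs.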

### Update Evaluation Code

The MLOpsPython template uses the evaluate_model script to compare the performance of the newly trained model with that of the current production model, based on Mean Squared Error. If the newly trained model performs better than the current production model, the pipelines continue. Otherwise, the pipelines are canceled. To keep evaluation, replace all instances of `mse` in `[project name]/evaluate/evaluate_model.py` with the metric that you want.
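At its core, the evaluation step is a metric comparison: the metric logged for the newly trained model is compared against the metric of the model currently in production, and the run stops if the new model isn't better. The simplified sketch below shows only that comparison (the real `evaluate_model.py` retrieves both values through the Azure ML SDK). For MSE, lower is better; flip the comparison if your metric should be maximized.

```python
# Simplified sketch of the comparison at the heart of evaluate_model.py.
# The real script retrieves both metric values through the Azure ML SDK;
# here they're plain arguments with illustrative values.
from typing import Optional


def should_register_new_model(new_metric: float,
                              prod_metric: Optional[float],
                              lower_is_better: bool = True) -> bool:
    """Return True when the newly trained model beats the production model."""
    if prod_metric is None:
        # No model registered yet, so the first trained model always passes.
        return True
    if lower_is_better:
        return new_metric < prod_metric
    return new_metric > prod_metric


# Illustrative values: the new model's MSE is lower, so the pipeline continues.
if should_register_new_model(new_metric=3250.0, prod_metric=3400.0):
    print("New model is better: continue the pipeline and register it.")
else:
    print("New model is not better: cancel the pipeline run.")
```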

To disable evaluation, set the DevOps pipeline variable `RUN_EVALUATION` in `.pipelines/[project name]-variables-template.yml` to `false`.