Second, the Jupyter code needs to be refactored into functions. Refactoring code into functions makes unit testing easier and makes the code more maintainable. In this section, you'll refactor:
- The Diabetes Ridge Regression Training Notebook (`experimentation/Diabetes Ridge Regression Training.ipynb`)
- The Diabetes Ridge Regression Scoring Notebook (`experimentation/Diabetes Ridge Regression Scoring.ipynb`)
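
For example, the training notebook's cells can be gathered into small, single-purpose functions. Below is a minimal sketch of that refactoring, assuming the diabetes data is loaded as a dataframe with a `Y` label column; the function names, split ratio, and metric are illustrative rather than the repository's exact code.

```python
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split


def split_data(df: pd.DataFrame) -> dict:
    """Split the dataframe into train and test feature/label sets."""
    X = df.drop("Y", axis=1).values
    y = df["Y"].values
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    return {"train": {"X": X_train, "y": y_train},
            "test": {"X": X_test, "y": y_test}}


def train_model(data: dict, alpha: float) -> Ridge:
    """Train a ridge regression model with the given regularization strength."""
    model = Ridge(alpha=alpha)
    model.fit(data["train"]["X"], data["train"]["y"])
    return model


def get_model_metrics(model: Ridge, data: dict) -> dict:
    """Evaluate the trained model on the held-out test split."""
    predictions = model.predict(data["test"]["X"])
    return {"mse": mean_squared_error(data["test"]["y"], predictions)}
```

Once the logic lives in functions like these, each piece can be exercised by a unit test in isolation, which is the point of the refactoring.
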
The `train.py` file found in the `diabetes_regression/training` directory in the MLOpsPython repository supports command-line arguments (namely `build_id`, `model_name`, and `alpha`). You can add command-line argument support to your own `train.py` file to handle dynamic model names and `alpha` values, but it isn't necessary for the code to execute successfully.
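
If you do want that flexibility, a standard `argparse` block is enough. This is a hedged sketch; the defaults and help strings are placeholders, not the repository's exact values.

```python
import argparse


def parse_args() -> argparse.Namespace:
    """Parse the command-line arguments described above."""
    parser = argparse.ArgumentParser(description="Train the diabetes ridge regression model")
    parser.add_argument("--build_id", type=str, default="local",
                        help="ID of the build that triggered this training run")
    parser.add_argument("--model_name", type=str, default="sklearn_regression_model.pkl",
                        help="File name to save the trained model under")
    parser.add_argument("--alpha", type=float, default=0.5,
                        help="Regularization strength for ridge regression")
    return parser.parse_args()
```
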
### Create Python file for the Diabetes Ridge Regression Scoring Notebook
Convert your notebook to an executable script by running the following statement in a command prompt, which uses the nbconvert package and the path of `experimentation/Diabetes Ridge Regression Scoring.ipynb`:
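
```bash
jupyter nbconvert "experimentation/Diabetes Ridge Regression Scoring.ipynb" --to script
```

This produces a `.py` script with the same base name as the notebook; add nbconvert's `--output <name>` option if you prefer a different file name.
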
If you have been following the steps in this guide, you'll have a set of scripts that correspond to the train, score, and evaluate code in the MLOpsPython repository.

Following the getting started guide is necessary to have the supporting infrastructure and pipelines to execute MLOpsPython. We recommend deploying the MLOpsPython code as-is before adding your own code, to ensure the structure and pipelines are working properly. It's also useful to familiarize yourself with the code structure of the repository.

### Replace Training Code
Replacing the code used to train the model and removing or replacing corresponding unit tests is required for the solution to function with your own code. Follow these steps specifically:
- Replace `diabetes_regression\training\train.py`. This script trains your model locally or on the Azure ML compute.
- Remove or replace the training unit tests found in `tests/unit/code_test.py`, as sketched below.
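
As a sketch of what a replacement test might look like, assuming the `train_model(data, alpha)` signature used earlier in this guide (the sample values are arbitrary):

```python
import numpy as np

from diabetes_regression.training.train import train_model


def test_train_model():
    """The trained ridge model should produce finite predictions."""
    data = {"train": {"X": np.array([[1.0], [2.0], [3.0]]),
                      "y": np.array([2.0, 4.0, 6.0])}}
    model = train_model(data, alpha=1.2)
    predictions = model.predict(np.array([[4.0]]))
    assert np.isfinite(predictions).all()
```
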
### Replace Score Code
For the model to provide real-time inference capabilities, the score code needs to be replaced. The MLOpsPython template uses the score code to deploy the model to do real-time scoring on ACI, AKS, or Web apps. If you want to keep scoring, replace `diabetes_regression/scoring/score.py`.
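
Azure ML scoring scripts expose an `init()` function, called once when the service starts, and a `run()` function, called for each request. The sketch below shows that contract; the model file name and the JSON input schema are assumptions for illustration.

```python
import json
import os

import joblib
import numpy as np

model = None


def init():
    """Load the registered model once when the service container starts."""
    global model
    # AZUREML_MODEL_DIR points at the folder holding the registered model files.
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR", "."),
                              "sklearn_regression_model.pkl")  # assumed file name
    model = joblib.load(model_path)


def run(raw_data):
    """Score one request; raw_data is a JSON string such as '{"data": [[...]]}'."""
    data = np.array(json.loads(raw_data)["data"])
    predictions = model.predict(data)
    return predictions.tolist()
```
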
### Update Evaluation Code
The MLOpsPython template uses the evaluate_model script to compare the performance of the newly trained model and the current production model based on Mean Squared Error. If the performance of the newly trained model is better than the current production model, then the pipelines continue. Otherwise, the pipelines are stopped. To keep evaluation, replace all instances of `mse` in `diabetes_regression/evaluate/evaluate_model.py` with the metric that you want.
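
The comparison itself reduces to a small predicate. A minimal sketch (not the repository's exact `evaluate_model.py` logic), where lower MSE is better:

```python
def should_register_model(new_mse: float, production_mse: float) -> bool:
    """Return True when the newly trained model beats production on MSE."""
    return new_mse < production_mse
```

If you switch to a metric where higher is better (R², for example), remember to flip the comparison as well.
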
To disable evaluation, set the DevOps pipeline variable `RUN_EVALUATION` in `.pipelines\diabetes_regression-variables` to false.

Advance to the next article to learn how to create...

> [!div class="nextstepaction"]
> [Monitor Azure ML experiment runs and metrics](https://docs.microsoft.com/azure/machine-learning/how-to-track-experiments)
> [Monitor and collect data from ML web service endpoints](https://docs.microsoft.com/azure/machine-learning/how-to-enable-app-insight)