## Track local runs
52
52
53
-
MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts from your local runs into your Azure Machine Learning workspace.
53
+
MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts runs that were executed on your local machine into your Azure Machine Learning workspace.
54
+
55
+
### Set up tracking environment
To track a local run, you need to point your local machine to the Azure Machine Learning MLflow Tracking URI.

Import the `mlflow` and [`Workspace`](/python/api/azureml-core/azureml.core.workspace%28class%29) classes to access MLflow's tracking URI and configure your workspace.
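A minimal sketch of this setup, assuming a workspace `config.json` file is available locally for `Workspace.from_config()`, might look like the following:

```Python
import mlflow
from azureml.core import Workspace

# Load the workspace from a local config.json file (assumed to be present)
ws = Workspace.from_config()

# Point MLflow to the Azure Machine Learning tracking URI
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
```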
### Set experiment name
All MLflow runs are logged to the active experiment, which can be set with the MLflow SDK or Azure CLI.

Set the MLflow experiment name with the [`set_experiment()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_experiment) command.
```Python
experiment_name = 'experiment_with_mlflow'
mlflow.set_experiment(experiment_name)
```
### Start training run
After you set the MLflow experiment name, you can start your training run with `start_run()`. Then use `log_metric()` to activate the MLflow logging API and begin logging your training run metrics.
```Python
import os
from random import random

with mlflow.start_run() as mlflow_run:
    mlflow.log_param("hello_param", "world")
    mlflow.log_metric("hello_metric", random())
    os.system(f"echo 'hello world' > helloworld.txt")
    mlflow.log_artifact("helloworld.txt")
```
## Track remote runs
Remote runs let you train your models on more powerful computes, such as GPU enabled virtual machines.

MLflow Tracking with Azure Machine Learning lets you store the logged metrics and artifacts from your remote runs into your Azure Machine Learning workspace. Any run with MLflow Tracking code in it will have metrics logged automatically to the workspace.

First, create a `src` subdirectory and add your training code to a `train.py` file inside it. All of your training code, including `train.py`, goes into the `src` subdirectory.

The training code is taken from this [MLflow example](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/basics/src/hello-mlflow.py) in the Azure Machine Learning examples repo.

Copy this code into the file:
```Python
# imports
import os
import mlflow

from random import random

# define functions
def main():
    mlflow.log_param("hello_param", "world")
    mlflow.log_metric("hello_metric", random())
    os.system(f"echo 'hello world' > helloworld.txt")
    mlflow.log_artifact("helloworld.txt")

# run functions
if __name__ == "__main__":
    # run main function
    main()
```
Load the training script to submit the experiment.
```Python
script_dir = "src"
training_script = 'train.py'
with open("{}/{}".format(script_dir, training_script), 'r') as f:
    print(f.read())
```
In your script, configure your compute and training run environment with the [`Environment`](/python/api/azureml-core/azureml.core.environment.environment) class.
```Python
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

env = Environment(name="mlflow-env")

# Specify conda dependencies with scikit-learn and temporary pointers to mlflow extensions
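# NOTE: the lines below are a sketch of attaching the dependencies to the
# environment; the exact package list is an assumption
cd = CondaDependencies.create(
    conda_packages=["scikit-learn"],
    pip_packages=["azureml-mlflow", "mlflow"]
)

env.python.conda_dependencies = cd
```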
Then, construct [`ScriptRunConfig`](/python/api/azureml-core/azureml.core.script_run_config.scriptrunconfig) with your remote compute as the compute target.
```Python
from azureml.core import ScriptRunConfig

src = ScriptRunConfig(source_directory="src",
                      script=training_script,
                      compute_target="<COMPUTE_NAME>",
                      environment=env)
```
With this compute and training run configuration, use the `Experiment.submit()` method to submit a run. This method automatically sets the MLflow tracking URI and directs the logging from MLflow to your Workspace.
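As a minimal sketch, assuming the `ws` workspace object and `experiment_name` from the earlier examples, submitting the run could look like this:

```Python
from azureml.core import Experiment

# Assumes `ws` and `experiment_name` from the tracking setup above,
# and `src` is the ScriptRunConfig defined earlier
exp = Experiment(workspace=ws, name=experiment_name)
run = exp.submit(src)
run.wait_for_completion(show_output=True)
```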
The metrics and artifacts from MLflow logging are tracked in your workspace. To view them anytime, navigate to your workspace and find the experiment by name in [Azure Machine Learning studio](https://ml.azure.com), or run the code below.

Retrieve the run metrics by using MLflow [`get_run()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.get_run).
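A sketch of doing so, assuming the `mlflow_run` handle from the local tracking example above, might look like this (the `client` object created here is reused by the artifact snippets that follow):

```Python
import mlflow
from mlflow.tracking import MlflowClient

# Assumes `mlflow_run` is the run object returned by mlflow.start_run() above
run_id = mlflow_run.info.run_id
finished_run = mlflow.get_run(run_id)

print(finished_run.data.metrics)
print(finished_run.data.params)

# The client object is used by the artifact examples below
client = MlflowClient()
```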
To view the artifacts of a run, you can use [`MlflowClient.list_artifacts()`](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.list_artifacts).
```Python
client.list_artifacts(run_id)
```
To download an artifact to the current directory, you can use [`MlflowClient.download_artifacts()`](https://www.mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient.download_artifacts).
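For example, reusing the `client` and `run_id` objects from the previous snippet, a sketch of downloading the `helloworld.txt` artifact logged earlier could be:

```Python
# Download the artifact logged earlier into the current directory
client.download_artifacts(run_id, "helloworld.txt", ".")
```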
## Automatic logging

With Azure Machine Learning and MLflow, users can log metrics, model parameters, and model artifacts automatically when training a model. A [variety of popular machine learning libraries](https://mlflow.org/docs/latest/tracking.html#automatic-logging) are supported.

To enable [automatic logging](https://mlflow.org/docs/latest/tracking.html#automatic-logging), insert the following code before your training code:
```Python
mlflow.autolog()
```
[Learn more about automatic logging with MLflow](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.autolog).
## Manage models
Register and track your models with the [Azure Machine Learning model registry](concept-model-management-and-deployment.md#register-package-and-deploy-models-from-anywhere), which supports the MLflow model registry. Azure Machine Learning models are aligned with the MLflow model schema, making it easy to export and import these models across different workflows. MLflow-related metadata, such as the run ID, is also tagged with the registered model for traceability. Users can submit training runs, register, and deploy models produced from MLflow runs.

If you want to deploy and register your production-ready model in one step, see the documentation on deploying MLflow models.

To register and view a model from a run, use the following steps:
1. Once a run is complete, call the [`register_model()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.register_model) method.
    ```python
    # the model folder produced from the run is registered. This includes the MLmodel file, model.pkl and the conda.yaml.
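    # NOTE: the call below is a sketch; the model path and registered model
    # name are assumptions
    model_uri = "runs:/{}/model".format(run_id)
    mlflow.register_model(model_uri, "my-registered-model")
    ```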