# Train with MLflow Projects in Azure Machine Learning (Preview)

In this article, learn how to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) that use Azure Machine Learning workspaces for tracking. You can submit jobs and only track them with Azure Machine Learning, or you can migrate your runs to the cloud to run completely on [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
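
MLflow needs to be pointed at your workspace's tracking URI before any of the submissions below show up there. The following is a minimal sketch of that setup, assuming the Azure Machine Learning SDK v2 (`azure-ai-ml`) is installed; the subscription, resource group, and workspace names are placeholders you replace with your own.

```python
# Minimal sketch: configure MLflow to track against an Azure Machine Learning workspace.
# The subscription, resource group, and workspace names are placeholders.
import mlflow
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Look up the workspace's MLflow tracking URI and hand it to MLflow.
workspace = ml_client.workspaces.get(ml_client.workspace_name)
mlflow.set_tracking_uri(workspace.mlflow_tracking_uri)
```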

This example shows how to submit MLflow projects and track them with Azure Machine Learning.
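
For reference, an MLflow project is a folder that contains an `MLproject` file, an environment definition (the `conda.yaml` shown in the next step), and your training code. The `MLproject` below is only an illustrative sketch: the `train.py` entry point and the default value for `alpha` are assumptions, not part of this article's sample.

```yaml
# Illustrative MLproject file; train.py and the default alpha value are assumptions.
name: mlflow-example

conda_env: conda.yaml

entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.3}
    command: "python train.py {alpha}"
```
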
1. Add the `azureml-mlflow` package as a pip dependency to your environment configuration file in order to track metrics and key artifacts in your workspace.

__conda.yaml__

```yaml
name: mlflow-example
channels:
  - defaults
dependencies:
  - numpy>=1.14.3
  - pandas>=1.0.0
  - scikit-learn
  - pip:
    - mlflow
    - azureml-mlflow
```

1. Submit the local run and ensure you set the parameter `backend = "azureml"`, which adds support for automatic tracking, model capture, log files, snapshots, and printed errors in your workspace. This example assumes the MLflow project you want to run is in your current folder (`uri="."`).

# [MLflow CLI](#tab/cli)
```bash
mlflow run . --experiment-name <experiment-name> --backend azureml --env-manager=local -P alpha=0.3
```
# [Python](#tab/sdk)

```python
local_env_run = mlflow.projects.run(
    uri=".",
    parameters={"alpha": 0.3},
    backend="azureml",
    env_manager="local",
    backend_config=backend_config,
)
```

---
View your runs and metrics in the [Azure Machine Learning studio](https://ml.azure.com).
## Train MLflow projects in Azure Machine Learning workspaces
This example shows how to submit MLflow projects on a remote compute with Azure Machine Learning tracking.

1. Create the backend configuration object. In this case, indicate `COMPUTE`, which references the name of the remote compute cluster you want to use to run your project. If `COMPUTE` is present, the project is automatically submitted as an Azure Machine Learning job to the indicated compute.

# [MLflow CLI](#tab/cli)
__backend_config.json__

```json
{
    "COMPUTE": "cpu-cluster"
}
```

# [Python](#tab/sdk)
```python
backend_config = {"COMPUTE": "cpu-cluster"}
```
1. Add the `azureml-mlflow` package as a pip dependency to your environment configuration file in order to track metrics and key artifacts in your workspace.
1. Submit the MLflow project and ensure you set the parameter `backend = "azureml"`. With this setting, you can submit your run to your remote compute and get added support for automatic output tracking, log files, snapshots, and printed errors in your workspace.

# [MLflow CLI](#tab/cli)
```bash
mlflow run . --backend azureml --backend-config backend_config.json -P alpha=0.3
```
# [Python](#tab/sdk)

```python
local_env_run = mlflow.projects.run(
    uri=".",
    parameters={"alpha": 0.3},
    backend="azureml",
    backend_config=backend_config,
)
```

---
> [!NOTE]
> Since Azure Machine Learning jobs always run in the context of environments, the parameter `env_manager` is ignored.

View your runs and metrics in the [Azure Machine Learning studio](https://ml.azure.com).
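
You can also query tracked runs programmatically with the standard MLflow client APIs, since they work against the workspace tracking URI as well. A small sketch follows; the experiment name is a placeholder for the one you used when submitting.

```python
# Sketch: list runs tracked in the workspace for a given experiment.
# "<experiment-name>" is a placeholder for your experiment's name.
import mlflow

runs = mlflow.search_runs(experiment_names=["<experiment-name>"])
print(runs[["run_id", "status", "start_time"]].head())
```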