
Commit 564bf94

Update how-to-train-mlflow-projects.md

1 parent dea1996 commit 564bf94

1 file changed: +80 −56 lines changed

articles/machine-learning/how-to-train-mlflow-projects.md

Lines changed: 80 additions & 56 deletions
@@ -13,7 +13,7 @@ ms.topic: conceptual
 ms.custom: how-to, devx-track-python, sdkv2, event-tier1-build-2022
 ---

-# Train ML models with MLflow Projects and Azure Machine Learning (Preview)
+# Train with MLflow Projects in Azure Machine Learning (Preview)

 In this article, learn how to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) that use Azure Machine Learning workspaces for tracking. You can submit jobs and only track them with Azure Machine Learning, or migrate your runs to the cloud to run completely on [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
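Both submit commands in this diff pass `-P alpha=0.3`, which presumes the project declares an `alpha` parameter. The commit doesn't include the project file itself, but a minimal `MLproject` along these lines would match (the entry point, script name, and default value here are illustrative assumptions, not from this commit):

```yaml
name: mlflow-example
conda_env: conda.yaml
entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.5}
    command: "python train.py {alpha}"
```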

@@ -59,86 +59,110 @@ This example shows how to submit MLflow projects and track them Azure Machine Le

 1. Add the `azureml-mlflow` package as a pip dependency to your environment configuration file in order to track metrics and key artifacts in your workspace.

-    __conda.yaml__
-
-    :::code language="yaml" source="~/MachineLearningNotebooks-master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-local/conda.yaml" highlight="13-13":::
+    __conda.yaml__
+
+    ```yaml
+    name: mlflow-example
+    channels:
+      - defaults
+    dependencies:
+      - numpy>=1.14.3
+      - pandas>=1.0.0
+      - scikit-learn
+      - pip:
+        - mlflow
+        - azureml-mlflow
+    ```

-1. Submit the local run and ensure you set the parameter `backend = "azureml"`. With this setting, you can submit runs locally and get the added support of automatic output tracking, log files, snapshots, and printed errors in your workspace.
+1. Submit the local run and ensure you set the parameter `backend = "azureml"`, which adds support for automatic tracking, model capture, log files, snapshots, and printed errors in your workspace. This example assumes the MLflow project you want to run is in the current working directory, `uri="."`.

    # [MLflow CLI](#tab/cli)

    ```bash
    mlflow run . --experiment-name <experiment-name> --backend azureml --env-manager=local -P alpha=0.3
    ```

    # [Python](#tab/sdk)

-    ```python
-    local_env_run = mlflow.projects.run(uri=".",
-                                        parameters={"alpha": 0.3},
-                                        backend="azureml",
-                                        env_manager="local",
-                                        backend_config=backend_config)
-    ```
+    ```python
+    local_env_run = mlflow.projects.run(
+        uri=".",
+        parameters={"alpha": 0.3},
+        backend="azureml",
+        env_manager="local",
+        backend_config=backend_config,
+    )
+    ```

    ---

 View your runs and metrics in the [Azure Machine Learning studio](https://ml.azure.com).


 ## Train MLflow projects in Azure Machine Learning workspaces

 This example shows how to submit MLflow projects on a remote compute with Azure Machine Learning tracking.

 1. Create the backend configuration object. In this case, `COMPUTE` references the name of the remote compute cluster you want to use for running your project. If `COMPUTE` is present, the project is automatically submitted as an Azure Machine Learning job to the indicated compute.

    # [MLflow CLI](#tab/cli)

    __backend_config.json__

    ```json
    {
        "COMPUTE": "cpu-cluster"
    }
    ```

    # [Python](#tab/sdk)

    ```python
    backend_config = {"COMPUTE": "cpu-cluster"}
    ```
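The CLI reads the compute target from `backend_config.json` while the Python API takes a dictionary, so one way to keep the two interfaces in sync is to load the same file from Python. A small stdlib-only sketch (it writes the file shown above first, purely to stay self-contained):

```python
import json

# backend_config.json, as shown in the CLI tab (written here only so this
# snippet runs on its own).
with open("backend_config.json", "w") as f:
    f.write('{"COMPUTE": "cpu-cluster"}\n')

# Load the same file the CLI consumes via --backend-config.
with open("backend_config.json") as f:
    backend_config = json.load(f)

print(backend_config["COMPUTE"])  # cpu-cluster
```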

 1. Add the `azureml-mlflow` package as a pip dependency to your environment configuration file in order to track metrics and key artifacts in your workspace.

    __conda.yaml__

-    :::code language="yaml" source="~/MachineLearningNotebooks-master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-local/conda.yaml" highlight="13-13":::
+    ```yaml
+    name: mlflow-example
+    channels:
+      - defaults
+    dependencies:
+      - numpy>=1.14.3
+      - pandas>=1.0.0
+      - scikit-learn
+      - pip:
+        - mlflow
+        - azureml-mlflow
+    ```

-1. Submit the MLflow project and ensure you set the parameter `backend = "azureml"`. With this setting, you can submit your run to your remote compute and get the added support of automatic output tracking, log files, snapshots, and printed errors in your workspace.
+1. Submit the MLflow project and ensure you set the parameter `backend = "azureml"`, which adds support for automatic tracking, model capture, log files, snapshots, and printed errors in your workspace. This example assumes the MLflow project you want to run is in the current working directory, `uri="."`.

    # [MLflow CLI](#tab/cli)

    ```bash
    mlflow run . --backend azureml --backend-config backend_config.json -P alpha=0.3
    ```

    # [Python](#tab/sdk)

-    ```python
-    local_env_run = mlflow.projects.run(uri=".",
-                                        parameters={"alpha": 0.3},
-                                        backend="azureml",
-                                        backend_config=backend_config)
-    ```
+    ```python
+    local_env_run = mlflow.projects.run(
+        uri=".",
+        parameters={"alpha": 0.3},
+        backend="azureml",
+        backend_config=backend_config,
+    )
+    ```

    ---

 > [!NOTE]
 > Since Azure Machine Learning jobs always run in the context of environments, the parameter `env_manager` is ignored.

 View your runs and metrics in the [Azure Machine Learning studio](https://ml.azure.com).


 ## Clean up resources
