articles/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md
The Linux VM is already provisioned with X2Go Server and is ready to accept client connections.
1. Download and install the X2Go client for your client platform from [X2Go](https://wiki.x2go.org/doku.php/doc:installation:x2goclient).
1. Note the public IP address of the virtual machine. In the Azure portal, open the virtual machine you created to find this information.
:::image type="content" source="./media/dsvm-ubuntu-intro/ubuntu-ip-address.png" alt-text="Screenshot showing the public IP address of the virtual machine." lightbox= "./media/dsvm-ubuntu-intro/ubuntu-ip-address.png":::
1. Run the X2Go client. If the "New Session" window doesn't automatically pop up, go to Session -> New Session.
* **Media tab**: You can turn off sound support and client printing if you don't need to use them.
* **Shared folders**: Use this tab to add a client machine directory that you would like to mount on the VM.
:::image type="content" source="./media/dsvm-ubuntu-intro/x2go-ubuntu.png" alt-text="Screenshot showing preferences for a new X2Go session." lightbox= "./media/dsvm-ubuntu-intro/x2go-ubuntu.png":::
1. Select **OK**.
1. Select the box in the right pane of the X2Go window to bring up the sign-in screen for your VM.
1. Enter the password for your VM.
The Ubuntu DSVM runs [JupyterHub](https://github.com/jupyterhub/jupyterhub), a multiuser Jupyter server. To connect, take the following steps:
1. Note the public IP address of your VM. To find this value, search for and select your VM in the Azure portal, as shown in this screenshot:
:::image type="content" source="./media/dsvm-ubuntu-intro/ubuntu-ip-address.png" alt-text="Screenshot highlighting the public IP address of your VM." lightbox= "./media/dsvm-ubuntu-intro/ubuntu-ip-address.png":::
1. From your local machine, open a web browser, and navigate to https:\//**your-vm-ip**:8000, replacing "**your-vm-ip**" with the IP address you noted earlier.
1. Your browser will probably prevent you from opening the page directly. It might tell you that there's a certificate error. The DSVM provides security with a self-signed certificate, so most browsers allow you to proceed past this warning. Many browsers continue to display some kind of visual warning about the certificate throughout your web session.
>[!NOTE]
> If you see the `ERR_EMPTY_RESPONSE` error message in your browser, make sure you access the machine by explicit use of the *HTTPS* protocol. *HTTP* or just the web address doesn't work for this step. If you type the web address without `https://` in the address line, most browsers default to `http`, and the error appears.
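As a quick sanity check of the address format, you can build the URL in a short script. The IP address here is a hypothetical placeholder; substitute the one you noted earlier:

```python
vm_ip = "203.0.113.10"  # hypothetical placeholder; use your VM's public IP

# JupyterHub listens on port 8000 and requires an explicit https:// scheme.
# A plain http:// request produces the ERR_EMPTY_RESPONSE error noted above.
jupyterhub_url = f"https://{vm_ip}:8000"
print(jupyterhub_url)
```

Because the certificate is self-signed, command-line clients also need a trust flag (for example, `curl -k`) to reach this URL.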
1. Enter the username and password that you used to create the VM, and sign in, as shown in this screenshot:
:::image type="content" source="./media/dsvm-ubuntu-intro/jupyter-login.png" alt-text="Screenshot showing the JupyterHub sign-in screen." lightbox="./media/dsvm-ubuntu-intro/jupyter-login.png":::
>[!NOTE]
> If you receive a 500 Error at this stage, you probably used capitalized letters in your username. This is a known interaction between JupyterHub and the PAMAuthenticator it uses.
articles/machine-learning/data-science-virtual-machine/how-to-track-experiments.md
ms.custom: sdkv1
author: samkemp
ms.author: samkemp
ms.topic: conceptual
ms.reviewer: franksolomon
ms.date: 04/23/2024
---
# Track experiments and deploy models in Azure Machine Learning
In this article, learn how to add logging code to your training script with the [MLflow](https://mlflow.org/) API and track the experiment in Azure Machine Learning. You can monitor run metrics to enhance the model creation process.
This diagram shows that with MLflow Tracking, you track the run metrics of an experiment and store model artifacts in your Azure Machine Learning workspace:
* [Provision an Azure Machine Learning Workspace](../how-to-manage-workspace.md#create-a-workspace)
## Create a new notebook
The Azure Machine Learning and MLflow SDKs are preinstalled on the Data Science Virtual Machine (DSVM). You can access these resources in the **azureml_py36_\*** conda environment. In JupyterLab, select the launcher and select this kernel:
:::image type="content" source="./media/how-to-track-experiments/experiment-tracking-1.png" alt-text="Screenshot showing selection of the azureml_py36_pytorch kernel." lightbox= "./media/how-to-track-experiments/experiment-tracking-1.png":::
## Set up the workspace
Go to the [Azure portal](https://portal.azure.com) and select the workspace you provisioned as part of the prerequisites. Note the __Download config.json__ configuration file, as shown in the next image. Download this file, and store it in your working directory on the DSVM.
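The downloaded config.json is a small JSON file that identifies your workspace so that the SDK's `Workspace.from_config()` call can connect to it. Its typical shape, with placeholder values in place of your real IDs, looks like this:

```json
{
  "subscription_id": "<your-subscription-id>",
  "resource_group": "<your-resource-group>",
  "workspace_name": "<your-workspace-name>"
}
```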
>[!NOTE]
> The tracking URI is valid for up to one hour. If you restart your script after some idle time, use the get_mlflow_tracking_uri API to get a new URI.
### Load the data
This example uses the diabetes dataset, a well-known small dataset included with scikit-learn. This cell loads the dataset and splits it into random training and testing sets.
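That cell isn't reproduced in this excerpt; a minimal version, assuming scikit-learn's bundled copy of the dataset and an illustrative 80/20 split, might look like:

```python
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

# Load the diabetes dataset bundled with scikit-learn: 442 samples, 10 features.
X, y = load_diabetes(return_X_y=True)

# Split into random training and testing sets (the split ratio is illustrative).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)
```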
Add experiment tracking using the Azure Machine Learning SDK, and upload a persisted model into the experiment run record. This code sample adds logs, and uploads a model file to the experiment run. The model is also registered in the Azure Machine Learning model registry:
```python
import mlflow
from azureml.core import Experiment

# Get an experiment object from Azure Machine Learning
# (ws is the Workspace object loaded from config.json).
experiment = Experiment(workspace=ws, name="experiment_with_mlflow")

with mlflow.start_run():
    ...  # training, metric logging, and model upload (elided in this excerpt)
```
### View runs in Azure Machine Learning
You can view the experiment run in [Azure Machine Learning studio](https://ml.azure.com). Select __Experiments__ in the left-hand menu, and select **experiment_with_mlflow**. If you named your experiment differently in the earlier snippet, select the name that you chose:
:::image type="content" source="./media/how-to-track-experiments/mlflow-experiments-2.png" alt-text="Screenshot showing the logged Mean Square Error of the experiment run." lightbox= "./media/how-to-track-experiments/mlflow-experiments-2.png":::
If you select the run, you can view more details, including the pickled model, under __Outputs+logs__.
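The artifact under __Outputs+logs__ is an ordinary pickled scikit-learn model. This self-contained sketch (the model choice and file name are illustrative) shows the save-and-reload round trip you would perform after downloading it:

```python
import os
import tempfile

import joblib
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)
model = Ridge().fit(X, y)

# Persist the model the way a training script does before uploading it...
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
joblib.dump(model, path)

# ...and reload it, as you would after downloading it from Outputs+logs.
restored = joblib.load(path)
print(restored.predict(X[:1]))
```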
## Deploy model in Azure Machine Learning
This section describes how to deploy models trained on a DSVM to Azure Machine Learning.
### Step 1: Create Inference Compute
On the left-hand menu in [Azure Machine Learning studio](https://ml.azure.com), select __Compute__, as shown in this screenshot:
:::image type="content" source="./media/how-to-track-experiments/mlflow-experiments-7.png" alt-text="Screenshot showing selection of the Inference Clusters pane." lightbox= "./media/how-to-track-experiments/mlflow-experiments-7.png":::
Select __Create__.
### Step 2: Deploy no-code inference service
When we registered the model in our code using `register_model`, we specified the framework as **sklearn**. Azure Machine Learning supports no-code deployments for these frameworks:
* scikit-learn
* TensorFlow SavedModel format
* ONNX model format
No-code deployment means that you can deploy straight from the model artifact. You don't need to provide a scoring script.
To deploy the diabetes model, go to the left-hand menu in the [Azure Machine Learning studio](https://ml.azure.com) and select __Models__. Next, select the registered diabetes_model:
:::image type="content" source="./media/how-to-track-experiments/mlflow-experiments-4.png" alt-text="Screenshot showing selection of the Deploy button." lightbox= "./media/how-to-track-experiments/mlflow-experiments-4.png":::
The model deploys to the Inference Cluster (Azure Kubernetes Service) that you created in step 1. Provide a name for the service and the name of the AKS compute cluster created in step 1. We also recommend that you increase the __CPU reserve capacity__ from 0.1 to 1, and the __Memory reserve capacity__ from 0.5 to 1. Select __Advanced__ and fill in the details to set this increase. Then select __Deploy__, as shown in this screenshot:
:::image type="content" source="./media/how-to-track-experiments/mlflow-experiments-5.png" alt-text="Screenshot showing details of the model deployment." lightbox= "./media/how-to-track-experiments/mlflow-experiments-5.png":::
### Step 3: Consume
When the model successfully deploys, select __Endpoints__ from the left-hand menu, and then select the name of the deployed service. The model details pane appears, as shown in this screenshot:
:::image type="content" source="./media/how-to-track-experiments/mlflow-experiments-8.png" alt-text="Screenshot showing the model details pane." lightbox= "./media/how-to-track-experiments/mlflow-experiments-8.png":::
The deployment state should change from __transitioning__ to __healthy__. Additionally, the details section provides the REST endpoint and Swagger URLs that application developers can use to integrate your ML model into their apps.
You can test the endpoint with [Postman](https://www.postman.com/), or you can use the Azure Machine Learning SDK:
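The consume snippet isn't reproduced in this excerpt. As an illustration, the deployed service accepts a JSON body; the payload shape, feature values, and scoring URL below are hypothetical placeholders:

```python
import json

# One row of the ten diabetes features; values are illustrative placeholders.
payload = {"data": [[0.038, 0.051, 0.062, 0.022, -0.044,
                     -0.036, -0.043, -0.003, 0.020, -0.018]]}
body = json.dumps(payload)

# To score against the deployed service, you would POST this body to the
# REST endpoint shown in the details pane, for example with the requests
# library (not executed here because it needs a live endpoint):
#   requests.post("https://<scoring-uri>", data=body,
#                 headers={"Content-Type": "application/json"})
print(body)
```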
Delete the Inference Compute that you created in Step 1, to avoid ongoing compute charges. On the left-hand menu in the Azure Machine Learning studio, select Compute > Inference Clusters > select the specific compute resource > Delete.