`articles/machine-learning/data-science-virtual-machine/dsvm-pools.md` (3 additions, 17 deletions)
@@ -11,7 +11,6 @@ author: vijetajo
ms.author: vijetaj
ms.topic: conceptual
ms.date: 12/10/2018
---

# Create a shared pool of Data Science Virtual Machines
@@ -32,11 +31,13 @@ You can find a sample Azure Resource Manager template that creates a scale set w

You can create the scale set from the Azure Resource Manager template by specifying values for the parameter file in the Azure CLI:

```azurecli-interactive
az group create --name [[NAME OF RESOURCE GROUP]] --location [[DATA CENTER, for example "West US 2"]]
az group deployment create --resource-group [[NAME OF RESOURCE GROUP ABOVE]] --template-uri https://raw.githubusercontent.com/Azure/DataScienceVM/master/Scripts/CreateDSVM/Ubuntu/dsvm-vmss-cluster.json --parameters @[[PARAMETER JSON FILE]]
```

The preceding commands assume you have:

* A copy of the parameter file with the values specified for your instance of the scale set.
* The number of VM instances.
* Pointers to the Azure Files share.
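For reference, an Azure Resource Manager parameter file uses a standard envelope. Below is a minimal sketch; the entries inside `parameters` are hypothetical placeholders — use the parameter names actually declared in the `dsvm-vmss-cluster.json` template and substitute your own values:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "instanceCount": { "value": 2 },
    "adminUsername": { "value": "dsvmadmin" },
    "fileShareName": { "value": "dsvmshare" }
  }
}
```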
@@ -54,18 +55,3 @@ Virtual machine scale sets support autoscaling. You can set rules about when to

* [Set up a common Identity](dsvm-common-identity.md)
* [Securely store credentials to access cloud resources](dsvm-secure-access-keys.md)
`articles/machine-learning/data-science-virtual-machine/dsvm-secure-access-keys.md` (8 additions, 8 deletions)
@@ -23,10 +23,9 @@ One way to secure credentials is to use a managed identity (formerly Managed Service Identity, MSI) in combination with

The documentation about managed identities for Azure resources and Key Vault is a comprehensive resource for in-depth information on these services. The rest of this article walks through the basic use of MSI and Key Vault on the Data Science Virtual Machine (DSVM) to access Azure resources.

## Create a managed identity on the DSVM

```azurecli-interactive
# Prerequisite: You have already created a Data Science VM in the usual way.

# Create an identity principal for the VM.
az vm assign-identity -g <Resource Group Name> -n <Name of the VM>
az resource list -n <Name of the VM> --query [*].identity.principalId --out tsv
```

## Assign Key Vault access permissions to a VM principal

```azurecli-interactive
# Prerequisite: You have already created an empty Key Vault resource on Azure by using the Azure portal or Azure CLI.

# Assign only get and set permissions but not the capability to list the keys.
az keyvault set-policy --object-id <Principal ID of the DSVM from previous step>

# Prerequisite: You have granted your VM's MSI access to use storage account access keys based on instructions at https://docs.microsoft.com/azure/active-directory/managed-service-identity/tutorial-linux-vm-access-storage. This article describes the process in more detail.
# Now you can access the data in the storage account from the retrieved storage account keys.
```

## Access the key vault from Python

@@ -97,7 +97,7 @@ print("My secret value is {}".format(secret.value))

## Access the key vault from Azure CLI

```azurecli-interactive
# With managed identities for Azure resources set up on the DSVM, users on the DSVM can use the Azure CLI to perform the authorized functions. The following commands enable access to the key vault from the Azure CLI without requiring a login to an Azure account.
# Prerequisites: MSI is already set up on the DSVM as indicated earlier. Specific permissions, like accessing storage account keys, reading specific secrets, and writing new secrets, are provided to the MSI.
```
`articles/machine-learning/how-to-configure-environment.md` (15 additions, 20 deletions)
@@ -27,7 +27,6 @@ The following table shows each development environment covered in this article,
|[Azure Databricks](#aml-databricks)| Ideal for running large-scale intensive machine learning workflows on the scalable Apache Spark platform. | Overkill for experimental machine learning, or smaller-scale experiments and workflows. Additional cost incurred for Azure Databricks. See [pricing details](https://azure.microsoft.com/pricing/details/databricks/). |
|[The Data Science Virtual Machine (DSVM)](#dsvm)| Similar to the cloud-based compute instance (Python and the SDK are pre-installed), but with additional popular data science and machine learning tools pre-installed. Easy to scale and combine with other custom tools and workflows. | A slower getting-started experience compared to the cloud-based compute instance. |

This article also provides additional usage tips for the following tools:

* [Jupyter Notebooks](#jupyter): If you're already using the Jupyter Notebook, the SDK has some extras that you should install.

There is nothing to install or configure for a compute instance. Create one anytime from within your Azure Machine Learning workspace. Provide just a name and specify an Azure VM type. Try it now with this [Tutorial: Setup environment and workspace](tutorial-1st-experiment-sdk-setup.md).

Learn more about [compute instances](concept-compute-instance.md).

To stop incurring compute charges, [stop the compute instance](tutorial-1st-experiment-sdk-train.md#clean-up-resources).
@@ -91,7 +89,7 @@ To use the DSVM as a development environment:

* To create an Ubuntu Data Science Virtual Machine, use the following command:

```azurecli-interactive
# create an Ubuntu DSVM in your resource group
# note you need to be at least a contributor to the resource group in order to execute this command successfully
# If you need to create a new resource group use: "az group create --name YOUR-RESOURCE-GROUP-NAME --location YOUR-REGION (For example: westus2)"
```

@@ -100,7 +98,7 @@ To use the DSVM as a development environment:

* To create a Windows Data Science Virtual Machine, use the following command:

```azurecli-interactive
# create a Windows Server 2016 DSVM in your resource group
# note you need to be at least a contributor to the resource group in order to execute this command successfully
az vm create --resource-group YOUR-RESOURCE-GROUP-NAME --name YOUR-VM-NAME --image microsoft-dsvm:dsvm-windows:server-2016:latest --admin-username YOUR-USERNAME --admin-password YOUR-PASSWORD --authentication-type password
```
@@ -110,13 +108,13 @@ To use the DSVM as a development environment:

* For Ubuntu DSVM:

```bash
conda activate py36
```

* For Windows DSVM:

```bash
conda activate AzureML
```
@@ -141,35 +139,35 @@ When you're using a local computer (which might also be a remote virtual machine

Run the following command to create the environment.

```bash
conda create -n myenv python=3.6.5
```

Then activate the environment.

```bash
conda activate myenv
```

This example creates an environment that uses Python 3.6.5, but any specific subversion can be chosen. SDK compatibility is not guaranteed with every version (3.5+ is recommended); if you run into errors, try a different version or subversion in your Anaconda environment. Creating the environment takes several minutes while components and packages are downloaded.

1. Run the following commands in your new environment to enable environment-specific IPython kernels. This ensures expected kernel and package-import behavior when working with Jupyter Notebooks inside Anaconda environments:

```bash
conda install notebook ipykernel
```

Then run the following command to create the kernel:

1. Use the following commands to install packages:

This command installs the base Azure Machine Learning SDK with notebook and `automl` extras. The `automl` extra is a large install, and can be removed from the brackets if you don't intend to run automated machine learning experiments. The `automl` extra also includes the Azure Machine Learning Data Prep SDK by default as a dependency.

```bash
pip install azureml-sdk[notebooks,automl]
```
@@ -182,20 +180,19 @@ When you're using a local computer (which might also be a remote virtual machine

It will take several minutes to install the SDK. For more information on installation options, see the [install guide](https://docs.microsoft.com/python/api/overview/azure/ml/install?view=azure-ml-py).

1. Install other packages for your machine learning experimentation.

Use either of the following commands and replace *\<new package>* with the package you want to install. Installing packages via `conda install` requires that the package is part of the current channels (new channels can be added in Anaconda Cloud).

```bash
conda install <new package>
```

Alternatively, you can install packages via `pip`.

```bash
pip install <new package>
```
@@ -209,19 +206,19 @@ To enable these components in your Jupyter Notebook environment:

1. Open an Anaconda prompt and activate your environment.

```bash
conda activate myenv
```

1. Clone [the GitHub repository](https://aka.ms/aml-notebooks) for a set of sample notebooks.
1. Launch the Jupyter Notebook server with the following command:

```bash
jupyter notebook
```

@@ -241,7 +238,6 @@ To enable these components in your Jupyter Notebook environment:

1. To configure the Jupyter Notebook to use your Azure Machine Learning workspace, go to the [Create a workspace configuration file](#workspace) section.

### <a id="vscode"></a>Visual Studio Code

Visual Studio Code is a very popular cross-platform code editor that supports an extensive set of programming languages and tools through extensions available in the [Visual Studio marketplace](https://marketplace.visualstudio.com/vscode). The [Azure Machine Learning extension](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.vscode-ai) installs the [Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) for coding in all types of Python environments (virtual, Anaconda, and so on). In addition, it provides convenience features for working with Azure Machine Learning resources and running Azure Machine Learning experiments, all without leaving Visual Studio Code.
@@ -324,7 +320,7 @@ Once the cluster is running, [create a library](https://docs.databricks.com/user

+ In AutoML config, when using Azure Databricks, add the following parameters:

  1. ```max_concurrent_iterations``` is based on the number of worker nodes in your cluster.
  2. ```spark_context=sc``` is based on the default Spark context.

+ Or, if you have an old SDK version, deselect it from the cluster's installed libraries and move it to trash. Install the new SDK version and restart the cluster. If there is an issue after the restart, detach and reattach your cluster.

If the install was successful, the imported library should look like one of these:
@@ -388,7 +384,6 @@ You can create the configuration file in three ways:

This code writes the configuration file to the *.azureml/config.json* file.
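The configuration file itself is plain JSON. As an illustrative sketch (the three field names follow the workspace config format; the values and the temporary directory are placeholders), you can also write and read one by hand:

```python
import json
import os
import tempfile

# Illustrative values only; use your own subscription, group, and workspace names.
config = {
    "subscription_id": "<your-subscription-id>",
    "resource_group": "<your-resource-group>",
    "workspace_name": "<your-workspace-name>",
}

root = tempfile.mkdtemp()  # stand-in for your project directory
config_dir = os.path.join(root, ".azureml")
os.makedirs(config_dir, exist_ok=True)
config_path = os.path.join(config_dir, "config.json")

# Write the file in the same place the SDK expects it...
with open(config_path, "w") as f:
    json.dump(config, f, indent=4)

# ...and read it back to confirm the round trip.
with open(config_path) as f:
    loaded = json.load(f)
```

`Workspace.from_config()` searches the current directory and its parent directories for such a file.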
## Next steps

- [Train a model](tutorial-train-models-with-aml.md) on Azure Machine Learning with the MNIST dataset
This article provides an introduction to field-programmable gate arrays (FPGA), and shows you how to deploy your models to an Azure FPGA by using Azure Machine Learning.

FPGAs contain an array of programmable logic blocks and a hierarchy of reconfigurable interconnects. The interconnects allow the blocks to be configured in various ways after manufacturing. Compared to other chips, FPGAs provide a combination of programmability and performance.
@@ -48,7 +48,7 @@ FPGAs on Azure support:

+ Image classification and recognition scenarios
+ TensorFlow deployment
+ Intel FPGA hardware

These DNN models are currently available:
- ResNet 50
@@ -77,20 +77,17 @@ The following scenarios use FPGAs:

You can deploy a model as a web service on FPGAs with Azure Machine Learning Hardware Accelerated Models. Using FPGAs provides ultra-low-latency inference, even with a batch size of one. Inference, or model scoring, is the phase where the deployed model is used for prediction, most commonly on production data.

### Prerequisites

- An Azure subscription. If you do not have one, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://aka.ms/AMLFree) today.

- FPGA quota. Use the Azure CLI to check whether you have quota:

```azurecli-interactive
az vm list-usage --location "eastus" -o table --query "[?localName=='Standard PBS Family vCPUs']"
```
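To see what the `--query` filter above is doing, here is a small Python sketch of the same JMESPath-style selection applied to records shaped like the JSON that `az vm list-usage` returns (the sample numbers are made up):

```python
# Sample records shaped like `az vm list-usage -o json` output (values are made up).
usage = [
    {"localName": "Standard DSv2 Family vCPUs", "currentValue": 4, "limit": 100},
    {"localName": "Standard PBS Family vCPUs", "currentValue": 0, "limit": 6},
]

# Equivalent of --query "[?localName=='Standard PBS Family vCPUs']":
# keep only the PBS (FPGA) family row.
pbs = [u for u in usage if u["localName"] == "Standard PBS Family vCPUs"]

# You have FPGA quota available if the limit exceeds the current usage.
has_quota = bool(pbs) and pbs[0]["limit"] - pbs[0]["currentValue"] > 0
```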
`articles/machine-learning/how-to-manage-workspace-cli.md` (3 additions, 3 deletions)
@@ -32,7 +32,7 @@ In this article, you learn how to create an Azure Machine Learning workspace usi

There are several ways that you can authenticate to your Azure subscription from the CLI. The most basic is to authenticate interactively using a browser. To do so, open a command line or terminal and use the following command:

```azurecli-interactive
az login
```

@@ -146,13 +146,13 @@ To create a workspace that uses existing resources, you must provide the ID for

1. Install the Application Insights extension:

```azurecli-interactive
az extension add -n application-insights
```

2. Get the ID of your Application Insights service:

```azurecli-interactive
az monitor app-insights component show --app <application-insight-name> -g <resource-group-name> --query "id"
```
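The value returned by `--query "id"` is a full Azure resource ID. As an illustrative sketch (the GUID and names below are placeholders), its segments alternate key/value pairs, which makes it easy to pick apart:

```python
# Placeholder resource ID in the shape Azure returns for an Application
# Insights component; substitute your own subscription, group, and name.
resource_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/my-resource-group"
    "/providers/microsoft.insights/components/my-app-insights"
)

def resource_id_parts(rid: str) -> dict:
    """Split an Azure resource ID into key/value segments."""
    segments = rid.strip("/").split("/")
    # Segments alternate key/value: subscriptions/<id>/resourceGroups/<name>/...
    return dict(zip(segments[::2], segments[1::2]))

parts = resource_id_parts(resource_id)
print(parts["resourceGroups"])  # my-resource-group
```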
`articles/machine-learning/resource-known-issues.md` (8 additions, 8 deletions)
@@ -123,7 +123,7 @@ You will not be able to deploy models on FPGAs until you have requested and been

## Automated machine learning

TensorFlow

Automated machine learning does not currently support TensorFlow version 1.13. Installing this version will cause package dependencies to stop working. We are working to fix this issue in a future release.
### Experiment Charts

@@ -145,7 +145,7 @@ script_params = {

If you don't include the leading forward slash, '/', you'll need to prefix the path with the working directory on the compute target, for example `/mnt/batch/.../tmp/dataset`, to indicate where you want the dataset to be mounted.
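The leading-slash distinction follows ordinary POSIX path-joining rules. A small illustrative sketch (the working directory here is a made-up stand-in for the real `/mnt/batch/...` path assigned at run time):

```python
import posixpath

# Hypothetical working directory on the compute target.
working_dir = "/mnt/batch/tasks/workdir"

# A relative mount path resolves under the working directory...
relative = posixpath.join(working_dir, "tmp/dataset")

# ...while a path with a leading '/' is treated as absolute and replaces it.
absolute = posixpath.join(working_dir, "/tmp/dataset")

print(relative)  # /mnt/batch/tasks/workdir/tmp/dataset
print(absolute)  # /tmp/dataset
```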
### Fail to read Parquet file from HTTP or ADLS Gen 2

@@ -205,14 +205,14 @@ If you see this error when you use automated machine learning, run the two follo

If you see this error when you use automated machine learning:

1. Run this command to install two packages in your Azure Databricks cluster:

```bash
scikit-learn==0.19.1
pandas==0.22.0
```

1. Detach and then reattach the cluster to your notebook.

If these steps don't solve the issue, try restarting the cluster.
@@ -265,11 +265,11 @@ If you receive an error `Unable to upload project files to working directory in

If you are using the file share for other workloads, such as data transfer, the recommendation is to use blobs so that the file share is free to be used for submitting runs. You may also split the workload between two different workspaces.

## Webservices in Azure Kubernetes Service failures

Many webservice failures in Azure Kubernetes Service can be debugged by connecting to the cluster using `kubectl`. You can get the `kubeconfig.json` for an Azure Kubernetes Service cluster by running:

```azurecli-interactive
az aks get-credentials -g <rg> -n <aks cluster name>
```
@@ -313,7 +313,7 @@ Known issues with labeling projects.

### Only datasets created on blob datastores can be used

This is a known limitation of the current release.

### After creation, the project shows "Initializing" for a long time