articles/machine-learning/concept-compute-target.md (+1 −1)
@@ -42,7 +42,7 @@ Learn [where and how to deploy your model to a compute target](how-to-deploy-and
<a name="amlcompute"></a>
## Azure Machine Learning compute (managed)
- A managed compute resource is created and managed by Azure Machine Learning. This compute is optimized for machine learning workloads. Azure Machine Learning compute clusters and [compute instances](concept-compute-instance.md) are the only managed computes. Additional managed compute resources may be added in the future.
+ A managed compute resource is created and managed by Azure Machine Learning. This compute is optimized for machine learning workloads. Azure Machine Learning compute clusters and [compute instances](concept-compute-instance.md) are the only managed computes.
You can create Azure Machine Learning compute instances or compute clusters from:
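One of those options is the Python SDK. As a point of reference, here is a minimal sketch of provisioning a managed compute instance with the v1 `azureml-core` SDK, assuming a workspace `config.json` is present; the instance name and VM size are placeholder values, not taken from the article:

```python
# Minimal sketch (v1 azureml-core SDK): create or reuse a managed compute instance.
from azureml.core import Workspace
from azureml.core.compute import ComputeInstance, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

ws = Workspace.from_config()   # loads config.json downloaded from the studio
instance_name = "my-instance"  # placeholder; must be unique within the region

try:
    # Reuse the compute instance if it already exists in the workspace.
    instance = ComputeInstance(workspace=ws, name=instance_name)
    print("Found existing compute instance.")
except ComputeTargetException:
    # Otherwise provision a new managed compute instance.
    config = ComputeInstance.provisioning_configuration(
        vm_size="STANDARD_DS3_V2",  # placeholder VM size
        ssh_public_access=False,
    )
    instance = ComputeTarget.create(ws, instance_name, config)
    instance.wait_for_completion(show_output=True)
```

Compute clusters follow the same create-or-reuse pattern with `AmlCompute` in place of `ComputeInstance`.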
articles/machine-learning/how-to-configure-environment.md (+3 −3)
@@ -151,7 +151,7 @@ When you're using a local computer (which might also be a remote virtual machine
This example creates an environment using Python 3.7.7, but you can choose any specific subversion. The SDK isn't guaranteed to be compatible with every major version (Python 3.5 or later is recommended), so if you run into errors, try a different version or subversion in your Anaconda environment. Creating the environment takes several minutes while components and packages are downloaded.
- 1. Run the following commands in your new environment to enable environment-specific IPython kernels. This will ensure expected kernel and package import behavior when working with Jupyter Notebooks within Anaconda environments:
+ 1. Run the following commands in your new environment to enable environment-specific IPython kernels. This will ensure expected kernel and package import behavior when working with Jupyter Notebooks within Anaconda environments:
```bash
conda install notebook ipykernel
```
@@ -301,10 +301,10 @@ Once the cluster is running, [create a library](https://docs.databricks.com/user
@@ -69,7 +69,7 @@ Follow the previous steps to view the list of compute targets. Then use these st
1. View the status of the create operation by selecting the compute target from the list:
- :::image type="content" source="media/how-to-create-attach-studio/view-list.png" alt-text="View compute status from a list":::
+ :::image type="content" source="media/how-to-create-attach-studio/view-list.png" alt-text="View compute status from a list":::
### Compute instance
@@ -96,7 +96,7 @@ Create a single or multi node compute cluster for your training, batch inferenci
|---------|---------|
|Compute name | <li>Name is required and must be between 3 and 24 characters long.</li><li>Valid characters are uppercase and lowercase letters, digits, and the **-** character.</li><li>Name must start with a letter.</li><li>Name needs to be unique across all existing computes within an Azure region. You will see an alert if the name you choose is not unique.</li><li>If the **-** character is used, it needs to be followed by at least one letter later in the name.</li> |
|Virtual machine type | Choose CPU or GPU. This type cannot be changed after creation |
- |Virtual machine priority | Choose **Dedicated** or **Low priority**. Low priority virtual machines are cheaper but don't guarantee the compute nodes. Your job may be pre-empted.
+ |Virtual machine priority | Choose **Dedicated** or **Low priority**. Low priority virtual machines are cheaper but don't guarantee the compute nodes. Your job may be preempted.
|Virtual machine size | Supported virtual machine sizes might be restricted in your region. Check the [availability list](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines)|
|Minimum number of nodes | Minimum number of nodes that you want to provision. If you want a dedicated number of nodes, set that count here. Save money by setting the minimum to 0, so you won't pay for any nodes when the cluster is idle. |
|Maximum number of nodes | Maximum number of nodes that you want to provision. The compute will autoscale to a maximum of this node count when a job is submitted. |
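The fields in this table map onto the v1 `azureml-core` Python SDK as well; the following is a rough sketch for a low-priority, autoscaling cluster, assuming an existing workspace and using placeholder values for the cluster name, VM size, and node counts:

```python
# Sketch (v1 azureml-core SDK): the studio fields above expressed in code.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",  # "Virtual machine size"; choose a GPU SKU for GPU work
    vm_priority="lowpriority",  # "Virtual machine priority": "dedicated" or "lowpriority"
    min_nodes=0,                # "Minimum number of nodes": 0 avoids paying for an idle cluster
    max_nodes=4,                # "Maximum number of nodes": the autoscale ceiling
)

# "Compute name" rules from the table apply here too (3-24 characters, starts with a letter).
cluster = ComputeTarget.create(ws, "cpu-cluster", config)
cluster.wait_for_completion(show_output=True)
```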
articles/machine-learning/how-to-create-your-first-pipeline.md (+1 −1)
@@ -108,7 +108,7 @@ output_data1 = PipelineData(
## Set up a compute target
- In Azure Machine Learning, the term __compute__ (or __compute target__) refers to the machines or clusters that perform the computational steps in your machine learning pipeline. See [compute targets for model training](concept-compute-target.md#train)for a full list of compute targets and [Create compute targets](how-to-create-attach-compute-sdk.md) for how to create and attach them to your workspace. The process for creating and or attaching a compute target is the same whether you are training a model or running a pipeline step. After you create and attach your compute target, use the `ComputeTarget` object in your [pipeline step](#steps).
+ In Azure Machine Learning, the term __compute__ (or __compute target__) refers to the machines or clusters that perform the computational steps in your machine learning pipeline. See [compute targets for model training](concept-compute-target.md#train) for a full list of compute targets and [Create compute targets](how-to-create-attach-compute-sdk.md) for how to create and attach them to your workspace. The process for creating or attaching a compute target is the same whether you are training a model or running a pipeline step. After you create and attach your compute target, use the `ComputeTarget` object in your [pipeline step](#steps).
> [!IMPORTANT]
> Performing management operations on compute targets is not supported from inside remote jobs. Since machine learning pipelines are submitted as a remote job, do not use management operations on compute targets from inside the pipeline.
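As a point of reference for the paragraph above, here is a minimal sketch of retrieving an existing compute target and passing the `ComputeTarget` object to a pipeline step with the v1 SDK; the cluster name, script, directory, and experiment name are placeholders, not taken from the article:

```python
# Sketch (v1 azureml SDK): hand an existing compute target to a pipeline step.
from azureml.core import Experiment, Workspace
from azureml.core.compute import ComputeTarget
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()
compute_target = ComputeTarget(workspace=ws, name="cpu-cluster")  # placeholder cluster name

train_step = PythonScriptStep(
    name="train",
    script_name="train.py",         # placeholder script
    source_directory="./scripts",   # placeholder directory
    compute_target=compute_target,  # the ComputeTarget object referenced above
    allow_reuse=True,
)

# Per the note above, the pipeline itself only uses the compute target; it doesn't manage it.
pipeline = Pipeline(workspace=ws, steps=[train_step])
run = Experiment(ws, "pipeline-demo").submit(pipeline)
run.wait_for_completion()
```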