Commit 05e8fef

Merge pull request #203275 from Blackmist/remove-aml

bulk update to change terms where possible

2 parents: ee86571 + 9106003

21 files changed: +58 −58 lines changed

articles/machine-learning/concept-workspace.md

Lines changed: 2 additions & 2 deletions

@@ -96,9 +96,9 @@ There are multiple ways to create a workspace:

 ## <a name="sub-resources"></a> Sub resources

-These sub resources are the main resources that are made in the AML workspace.
+These sub resources are the main resources that are made in the AzureML workspace.

-* VMs: provide computing power for your AML workspace and are an integral part in deploying and training models.
+* VMs: provide computing power for your AzureML workspace and are an integral part in deploying and training models.
 * Load Balancer: a network load balancer is created for each compute instance and compute cluster to manage traffic even while the compute instance/cluster is stopped.
 * Virtual Network: these help Azure resources communicate with one another, the internet, and other on-premises networks.
 * Bandwidth: encapsulates all outbound data transfers across regions.

articles/machine-learning/how-to-configure-auto-train.md

Lines changed: 2 additions & 2 deletions

@@ -65,10 +65,10 @@ try:
     ml_client = MLClient.from_config(credential)
 except Exception as ex:
     print(ex)
-    # Enter details of your AML workspace
+    # Enter details of your AzureML workspace
     subscription_id = "<SUBSCRIPTION_ID>"
     resource_group = "<RESOURCE_GROUP>"
-    workspace = "<AML_WORKSPACE_NAME>"
+    workspace = "<AZUREML_WORKSPACE_NAME>"
     ml_client = MLClient(credential, subscription_id, resource_group, workspace)

 ```
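The hunk above wires `MLClient.from_config` into a try/except that falls back to explicit workspace identifiers. As a runnable illustration of that pattern without the Azure SDK installed, here is a minimal stand-in (the `connect` helper and the `config.json` shape are hypothetical, not part of `azure.ai.ml`):

```python
import json


def connect(config_path="config.json",
            subscription_id=None, resource_group=None, workspace=None):
    """Illustrative stand-in for the fallback pattern in the diff:
    prefer a config file, fall back to explicitly supplied identifiers."""
    try:
        with open(config_path) as f:
            cfg = json.load(f)
    except OSError as ex:
        print(ex)  # mirror the diff: log the failure, then fall back
        cfg = {"subscription_id": subscription_id,
               "resource_group": resource_group,
               "workspace_name": workspace}
    missing = [k for k, v in cfg.items() if not v]
    if missing:
        raise ValueError(f"missing workspace identifiers: {missing}")
    return cfg


print(connect(config_path="does-not-exist.json",
              subscription_id="<SUBSCRIPTION_ID>",
              resource_group="<RESOURCE_GROUP>",
              workspace="<AZUREML_WORKSPACE_NAME>"))
```

In the real SDK the fallback constructs `MLClient(credential, subscription_id, resource_group, workspace)` instead of returning a dict.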

articles/machine-learning/how-to-configure-databricks-automl-environment.md

Lines changed: 1 addition & 1 deletion

@@ -92,7 +92,7 @@ To use automated ML, skip to [Add the Azure ML SDK with AutoML](#add-the-azure-m

 ![Azure Machine Learning SDK for Databricks](./media/how-to-configure-environment/amlsdk-withoutautoml.jpg)

 ## Add the Azure ML SDK with AutoML to Databricks
-If the cluster was created with Databricks Runtime 7.3 LTS (*not* ML), run the following command in the first cell of your notebook to install the AML SDK.
+If the cluster was created with Databricks Runtime 7.3 LTS (*not* ML), run the following command in the first cell of your notebook to install the AzureML SDK.

 ```
 %pip install --upgrade --force-reinstall -r https://aka.ms/automl_linux_requirements.txt

articles/machine-learning/how-to-create-attach-kubernetes.md

Lines changed: 3 additions & 3 deletions

@@ -29,17 +29,17 @@ Azure Machine Learning can deploy trained machine learning models to Azure Kuber

 ## Limitations

-- If you need a **Standard Load Balancer(SLB)** deployed in your cluster instead of a Basic Load Balancer(BLB), create a cluster in the AKS portal/CLI/SDK and then **attach** it to the AML workspace.
+- If you need a **Standard Load Balancer(SLB)** deployed in your cluster instead of a Basic Load Balancer(BLB), create a cluster in the AKS portal/CLI/SDK and then **attach** it to the AzureML workspace.

 - If you have an Azure Policy that restricts the creation of Public IP addresses, then AKS cluster creation will fail. AKS requires a Public IP for [egress traffic](../aks/limit-egress-traffic.md). The egress traffic article also provides guidance to lock down egress traffic from the cluster through the Public IP, except for a few fully qualified domain names. There are 2 ways to enable a Public IP:
   - The cluster can use the Public IP created by default with the BLB or SLB, Or
   - The cluster can be created without a Public IP and then a Public IP is configured with a firewall with a user defined route. For more information, see [Customize cluster egress with a user-defined-route](../aks/egress-outboundtype.md).

-  The AML control plane does not talk to this Public IP. It talks to the AKS control plane for deployments.
+  The AzureML control plane does not talk to this Public IP. It talks to the AKS control plane for deployments.

 - To attach an AKS cluster, the service principal/user performing the operation must be assigned the __Owner or contributor__ Azure role-based access control (Azure RBAC) role on the Azure resource group that contains the cluster. The service principal/user must also be assigned [Azure Kubernetes Service Cluster Admin Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) on the cluster.

-- If you **attach** an AKS cluster, which has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AML control plane IP ranges for the AKS cluster. The AML control plane is deployed across paired regions and deploys inference pods on the AKS cluster. Without access to the API server, the inference pods cannot be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.
+- If you **attach** an AKS cluster, which has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the AzureML control plane IP ranges for the AKS cluster. The AzureML control plane is deployed across paired regions and deploys inference pods on the AKS cluster. Without access to the API server, the inference pods cannot be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster.

   Authorized IP ranges only works with Standard Load Balancer.

articles/machine-learning/how-to-create-component-pipeline-python.md

Lines changed: 1 addition & 1 deletion

@@ -157,7 +157,7 @@ The `train.py` file contains a normal python function, which performs the traini

 #### Define component using python function

-After defining the training function successfully, you can use @command_component in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in AML pipelines.
+After defining the training function successfully, you can use @command_component in Azure Machine Learning SDK v2 to wrap your function as a component, which can be used in AzureML pipelines.

 :::code language="python" source="~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/train/train_component.py":::
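For readers unfamiliar with the decorator pattern that `@command_component` relies on, here is a heavily simplified, hypothetical stand-in. It only illustrates how a decorator can attach component metadata to a plain Python function while leaving it callable; it is not the `azure.ai.ml` implementation, and all names below are made up:

```python
import functools


def command_component(name=None, display_name=None):
    """Hypothetical sketch: record metadata on the wrapped function."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        wrapper.component = {"name": name or func.__name__,
                             "display_name": display_name or func.__name__}
        return wrapper
    return decorate


@command_component(name="train_model", display_name="Train")
def train(input_data: str, epochs: int = 10) -> str:
    # placeholder training step
    return f"trained on {input_data} for {epochs} epochs"


print(train.component["name"])       # train_model
print(train("mnist", epochs=2))      # trained on mnist for 2 epochs
```

The real decorator additionally registers inputs/outputs and an environment so the pipeline service can schedule the function as a job.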

articles/machine-learning/how-to-data-ingest-adf.md

Lines changed: 2 additions & 2 deletions

@@ -117,7 +117,7 @@ The following Python code demonstrates how to create a datastore that connects t

 ```python
 ws = Workspace.from_config()
-adlsgen2_datastore_name = '<ADLS gen2 storage account alias>' #set ADLS Gen2 storage account alias in AML
+adlsgen2_datastore_name = '<ADLS gen2 storage account alias>' #set ADLS Gen2 storage account alias in AzureML

 subscription_id=os.getenv("ADL_SUBSCRIPTION", "<ADLS account subscription ID>") # subscription id of ADLS account
 resource_group=os.getenv("ADL_RESOURCE_GROUP", "<ADLS account resource group>") # resource group of ADLS account

@@ -147,7 +147,7 @@ from azureml.core import Workspace, Datastore, Dataset
 from azureml.core.experiment import Experiment
 from azureml.train.automl import AutoMLConfig

-# retrieve data via AML datastore
+# retrieve data via AzureML datastore
 datastore = Datastore.get(ws, adlsgen2_datastore)
 datastore_path = [(datastore, '/data/prepared-data.csv')]
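The first hunk above resolves ADLS identifiers with `os.getenv` defaults, and the second pairs a datastore with a file path. A minimal stdlib sketch of those two shapes, with a plain string standing in for the `azureml.core.Datastore` object:

```python
import os

# os.getenv returns the second argument when the variable is unset,
# which is how the diff supplies placeholder defaults.
subscription_id = os.getenv("ADL_SUBSCRIPTION", "<ADLS account subscription ID>")
resource_group = os.getenv("ADL_RESOURCE_GROUP", "<ADLS account resource group>")

# The AutoML hunk builds a list of (datastore, path) tuples; the string
# "adlsgen2_datastore" here is only a stand-in for the real Datastore object.
datastore_path = [("adlsgen2_datastore", "/data/prepared-data.csv")]

print(subscription_id)
print(datastore_path)
```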

articles/machine-learning/how-to-debug-visual-studio-code.md

Lines changed: 2 additions & 2 deletions

@@ -138,7 +138,7 @@ To enable debugging, make the following changes to the Python script(s) used by

 parser.add_argument('--remote_debug', action='store_true')
 parser.add_argument('--remote_debug_connection_timeout', type=int,
                     default=300,
-                    help=f'Defines how much time the AML compute target '
+                    help=f'Defines how much time the AzureML compute target '
                          f'will await a connection from a debugger client (VSCODE).')
 parser.add_argument('--remote_debug_client_ip', type=str,
                     help=f'Defines IP Address of VS Code client')

@@ -195,7 +195,7 @@ parser.add_argument("--output_train", type=str, help="output_train directory")

 parser.add_argument('--remote_debug', action='store_true')
 parser.add_argument('--remote_debug_connection_timeout', type=int,
                     default=300,
-                    help=f'Defines how much time the AML compute target '
+                    help=f'Defines how much time the AzureML compute target '
                          f'will await a connection from a debugger client (VSCODE).')
 parser.add_argument('--remote_debug_client_ip', type=str,
                     help=f'Defines IP Address of VS Code client')
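Both hunks change the same argparse snippet. As a self-contained sketch it can be exercised locally by passing an explicit argument list to `parse_args`, with no compute target involved:

```python
import argparse

# The remote-debug arguments from the hunks above, assembled into a
# runnable parser so the defaults can be checked locally.
parser = argparse.ArgumentParser()
parser.add_argument('--remote_debug', action='store_true')
parser.add_argument('--remote_debug_connection_timeout', type=int,
                    default=300,
                    help='Defines how much time the AzureML compute target '
                         'will await a connection from a debugger client (VSCODE).')
parser.add_argument('--remote_debug_client_ip', type=str,
                    help='Defines IP Address of VS Code client')

args = parser.parse_args(['--remote_debug',
                          '--remote_debug_client_ip', '10.0.0.4'])
print(args.remote_debug, args.remote_debug_connection_timeout)  # True 300
```

Passing a list instead of relying on `sys.argv` is a handy way to unit-test argument handling before submitting the script as a job.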

articles/machine-learning/how-to-deploy-managed-online-endpoint-sdk-v2.md

Lines changed: 2 additions & 2 deletions

@@ -68,10 +68,10 @@ The [workspace](concept-workspace.md) is the top-level resource for Azure Machin

 To connect to a workspace, we need identifier parameters - a subscription, resource group and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).

 ```python
-# enter details of your AML workspace
+# enter details of your AzureML workspace
 subscription_id = "<SUBSCRIPTION_ID>"
 resource_group = "<RESOURCE_GROUP>"
-workspace = "<AML_WORKSPACE_NAME>"
+workspace = "<AZUREML_WORKSPACE_NAME>"
 ```

 ```python

articles/machine-learning/how-to-deploy-model-cognitive-search.md

Lines changed: 1 addition & 1 deletion

@@ -223,7 +223,7 @@ from azureml.core.webservice import AksWebservice, Webservice

 # If deploying to a cluster configured for dev/test, ensure that it was created with enough
 # cores and memory to handle this deployment configuration. Note that memory is also used by
-# things such as dependencies and AML components.
+# things such as dependencies and AzureML components.

 aks_config = AksWebservice.deploy_configuration(autoscale_enabled=True,
                                                 autoscale_min_replicas=1,

articles/machine-learning/how-to-generate-automl-training-code.md

Lines changed: 1 addition & 1 deletion

@@ -440,7 +440,7 @@ run = experiment.submit(config=src)

 Once you have a trained model, you can save/serialize it to a `.pkl` file with `pickle.dump()` and `pickle.load()`. You can also use `joblib.dump()` and `joblib.load()`.

-The following example is how you download and load a model in-memory that was trained in AML compute with `ScriptRunConfig`. This code can run in the same notebook you used the Azure ML SDK `ScriptRunConfig`.
+The following example is how you download and load a model in-memory that was trained in AzureML compute with `ScriptRunConfig`. This code can run in the same notebook you used the Azure ML SDK `ScriptRunConfig`.

 ```python
 import joblib
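The serialization step described in that hunk can be tried locally with the stdlib alone. In this sketch a plain dict stands in for the fitted model; `joblib.dump()`/`joblib.load()` can be used as drop-in replacements for the `pickle` calls:

```python
import os
import pickle
import tempfile

# Stand-in for a fitted model object; a real run pickles the trained
# estimator returned by the AutoML job instead.
model = {"name": "stand-in-model", "coef": [0.1, 0.2]}

# Round trip: dump to a .pkl file, then load it back into memory.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored["name"])  # stand-in-model
```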

0 commit comments