
Commit 2dc7002

Author: Mike Ray (Microsoft)

Simplify some language elements (primarily tense)

1 parent fe456fd · commit 2dc7002

File tree: 1 file changed (+11, -12)


articles/azure-arc/data/create-data-controller-indirect-cli.md

Lines changed: 11 additions & 12 deletions
@@ -1,6 +1,6 @@
 ---
 title: Create data controller using CLI
-description: Create an Azure Arc data controller, on a typical multi-node Kubernetes cluster which you already have created, using the CLI.
+description: Create an Azure Arc data controller, on a typical multi-node Kubernetes cluster that you already have created, using the CLI.
 services: azure-arc
 ms.service: azure-arc
 ms.subservice: azure-arc-data
@@ -20,12 +20,11 @@ Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure
 
 ### Install tools
 
-To create the data controller using the CLI, you will need to install the `arcdata` extension for Azure (az) CLI.
+Before you begin, install the `arcdata` extension for Azure (az) CLI.
 
 [Install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](install-client-tools.md)
 
-Regardless of which target platform you choose, you will need to set the following environment variables prior to the creation for the data controller. These environment variables will become the credentials used for accessing the metrics and logs dashboards after data controller creation.
-
+Regardless of which target platform you choose, you need to set the following environment variables prior to the creation for the data controller. These environment variables become the credentials used for accessing the metrics and logs dashboards after data controller creation.
 
 ### Set environment variables
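For reference, a minimal sketch of this step in a bash shell, not part of the changed lines: `AZDATA_METRICSUI_PASSWORD` is confirmed by the next hunk's context, while the other variable names are assumed to follow the same pattern.

```console
# Install the arcdata extension for the Azure (az) CLI.
az extension add --name arcdata

# Credentials for the logs and metrics dashboards.
# AZDATA_METRICSUI_PASSWORD appears in the diff context below; the other
# variable names here are assumptions following the same naming pattern.
export AZDATA_LOGSUI_USERNAME="<username for logs dashboard>"
export AZDATA_LOGSUI_PASSWORD="<password for logs dashboard>"
export AZDATA_METRICSUI_USERNAME="<username for Grafana dashboard>"
export AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>"
```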

@@ -58,7 +57,7 @@ $ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>"
 
 ### Connect to Kubernetes cluster
 
-You will need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server.
+Connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server.
 
 You can check to see that you have a current Kubernetes connection and confirm your current context with the following commands.
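The checking commands referenced here sit outside the changed lines; with standard kubectl tooling they would typically be:

```console
# Confirm the cluster is reachable and show which context kubectl is using.
kubectl cluster-info
kubectl config current-context
```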

@@ -86,7 +85,7 @@ The following sections provide instructions for specific types of Kubernetes pla
 
 ## Create on Azure Kubernetes Service (AKS)
 
-By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class will only work if you have VMs that were deployed using VM images that have premium disks.
+By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class only works if you have VMs that were deployed using VM images that have premium disks.
 
 If you are going to use `managed-premium` as your storage class, then you can run the following command to create the data controller. Substitute the placeholders in the command with your resource group name, subscription ID, and Azure location.
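The create command referenced in this paragraph is outside the changed lines. As a sketch, it likely mirrors the `az arcdata dc create` calls shown later in this diff, using a built-in AKS profile rather than a custom `--path`; the profile name `azure-arc-aks-premium-storage` is an assumption.

```console
# Sketch only: create the data controller from a built-in AKS premium-storage profile.
# The profile name is an assumption; list the available profiles with:
#   az arcdata dc config list
az arcdata dc create --profile-name azure-arc-aks-premium-storage \
  --k8s-namespace <namespace> --use-k8s --name arc \
  --subscription <subscription id> --resource-group <resource group name> \
  --location <location> --connectivity-mode indirect
```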

@@ -162,7 +161,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#
 
 ### Determine storage class
 
-You will also need to determine which storage class to use by running the following command.
+To determine which storage class to use, run the following command.
 
 ```console
 kubectl get storageclass
@@ -204,10 +203,10 @@ az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.
 Now you are ready to create the data controller using the following command.
 
 > [!NOTE]
-> The `--path` parameter should point to the _directory_ containing the control.json file not to the control.json file itself.
+> The `--path` parameter should point to the _directory_ containing the control.json file not to the control.json file itself.
 
 > [!NOTE]
-> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
 
 ```azurecli
 az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
@@ -222,7 +221,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#
 
 By default, the kubeadm deployment profile uses a storage class called `local-storage` and service type `NodePort`. If this is acceptable you can skip the instructions below that set the desired storage class and service type and immediately run the `az arcdata dc create` command below.
 
-If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command will create a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
+If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command creates a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
 
 ```azurecli
 az arcdata dc config init --source azure-arc-kubeadm --path ./custom
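The storage class and service type customizations mentioned above are applied to the generated `control.json` with `az arcdata dc config replace`; the next hunk's header shows such a command, truncated at `$.spec.`. A hedged sketch, in which the JSON paths are assumptions built on that prefix:

```console
# Sketch only: JSON paths below are assumptions based on the truncated "$.spec." context.
az arcdata dc config replace --path ./custom/control.json \
  --json-values "$.spec.storage.data.className=<storage class name>"
az arcdata dc config replace --path ./custom/control.json \
  --json-values "$.spec.storage.logs.className=<storage class name>"
az arcdata dc config replace --path ./custom/control.json \
  --json-values "$.spec.services[*].serviceType=LoadBalancer"
```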
@@ -254,7 +253,7 @@ az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.
 Now you are ready to create the data controller using the following command.
 
 > [!NOTE]
-> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
 
 ```azurecli
 az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
@@ -297,7 +296,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#
 
 ## Monitor the creation status
 
-Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
+It takes a few minutes to create the controller completely. You can monitor the progress in another terminal window with the following commands:
 
 > [!NOTE]
 > The example commands below assume that you created a data controller named `arc-dc` and Kubernetes namespace named `arc`. If you used different values update the script accordingly.
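The monitoring commands themselves are not part of the changed lines. A minimal sketch, assuming the `arc-dc` controller name and `arc` namespace from the note above and that the data controller is exposed as a `datacontroller` custom resource:

```console
# Watch the data controller resource and its pods come up.
kubectl get datacontroller/arc-dc --namespace arc
kubectl get pods --namespace arc
```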
