articles/azure-arc/data/create-data-controller-indirect-cli.md (11 additions, 12 deletions)
@@ -1,6 +1,6 @@
 ---
 title: Create data controller using CLI
-description: Create an Azure Arc data controller, on a typical multi-node Kubernetes cluster which you already have created, using the CLI.
+description: Create an Azure Arc data controller on a typical multi-node Kubernetes cluster that you have already created, using the CLI.
 services: azure-arc
 ms.service: azure-arc
 ms.subservice: azure-arc-data
@@ -20,12 +20,11 @@ Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure

 ### Install tools

-To create the data controller using the CLI, you will need to install the `arcdata` extension for Azure (az) CLI.
+Before you begin, install the `arcdata` extension for Azure (az) CLI.

 [Install the [!INCLUDE [azure-data-cli-azdata](../../../includes/azure-data-cli-azdata.md)]](install-client-tools.md)

-Regardless of which target platform you choose, you will need to set the following environment variables prior to the creation for the data controller. These environment variables will become the credentials used for accessing the metrics and logs dashboards after data controller creation.
-
+Regardless of which target platform you choose, you need to set the following environment variables prior to creating the data controller. These environment variables become the credentials used for accessing the metrics and logs dashboards after data controller creation.

 ### Set environment variables
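
The variable values themselves sit outside this hunk; the next hunk's header shows the PowerShell form (`$ENV:AZDATA_METRICSUI_PASSWORD`). A minimal bash sketch of the same step, assuming the standard `AZDATA_LOGSUI_*` and `AZDATA_METRICSUI_*` names used elsewhere in the Arc-enabled data services docs:

```console
# Credentials for the logs (Kibana) dashboard
export AZDATA_LOGSUI_USERNAME=<username for Kibana dashboard>
export AZDATA_LOGSUI_PASSWORD=<password for Kibana dashboard>

# Credentials for the metrics (Grafana) dashboard
export AZDATA_METRICSUI_USERNAME=<username for Grafana dashboard>
export AZDATA_METRICSUI_PASSWORD=<password for Grafana dashboard>
```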
@@ -58,7 +57,7 @@ $ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>"

 ### Connect to Kubernetes cluster

-You will need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the creation of the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using on how to connect to the Kubernetes API server.
+Connect and authenticate to a Kubernetes cluster, and select an existing Kubernetes context, before you begin creating the Azure Arc data controller. How you connect to a Kubernetes cluster or service varies. See the documentation for the Kubernetes distribution or service that you are using for details on how to connect to the Kubernetes API server.

 You can check to see that you have a current Kubernetes connection and confirm your current context with the following commands.

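
The verification commands are outside the hunk context. For reference while reviewing, the usual checks are standard kubectl commands along these lines:

```console
# Show the cluster the current context points at; fails if you are not connected
kubectl cluster-info

# Print the name of the currently selected context
kubectl config current-context
```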
@@ -86,7 +85,7 @@ The following sections provide instructions for specific types of Kubernetes pla

 ## Create on Azure Kubernetes Service (AKS)

-By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class will only work if you have VMs that were deployed using VM images that have premium disks.
+By default, the AKS deployment profile uses the `managed-premium` storage class. The `managed-premium` storage class only works if you have VMs that were deployed using VM images that have premium disks.

 If you are going to use `managed-premium` as your storage class, then you can run the following command to create the data controller. Substitute the placeholders in the command with your resource group name, subscription ID, and Azure location.

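
The create command this paragraph introduces is not part of the hunk. A sketch of the AKS variant, assuming the `azure-arc-aks-premium-storage` profile name (the other parameters mirror the `az arcdata dc create` calls shown later in this diff):

```azurecli
az arcdata dc create --profile-name azure-arc-aks-premium-storage --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect
```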
@@ -162,7 +161,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#

 ### Determine storage class

-You will also need to determine which storage class to use by running the following command.
+To determine which storage class to use, run the following command.

 ```console
 kubectl get storageclass
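
After picking a class from that output, the custom profile is updated with `az arcdata dc config replace`, the command visible in the next hunk's header. An illustrative sketch; the `$.spec.storage.*.className` JSON paths are an assumption about the control.json layout, so verify them against your generated file:

```azurecli
# Hypothetical paths: point data and logs volumes at the chosen storage class
az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.storage.data.className=<storage class name>"
az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.storage.logs.className=<storage class name>"
```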
@@ -204,10 +203,10 @@ az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.
 Now you are ready to create the data controller using the following command.

 > [!NOTE]
-> The `--path` parameter should point to the _directory_ containing the control.json file not to the control.json file itself.
+> The `--path` parameter should point to the _directory_ containing the control.json file, not to the control.json file itself.

 > [!NOTE]
-> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.

 ```azurecli
 az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
@@ -222,7 +221,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#

 By default, the kubeadm deployment profile uses a storage class called `local-storage` and service type `NodePort`. If this is acceptable you can skip the instructions below that set the desired storage class and service type and immediately run the `az arcdata dc create` command below.

-If you want to customize your deployment profile to specify a specific storage class and/or service type, start by creating a new custom deployment profile file based on the kubeadm deployment profile by running the following command. This command will create a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.
+If you want to customize your deployment profile to specify a particular storage class and/or service type, start by creating a new custom deployment profile based on the kubeadm deployment profile by running the following command. This command creates a directory `custom` in your current working directory and a custom deployment profile file `control.json` in that directory.

 ```azurecli
 az arcdata dc config init --source azure-arc-kubeadm --path ./custom
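
The storage class and service type overrides referred to above fall between this hunk and the next. Following the `--json-values` pattern in the next hunk's header, a hypothetical service-type override; the `$.spec.services[*].serviceType` path is an assumption, not confirmed by this diff:

```azurecli
# Hypothetical path: switch the externally exposed services to NodePort
az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.services[*].serviceType=NodePort"
```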
@@ -254,7 +253,7 @@ az arcdata dc config replace --path ./custom/control.json --json-values "$.spec.
 Now you are ready to create the data controller using the following command.

 > [!NOTE]
-> When deploying to OpenShift Container Platform, you will need to specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.
+> When deploying to OpenShift Container Platform, specify the `--infrastructure` parameter value. Options are: `aws`, `azure`, `alibaba`, `gcp`, `onpremises`.

 ```azurecli
 az arcdata dc create --path ./custom --k8s-namespace <namespace> --use-k8s --name arc --subscription <subscription id> --resource-group <resource group name> --location <location> --connectivity-mode indirect --infrastructure <infrastructure>
@@ -297,7 +296,7 @@ Once you have run the command, continue on to [Monitoring the creation status](#

 ## Monitor the creation status

-Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
+Creating the controller takes a few minutes to complete. You can monitor the progress in another terminal window with the following commands:

 > [!NOTE]
 > The example commands below assume that you created a data controller named `arc-dc` and Kubernetes namespace named `arc`. If you used different values update the script accordingly.
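
The monitoring commands themselves are beyond the hunk context. A sketch of the typical checks, assuming the `arc` namespace from the note above (`kubectl get datacontroller` requires the Arc data controller CRD to be installed):

```console
# Overall data controller state (Ready when creation completes)
kubectl get datacontroller --namespace arc

# Watch the controller pods come up
kubectl get pods --namespace arc
```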