`articles/azure-arc/data/automated-integration-testing.md` — 2 additions, 1 deletion

```diff
@@ -413,7 +413,8 @@ At a high-level, the launcher performs the following sequence of steps:
 3. Perform CRD metadata scan to discover existing Arc and Arc Data Services Custom Resources
 4. Clean up any existing Custom Resources in Kubernetes, and subsequent resources in Azure. If there is any mismatch between the credentials in `.test.env` and the resources existing in the cluster, quit.
 5. Generate a unique set of environment variables based on timestamp for Arc Cluster name, Data Controller and Custom Location/Namespace. Prints out the environment variables, obfuscating sensitive values (e.g. Service Principal Password etc.)
-6. a. For Direct Mode - Onboard the Cluster to Azure Arc, then deploys the Controller via the [unified experience](create-data-controller-direct-cli.md?tabs=linux#deploy---unified-experience)
+6. a. For Direct Mode - Onboard the Cluster to Azure Arc, then deploy the controller.
+
    b. For Indirect Mode: deploy the Data Controller
 7. Once the Data Controller is `Ready`, generate a set of Azure CLI ([`az arcdata dc debug`](/cli/azure/arcdata/dc/debug?view=azure-cli-latest&preserve-view=true)) logs and store them locally, labeled as `setup-complete` - as a baseline.
 8. Use the `TESTS_DIRECT/INDIRECT` environment variable from `.test.env` to launch a set of parallelized Sonobuoy test runs based on a space-separated array (`TESTS_(IN)DIRECT`). These runs execute in a new `sonobuoy` namespace, using `arc-sb-plugin` pod that contains the Pytest validation tests.
```
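Step 5's behavior (timestamp-based unique names, with secrets masked when the environment is printed) could be sketched roughly as follows. This is an illustration only; the variable and function names are hypothetical, not taken from the launcher source:

```shell
# Illustrative sketch of the launcher's step 5 (names are hypothetical):
# derive unique resource names from a timestamp, then print the environment
# while obfuscating secret-like values.
ts=$(date +%Y%m%d%H%M%S)
export ARC_CLUSTER_NAME="arc-cluster-${ts}"
export DATA_CONTROLLER_NAME="arc-dc-${ts}"
export CUSTOM_LOCATION_NS="arc-ns-${ts}"
export SPN_PASSWORD="example-secret"   # would really come from .test.env

print_env_masked() {
  for v in ARC_CLUSTER_NAME DATA_CONTROLLER_NAME CUSTOM_LOCATION_NS SPN_PASSWORD; do
    eval "val=\$$v"
    case "$v" in
      *PASSWORD*) echo "$v=********" ;;   # never echo secrets in CI logs
      *)          echo "$v=$val" ;;
    esac
  done
}
print_env_masked
```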
`articles/azure-arc/data/create-data-controller-direct-cli.md` — 3 additions, 206 deletions

```diff
@@ -30,12 +30,7 @@ Creating an Azure Arc data controller in direct connectivity mode involves the following steps:
 1. Create a custom location.
 1. Create the data controller.
 
-You can create them individually or in a unified experience.
-
-## Deploy - unified experience
-
-In the unified experience, you can create the Arc data controller extension, custom location, and Arc data controller all in one command as follows:
-
+Create the Arc data controller extension, custom location, and Arc data controller all in one command as follows:
 
 ##### [Linux](#tab/linux)
 
```
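For reference, the one-command deployment that this change keeps can be sketched on the Linux side as below. The values are the example placeholders that appear later in this same file's `az arcdata dc create` sample; the command is only composed and echoed here, not run:

```shell
# Compose the unified create command (extension + custom location + data
# controller in one step). Values are the article's example placeholders;
# review the echoed command before running it against a real subscription.
name="arc-dc1"
rg="my-resource-group"
loc="eastasia"
profile="azure-arc-aks-premium-storage"
cl="mycustomlocation"
sc="mystorageclass"

cmd="az arcdata dc create --name $name --resource-group $rg --location $loc \
--connectivity-mode direct --profile-name $profile --auto-upload-metrics true \
--custom-location $cl --storage-class $sc"
echo "$cmd"
```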
````diff
@@ -112,205 +107,6 @@ az arcdata dc create --name arc-dc1 --resource-group $ENV:resourceGroup --custom
 
 ---
 
-## Deploy - individual experience
-
-### Step 1: Create an Azure Arc-enabled data services extension
-
-Use the k8s-extension CLI to create a data services extension.
-
-#### Set environment variables
-
-Set the following environment variables, which will then be used in later steps.
-
-Following are two sets of environment variables. The first set of variables identifies your Azure subscription, resource group, cluster name, location, extension, and namespace. The second defines credentials to access the metrics and logs dashboards.
-
-The environment variables include passwords for log and metric services. The passwords must be at least eight characters long and contain characters from three of the following four categories: Latin uppercase letters, Latin lowercase letters, numbers, and non-alphanumeric characters.
-
-##### [Linux](#tab/linux)
-
-```console
-## variables for Azure subscription, resource group, cluster name, location, extension, and namespace.
-export subscription=<Your subscription ID>
-export resourceGroup=<Your resource group>
-export clusterName=<name of your connected Kubernetes cluster>
-export location=<Azure location>
-export adsExtensionName="<extension name>"
-export namespace="<namespace>"
-## variables for logs and metrics dashboard credentials
-export AZDATA_LOGSUI_USERNAME=<username for Kibana dashboard>
-export AZDATA_LOGSUI_PASSWORD=<password for Kibana dashboard>
-export AZDATA_METRICSUI_USERNAME=<username for Grafana dashboard>
-export AZDATA_METRICSUI_PASSWORD=<password for Grafana dashboard>
-```
-
-##### [Windows (PowerShell)](#tab/windows)
-
-```PowerShell
-## variables for Azure location, extension and namespace
-$ENV:subscription="<Your subscription ID>"
-$ENV:resourceGroup="<Your resource group>"
-$ENV:clusterName="<name of your connected Kubernetes cluster>"
-$ENV:location="<Azure location>"
-$ENV:adsExtensionName="<name of Data controller extension>"
-$ENV:namespace="<namespace where you will deploy the extension and data controller>"
-## variables for Metrics and Monitoring dashboard credentials
-$ENV:AZDATA_LOGSUI_USERNAME="<username for Kibana dashboard>"
-$ENV:AZDATA_LOGSUI_PASSWORD="<password for Kibana dashboard>"
-$ENV:AZDATA_METRICSUI_USERNAME="<username for Grafana dashboard>"
-$ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>"
-```
-
----
````
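The password policy in the deleted text (at least eight characters, drawn from at least three of the four character categories) can be checked mechanically. A small sketch, not part of the article, with a hypothetical `check_password` helper:

```shell
# Sketch (illustration only): validate a dashboard password against the
# documented policy - length >= 8 and characters from at least 3 of 4
# categories (uppercase, lowercase, digits, non-alphanumeric).
check_password() {
  pw=$1
  cats=0
  [ "${#pw}" -ge 8 ] || { echo invalid; return 1; }
  printf '%s' "$pw" | grep -q '[A-Z]'        && cats=$((cats + 1))
  printf '%s' "$pw" | grep -q '[a-z]'        && cats=$((cats + 1))
  printf '%s' "$pw" | grep -q '[0-9]'        && cats=$((cats + 1))
  printf '%s' "$pw" | grep -q '[^A-Za-z0-9]' && cats=$((cats + 1))
  if [ "$cats" -ge 3 ]; then echo valid; else echo invalid; return 1; fi
}

check_password 'weakpass' || true   # prints: invalid (only one category)
check_password 'MyS3cret!'          # prints: valid
```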
````diff
-#### Create the Arc data services extension
-
-The following command creates the Arc data services extension.
````

[…]

````diff
-> The Arc data services extension install can take a few minutes to complete.
-
-#### Verify the Arc data services extension is created
-
-You can verify the status of the deployment of the Azure Arc-enabled data services extension from the Azure portal or with the kubectl CLI.
-
-##### Check status from Azure portal
-
-1. Log in to the Azure portal and browse to the resource group where the Kubernetes connected cluster resource is located.
-1. Select the Azure Arc-enabled Kubernetes cluster (Type = "Kubernetes - Azure Arc") where the extension was deployed.
-1. In the navigation on the left side, under **Settings**, select **Extensions**.
-1. The portal shows the extension that was created earlier in an installed state.
-
-##### Check status using kubectl CLI
-
-1. Connect to your Kubernetes cluster via a Terminal window.
-1. Run the below command and ensure:
-   - The namespace mentioned above is created
-
-   and
-
-   - The `bootstrapper` pod state is **Running** before proceeding to the next step.
-
-```console
-kubectl get pods --namespace <name of namespace used in the json template file above>
-```
-
-For example, the following command gets the pods from the `arc` namespace.
-
-```console
-#Example:
-kubectl get pods --namespace arc
-```
-
-### Retrieve the managed identity and grant roles
-
-When the Arc data services extension is created, Azure creates a managed identity. You need to assign certain roles to this managed identity for usage and/or metrics to be uploaded.
-
-#### Retrieve managed identity of the Arc data controller extension
-
-```azurecli
-$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group <resource group> --cluster-name <connectedclustername> --cluster-type connectedClusters --name <name of extension> | ConvertFrom-Json).identity.principalId
-#Example
-$Env:MSI_OBJECT_ID = (az k8s-extension show --resource-group myresourcegroup --cluster-name myconnectedcluster --cluster-type connectedClusters --name ads-extension | ConvertFrom-Json).identity.principalId
-```
````
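On the Linux side, the same managed-identity lookup can use a JMESPath `--query` instead of piping through `ConvertFrom-Json`. A sketch under that assumption; the `get_msi_object_id` helper name is mine, not the article's:

```shell
# Sketch (not from the article): fetch the extension's managed-identity
# principalId via az's built-in --query, the bash equivalent of the
# PowerShell ConvertFrom-Json lookup above.
get_msi_object_id() {
  az k8s-extension show \
    --resource-group "$1" \
    --cluster-name "$2" \
    --cluster-type connectedClusters \
    --name "$3" \
    --query identity.principalId -o tsv
}

# Usage (requires an authenticated Azure CLI), e.g.:
#   MSI_OBJECT_ID=$(get_msi_object_id myresourcegroup myconnectedcluster ads-extension)
```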
````diff
-#### Assign role to the managed identity
-
-Run the below command to assign the **Contributor** and **Monitoring Metrics Publisher** roles:
-
-```azurecli
-az role assignment create --assignee $Env:MSI_OBJECT_ID --role "Contributor" --scope "/subscriptions/$ENV:subscription/resourceGroups/$ENV:resourceGroup"
-az role assignment create --assignee $Env:MSI_OBJECT_ID --role "Monitoring Metrics Publisher" --scope "/subscriptions/$ENV:subscription/resourceGroups/$ENV:resourceGroup"
-```
-
-### Step 2: Create a custom location using `customlocation` CLI extension
-
-A custom location is an Azure resource that is equivalent to a namespace in a Kubernetes cluster. Custom locations are used as a target to deploy resources to or from Azure. Learn more about custom locations in the [Custom locations on top of Azure Arc-enabled Kubernetes documentation](../kubernetes/conceptual-custom-locations.md).
-
-#### Set environment variables
-
-##### [Linux](#tab/linux)
-
-```azurecli
-export clName=mycustomlocation
-export hostClusterId=$(az connectedk8s show --resource-group ${resourceGroup} --name ${clusterName} --query id -o tsv)
-export extensionId=$(az k8s-extension show --resource-group ${resourceGroup} --cluster-name ${clusterName} --cluster-type connectedClusters --name ${adsExtensionName} --query id -o tsv)
````

[…]

````diff
-From the terminal, run the below command to list the custom locations, and validate that the **Provisioning State** shows Succeeded:
-
-```azurecli
-az customlocation list -o table
-```
-
-### Create certificates for logs and metrics UI dashboards
-
-Optionally, you can specify certificates for the logs and metrics UI dashboards. See [Provide certificates for monitoring](monitor-certificates.md) for examples. The December 2021 release introduced this option.
-
-### Step 3: Create the Azure Arc data controller
-
-After the extension and custom location are created, proceed to deploy the Azure Arc data controller as follows.
-
-```azurecli
-az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --profile-name <profile name> --auto-upload-metrics true --custom-location <name of custom location> --storage-class <storageclass>
-# Example
-az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-metrics true --custom-location mycustomlocation --storage-class mystorageclass
-```
-
-If you want to create the Azure Arc data controller using a custom configuration template, follow the steps described in [Create custom configuration profile](create-custom-configuration-template.md) and provide the path to the file as follows:
-
-```azurecli
-az arcdata dc create --name <name> --resource-group <resourcegroup> --location <location> --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --custom-location <name of custom location>
-# Example
-az arcdata dc create --name arc-dc1 --resource-group my-resource-group --location eastasia --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --custom-location mycustomlocation
-```
-
 ## Monitor the status of Azure Arc data controller deployment
 
 The deployment status of the Arc data controller on the cluster can be monitored as follows:
````
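The monitoring step's readiness check can be wrapped in a tiny helper. A sketch only: the `arc` namespace follows the hunk's `kubectl get datacontrollers --namespace arc` context, but the exact jsonpath field is an assumption, not confirmed by this diff:

```shell
# Sketch (assumptions noted above): report the Arc data controller's state
# from the data controller custom resource in a given namespace.
dc_state() {
  kubectl get datacontrollers --namespace "$1" \
    -o jsonpath='{.items[0].status.state}'
}

# Usage (requires cluster access), e.g. wait until the controller is Ready:
#   while [ "$(dc_state arc)" != "Ready" ]; do sleep 30; done
```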
```diff
@@ -323,4 +119,5 @@ kubectl get datacontrollers --namespace arc
 
 [Create an Azure Arc-enabled PostgreSQL server](create-postgresql-server.md)
 
-[Create an Azure SQL managed instance on Azure Arc](create-sql-managed-instance.md)
+[Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
+
```