Commit fde433f

Merge pull request #204485 from MikeRayMSFT/20220712-least-privilege
Add least-privilege
2 parents d0dbf04 + 1b47d44 commit fde433f

File tree: 3 files changed, +209 -0 lines changed

articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md

Lines changed: 2 additions & 0 deletions
@@ -22,6 +22,8 @@ Creating the data controller has the following high level steps:
 > [!NOTE]
 > For simplicity, the steps below assume that you are a Kubernetes cluster administrator. For production deployments or more secure environments, it is recommended to follow the security best practice of "least privilege" when deploying the data controller by granting only specific permissions to users and service accounts involved in the deployment process.
+>
+> See the topic [Operate Arc-enabled data services with least privileges](least-privilege.md) for detailed instructions.

 ## Prerequisites

articles/azure-arc/data/least-privilege.md
Lines changed: 205 additions & 0 deletions
@@ -0,0 +1,205 @@
---
title: Operate Azure Arc-enabled data services with least privileges
description: Explains how to operate Azure Arc-enabled data services with least privileges
services: azure-arc
ms.service: azure-arc
ms.subservice: azure-arc-data
author: twright-msft
ms.author: twright
ms.reviewer: mikeray
ms.date: 11/07/2021
ms.topic: how-to
---

# Operate Azure Arc-enabled data services with least privileges

Operating Arc-enabled data services with least privileges is a security best practice. Grant users and service accounts only the specific permissions required to perform their tasks. Both Azure and Kubernetes provide a role-based access control model that can be used to grant these specific permissions. This article describes common scenarios in which the principle of least privilege should be applied.
> [!NOTE]
> In this article, a namespace name of `arc` is used. If you choose a different name, use the same name throughout.
> This article uses the `kubectl` CLI utility in its examples, but any tool or system that uses the Kubernetes API can be used.
## Deploy the Azure Arc data controller

Deploying the Azure Arc data controller requires some permissions that can be considered high privilege, such as creating a Kubernetes namespace or creating a cluster role. The following steps separate the deployment of the data controller into multiple stages, each of which can be performed by a user or a service account that has the required permissions. This separation of duties ensures that each user or service account in the process has just the permissions required and nothing more.

### Deploy a namespace in which the data controller will be created

This step creates a new, dedicated Kubernetes namespace into which the Arc data controller will be deployed. It is essential to perform this step first, because the following steps use this new namespace as the scope for the permissions being granted.

Permissions required to perform this action:

- Namespace
  - Create
  - Edit (if required for OpenShift clusters)

Run a command similar to the following to create a new, dedicated namespace in which the data controller will be created.

```console
kubectl create namespace arc
```

If you are using OpenShift, you will need to edit the `openshift.io/sa.scc.supplemental-groups` and `openshift.io/sa.scc.uid-range` annotations on the namespace using `kubectl edit namespace <name of namespace>`. Change these existing annotations to match these _specific_ UID and fsGroup IDs/ranges.

```console
openshift.io/sa.scc.supplemental-groups: 1000700001/10000
openshift.io/sa.scc.uid-range: 1000700001/10000
```
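
These annotation values use OpenShift's `<start>/<length>` block notation: the number before the slash is the first allowed ID and the number after it is the size of the block. As a small illustrative sketch (the shell variable names are hypothetical, not part of the deployment), you can expand the range to see which IDs it permits:

```shell
# Expand an OpenShift ID range annotation value of the form <start>/<length>.
# "1000700001/10000" means a block of 10000 IDs starting at 1000700001.
range="1000700001/10000"
start="${range%/*}"     # portion before the slash: first ID in the block
length="${range#*/}"    # portion after the slash: number of IDs in the block
echo "allowed IDs: ${start}..$((start + length - 1))"
# prints: allowed IDs: 1000700001..1000710000
```

Pods in the namespace must run with UIDs and fsGroup IDs that fall inside this block.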

## Assign permissions to the deploying service account and users/groups

This step creates a service account and assigns roles and cluster roles to it so that the service account can be used in a job to deploy the Arc data controller with the least privileges required.

Permissions required to perform this action:

- Service account
  - Create
- Role
  - Create
- Role binding
  - Create
- Cluster role
  - Create
- Cluster role binding
  - Create
- All the permissions being granted to the service account (see the `arcdata-deployer.yaml` below for details)

Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace created in the previous step, for example: `arc`. Run the following command to create the deployer service account with the edited file.

```console
kubectl apply --namespace arc -f arcdata-deployer.yaml
```
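
Rather than hand-editing the file, you can substitute the `{{NAMESPACE}}` placeholder with `sed`. The snippet below is a sketch against a hypothetical one-line fragment of the file, just to show how the substitution behaves; the fragment is not the real file contents.

```shell
# Demonstrate the placeholder substitution on a hypothetical fragment;
# the real arcdata-deployer.yaml uses the same {{NAMESPACE}} token.
printf 'metadata:\n  namespace: {{NAMESPACE}}\n' | sed 's/{{NAMESPACE}}/arc/g'
# prints:
# metadata:
#   namespace: arc
```

Applied to your saved copy of the real file, the same expression can feed `kubectl` directly: `sed 's/{{NAMESPACE}}/arc/g' arcdata-deployer.yaml | kubectl apply --namespace arc -f -`.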

## Grant permissions to users to create the bootstrapper job and data controller

Permissions required to perform this action:

- Role
  - Create
- Role binding
  - Create

Save a copy of [arcdata-installer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/arcdata-installer.yaml), and replace the placeholder `{{INSTALLER_USERNAME}}` in the file with the name of the user to grant the permissions to, for example: `[email protected]`. Add additional role binding subjects such as other users or groups as needed. Run the following command to create the installer permissions with the edited file.

```console
kubectl apply --namespace arc -f arcdata-installer.yaml
```

## Deploy the bootstrapper job

Permissions required to perform this action:

- User that is assigned to the arcdata-installer-role role in the previous step

Run the following command to create the bootstrapper job that runs preparatory steps to deploy the data controller.

```console
kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml
```

## Create the Arc data controller

Now you are ready to create the data controller itself.

First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.

### Create the metrics and logs dashboards user names and passwords

At the top of the file, you can specify a user name and password that are used to authenticate to the metrics and logs dashboards as an administrator. Choose a secure password and share it only with those who need these privileges.

A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password.

```console
echo -n '<your string to encode here>' | base64
# echo -n 'example' | base64
```
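
For example, encoding a hypothetical username `arcadmin` (and decoding the result to verify the round trip) looks like this:

```shell
# Encode a hypothetical dashboard username; -n prevents a trailing
# newline from being included in the encoded value.
echo -n 'arcadmin' | base64
# prints: YXJjYWRtaW4=

# Decode to verify the round trip.
echo -n 'YXJjYWRtaW4=' | base64 --decode
# prints: arcadmin
```

Paste the resulting base64 strings into the corresponding secret fields at the top of your copy of the template.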

Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify SSL/TLS certificates during Kubernetes native tools deployment](monitor-certificates.md).

### Edit the data controller configuration

Edit the data controller configuration as needed:

#### REQUIRED

- `location`: Change this to the Azure location where the _metadata_ about the data controller will be stored. Review the [list of available regions](overview.md#supported-regions).
- `logsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the logs UI certificate.
- `metricsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.

#### Recommended: review and possibly change defaults

Review these values, and update for your deployment:

- `storage.data.className` and `storage.logs.className`: The storage classes to use for the data controller data and log files. If you are unsure of the available storage classes in your Kubernetes cluster, run `kubectl get storageclass`. The default value is `default`, which assumes a storage class named `default` exists, not that some storage class is marked as the cluster default. Note: There are two `className` settings to set to the desired storage class - one for data and one for logs.
- `serviceType`: Change the service type to `NodePort` if you are not using a `LoadBalancer`.
- `security`: For Azure Red Hat OpenShift or Red Hat OpenShift Container Platform, replace the `security:` settings with the following values in the data controller yaml file.

```yml
  security:
    allowDumps: false
    allowNodeMetricsCollection: false
    allowPodMetricsCollection: false
```

#### Optional

The following settings are optional.

- `name`: The default name of the data controller is `arc`, but you can change it if you want.
- `displayName`: Set this to the same value as the `name` attribute at the top of the file.
- `registry`: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and pushing them to a private container registry, enter the IP address or DNS name of your registry here.
- `dockerRegistry`: The secret to use to pull the images from a private container registry if required.
- `repository`: The default repository on the Microsoft Container Registry is `arcdata`. If you are using a private container registry, enter the path to the folder/repository containing the Azure Arc-enabled data services container images.
- `imageTag`: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version.
- `logsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the logs UI certificate.
- `metricsui-certificate-secret`: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.

The following example shows a completed data controller yaml.

:::code language="yaml" source="~/azure_arc_sample/arc_data_services/deploy/yaml/data-controller.yaml":::

Save the edited file on your local computer and run the following command to create the data controller:

```console
kubectl create --namespace arc -f <path to your data controller file>

# Example
kubectl create --namespace arc -f data-controller.yaml
```

### Monitoring the creation status

Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:

```console
kubectl get datacontroller --namespace arc
```

```console
kubectl get pods --namespace arc
```

You can also check the creation status or logs of any particular pod by running commands like the ones below. This is especially useful for troubleshooting any issues.

```console
kubectl describe pod/<pod name> --namespace arc
kubectl logs <pod name> --namespace arc

# Example:
# kubectl describe pod/control-2g7bl --namespace arc
# kubectl logs control-2g7bl --namespace arc
```

## Next steps

You have several additional options for creating the Azure Arc data controller:

> **Just want to try things out?**
> Get started quickly with [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_data/) on AKS, Amazon EKS, or GKE, or in an Azure VM.

- [Create a data controller in direct connectivity mode with the Azure portal](create-data-controller-direct-prerequisites.md)
- [Create a data controller in indirect connectivity mode with CLI](create-data-controller-indirect-cli.md)
- [Create a data controller in indirect connectivity mode with Azure Data Studio](create-data-controller-indirect-azure-data-studio.md)
- [Create a data controller in indirect connectivity mode from the Azure portal via a Jupyter notebook in Azure Data Studio](create-data-controller-indirect-azure-portal.md)
- [Create a data controller in indirect connectivity mode with Kubernetes tools such as `kubectl` or `oc`](create-data-controller-using-kubernetes-native-tools.md)

articles/azure-arc/data/toc.yml

Lines changed: 2 additions & 0 deletions
@@ -28,6 +28,8 @@ items:
       href: storage-configuration.md
     - name: Sizing guidance
       href: sizing-guidance.md
+    - name: Operate with least privilege
+      href: least-privilege.md
     - name: How-to guides
       items:
       - name: Install tools