# Deploy an example configuration with Terraform

It is assumed that the reader has a basic understanding of the following topics:

- Azure
- Kubernetes + Helm Charts
- Hashicorp Terraform

## Create a certificate for the main security principal

The main security principal is the security context that is used to identify the connector against Azure AD and uses the OAuth2 Client Credentials flow. In order to make this as secure as possible, the connector authenticates using a certificate. Thus, a `.pem` certificate is required.

For development purposes a self-signed certificate can be created by executing the following commands on the command line:

```bash
openssl req -newkey rsa:4096 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
openssl pkcs12 -inkey key.pem -in cert.pem -export -out cert.pfx
```

This generates a certificate (`cert.pem`) and a private key (`key.pem`), and it also converts the `*.pem` certificate to a "pixie" (= `*.pfx`) certificate, because the Azure libs require that.
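
To sanity-check the generated files, both can be inspected with `openssl` (a minimal sketch; the second command prompts for the export password chosen during the `pkcs12` step):

```bash
# Print the subject and validity window of the self-signed certificate
openssl x509 -in cert.pem -noout -subject -dates

# Verify that the .pfx bundle is readable
openssl pkcs12 -in cert.pfx -noout -info
```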

**For now it is required that the certificate is named `"cert.pem"` and is located at the root directory `terraform/`.**

## Login to the Azure CLI

Install the Azure CLI and execute `az login` in a shell.
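
For example (a minimal sketch, assuming the Azure CLI is already on the `PATH`):

```bash
# Opens a browser window for interactive authentication
az login

# Confirm that the expected subscription is active before running Terraform
az account show --output table
```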

## Initialize Terraform

Terraform must be installed. Then download the required providers by executing `terraform init`.
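
A minimal sketch, assuming the commands are run from the `terraform/` root directory mentioned above:

```bash
cd terraform/

# Downloads the providers declared by the project into .terraform/
terraform init
```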

## Deploy the cluster and associated resources

Users can run `terraform plan` to create a "dry run", which lists all resources that will be created in Azure. This is not required, but gives a good overview of what is going to happen.

The actual deployment is triggered by running

```bash
terraform apply
```

which will prompt the user to enter a value for `environment`. It is best to enter a short identifier without special characters, e.g. `test123`.
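
To skip the interactive prompt, the value can also be passed directly on the command line (a sketch reusing the variable names that appear in this guide):

```bash
terraform apply -var 'environment=test123' -var 'region=eastus'
```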

The terraform project will then deploy three resource groups in Azure:

- `dagx-<suffix>-resources`: this is where the key vault and the blobstore will be
- `dagx-<suffix>-cluster`: will contain the AKS cluster
- `MC-dagx-<suffix>-cluster_dagx-<suffix>-cluster_<region>`: will contain networking resources, virtual disks, scale sets, etc.

`<suffix>` refers to the short identifier entered for `environment` when running `terraform apply`; it is simply a name that is used to identify resources.
`<region>` is the geographical region of the cluster and associated resources. It can be specified by running `terraform apply -var 'region=eastus'`.

**It takes quite a long time to deploy all resources, 5-10 minutes at least!**

## Configure a DNS name (manually)

At this point it is required that the DNS name for the cluster load balancer's ingress route (IP) is configured manually. In the resource group whose name begins with `MC-dagx-<suffix>...` there should be a public IP address whose name starts with `kubernetes_...`.

Open it, open its Configuration, and in the `DNS name label` field enter `dagx-<suffix>` (for example `dagx-test123`), so that the resulting DNS name (= FQDN) should be `dagx-test123.<region>.cloudapp.azure.com`.
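
The same can be done from the CLI instead of the portal (a sketch; the resource group and public IP names below are illustrative and have to be looked up first, e.g. with the `list` command):

```bash
# Find the kubernetes_... public IP in the managed cluster resource group
az network public-ip list -g 'MC-dagx-test123-cluster_dagx-test123-cluster_eastus' -o table

# Set the DNS name label on that IP (the IP name here is illustrative)
az network public-ip update -g 'MC-dagx-test123-cluster_dagx-test123-cluster_eastus' \
  -n 'kubernetes-a1b2c3' --dns-name 'dagx-test123'
```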

## Re-using AKS credentials in kubernetes and helm

After the AKS is deployed, we must obtain its credentials before we can deploy any Kubernetes workloads. Normally we would do that by running `az aks get-credentials -n <cluster-name> -g <resourcegroup>`.

However, since both the AKS and Nifi get deployed in one command (i.e. `terraform apply`), there is no chance to obtain the credentials manually. According to [this example from Hashicorp](https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/_examples/aks/main.tf) it is good practice to deploy the AKS and the workload in two different Terraform _contexts_ (= modules), which in our case are named `aks-cluster` and `nifi-config`. Basically this deploys the AKS, stores the credentials in a local file `kubeconfig`, and then deploys Nifi re-using that config.
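
Once `terraform apply` has finished, the generated config can be used to verify cluster access directly (a minimal sketch, assuming `kubectl` is installed and the `kubeconfig` file was written to the current working directory):

```bash
# Point kubectl at the kubeconfig written by the aks-cluster module
kubectl --kubeconfig ./kubeconfig get nodes

# Check that the Nifi workload is coming up
kubectl --kubeconfig ./kubeconfig get pods --all-namespaces
```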