This repository was archived by the owner on Apr 1, 2025. It is now read-only.

Commit 9f1ae27

renamed terraform variable, renamed AKS node resource group
1 parent: 721a52d

File tree: 11 files changed, +71 −48 lines

.github/workflows/terraform.yaml

Lines changed: 2 additions & 2 deletions
```diff
@@ -93,7 +93,7 @@ jobs:
       - name: Terraform Plan
         id: plan
-        run: terraform -chdir=scripts plan -no-color -var "resourcesuffix=dev" -var "backend_account_key=${{ secrets.TF_BACKEND_KEY }}"
+        run: terraform -chdir=scripts plan -no-color -var "environment=dev" -var "backend_account_key=${{ secrets.TF_BACKEND_KEY }}"

       - name: Taint the connector instance
@@ -105,4 +105,4 @@ jobs:
       - name: Terraform Apply
         id: apply
         if: github.ref == 'refs/heads/main' && github.event_name == 'push'
-        run: terraform -chdir=scripts apply -var "resourcesuffix=dev" -var "backend_account_key=${{ secrets.TF_BACKEND_KEY }}" -auto-approve
+        run: terraform -chdir=scripts apply -var "environment=dev" -var "backend_account_key=${{ secrets.TF_BACKEND_KEY }}" -auto-approve
```
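
Renames like this are easy to miss in CI flags, so a recursive grep over the Terraform tree is a quick way to catch stale references to the old variable name. A minimal sketch (the directory and file created here are fabricated stand-ins for illustration, not part of the commit):

```shell
# Create a stand-in for the scripts/ tree (illustrative only).
tmp=$(mktemp -d)
printf 'variable "environment" {\n  type = string\n}\n' > "$tmp/variables.tf"

# After renaming resourcesuffix -> environment, no *.tf file should still
# mention the old name; grep exits non-zero when nothing matches.
if grep -Rn --include='*.tf' 'resourcesuffix' "$tmp" >/dev/null; then
  echo "stale references found"
else
  echo "rename complete"
fi
rm -rf "$tmp"
```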

scripts/README.md

Lines changed: 45 additions & 22 deletions
````diff
@@ -1,63 +1,86 @@
 # Deploy an example configuration with Terraform
 
 It is assumed that the reader has a basic understanding of the following topics:
+
 - Azure
 - Kubernetes + Helm Charts
 - Hashicorp Terraform
 
 ## Create a certificate for the main security principal
-The main security principal is the security context that is used to identify the connector agains Azure AD and uses OAuth2 Client Credentials flow.
-In order to make this as secure as possible the connector authenticates using a certificate. Thus, a `.pem` certificate is required.
 
-For development purposes a self-signed certificate can be created by executing the following commands on the command line:
+The main security principal is the security context that is used to identify the connector against Azure AD and uses
+the OAuth2 Client Credentials flow. In order to make this as secure as possible, the connector authenticates using a
+certificate. Thus, a `.pem` certificate is required.
+
+For development purposes a self-signed certificate can be created by executing the following commands on the command
+line:
+
 ```bash
 openssl req -newkey rsa:4096 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
 openssl pkcs12 -inkey key.pem -in cert.pem -export -out cert.pfx
 ```
-This generates a certificate (`cert.pem`), a private key (`key.pem`) and it also converts the `*.pem` certificate to a "pixie" (=`*.pfx`) certificate, because the Azure
-libs require that.
+
+This generates a certificate (`cert.pem`) and a private key (`key.pem`), and it also converts the `*.pem` certificate
+to a "pixie" (= `*.pfx`) certificate, because the Azure libs require that.
 
 **For now it is required that the certificate is named `"cert.pem"` and is located at the root directory `terraform/`.**
 
 ## Login to the Azure CLI
+
 Install Azure CLI and execute `az login` on a shell.
 
 ## Initialize Terraform
+
 Terraform must be installed. Then download the required providers by executing `terraform init`.
 
 ## Deploy the cluster and associated resources
-Users can run `terraform plan` to create a "dry-run", which lists all resources that will be created in Azure. This is not required, but gives a good
-overview of what is going to happen.
 
-The actual deployment is triggered by running
+Users can run `terraform plan` to create a "dry run", which lists all resources that will be created in Azure. This is
+not required, but gives a good overview of what is going to happen.
+
+The actual deployment is triggered by running
+
 ```bash
 terraform apply
 ```
-which will prompt the user to enter a value for `resourcesuffix`. It is best to enter a short identifier without special characters, e.g. `test123`.
-
+
+which will prompt the user to enter a value for `environment`. It is best to enter a short identifier without special
+characters, e.g. `test123`.
+
 The terraform project will then deploy three resource groups in Azure:
-- `dagx-<suffix>-resources`: This is where the key vault and the blobstore will be
+
+- `dagx-<suffix>-resources`: This is where the key vault and the blobstore will be
 - `dagx-<suffix>-cluster`: will contain the AKS cluster
-- `MC-dagx-<suffix>-cluster_dagx-<suffix>-cluster_<region>`: will contain networking resources, virtual disks, scale sets, etc.
+- `MC-dagx-<suffix>-cluster_dagx-<suffix>-cluster_<region>`: will contain networking resources, virtual disks, scale
+  sets, etc.
 
-`<suffix>` refers to a parameter that can be specified when running `terraform apply`. It simply is a name that is used to identify resources.
-`<region>` is the geographical region of the cluster and associated resources. Can be specified by running `terraform apply -var 'region=eastus'`.
+`<suffix>` refers to a parameter that can be specified when running `terraform apply`. It is simply a name that is used
+to identify resources.
+`<region>` is the geographical region of the cluster and associated resources. It can be specified by
+running `terraform apply -var 'region=eastus'`.
 
 **It takes quite a long time to deploy all resources, 5-10 minutes at least!**
 
 ## Configure a DNS name (manually)
-At this point it is required that the DNS name for the cluster load balancer's ingress route (IP) is configured manually.
-In the resource group whos name begins with `MC-dagx-<suffix>...` there should be a public ip address, whos name starts with `kubernetes_...`.
 
-Open that, open its Configuration and in the `DNS name label` field enter `dagx-<suffix>`, so for example `dagx-test123` so the resulting DNS name (=FQDN)
+At this point it is required that the DNS name for the cluster load balancer's ingress route (IP) is configured
+manually. In the resource group whose name begins with `MC-dagx-<suffix>...` there should be a public IP address whose
+name starts with `kubernetes_...`.
+
+Open that, open its Configuration, and in the `DNS name label` field enter `dagx-<suffix>`, for example `dagx-test123`,
+so that the resulting DNS name (= FQDN)
 should be `dagx-test123.<region>.cloudapp.azure.com`.
 
 ## Re-using AKS credentials in kubernetes and helm
-After the AKS is deployed, we must obtain its credentials before we can deploy any kubernetes workloads. Normally we would do that by running
+
+After the AKS is deployed, we must obtain its credentials before we can deploy any kubernetes workloads. Normally we
+would do that by running
 `az aks get-credentials -n <cluster-name> -g <resourcegroup>`.
 
-However, since both the AKS and Nifi get deployed in one command (i.e. `terraform apply`), there is no chance to obtain credentials manually. According to
-[this example from Hashicorp](https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/_examples/aks/main.tf) it is good practice to deploy the AKS
-and the workload in two different Terraform _contexts_ (=modules), which in our case are named `aks-cluster` and `nifi-config`. Basically this deploys the AKS, stores the credentials in a
-local file `kubeconfig` and the deploys Nifi re-using that config.
+However, since both the AKS and Nifi get deployed in one command (i.e. `terraform apply`), there is no chance to obtain
+credentials manually. According to
+[this example from Hashicorp](https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/_examples/aks/main.tf)
+it is good practice to deploy the AKS and the workload in two different Terraform _contexts_ (= modules), which in our
+case are named `aks-cluster` and `nifi-config`. Basically this deploys the AKS, stores the credentials in a local
+file `kubeconfig`, and then deploys Nifi re-using that config.
````
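
The README's last section describes the two-context credential hand-off only in prose. A minimal Terraform sketch of the pattern follows; the data-source lookup and the `kubeconfig` module input appear in this commit's `scripts/main.tf` diff, while the `local_file` resource and the resource-group name are illustrative assumptions:

```hcl
# Root module: read the cluster's credentials once the aks-cluster module has run ...
data "azurerm_kubernetes_cluster" "atlas" {
  name                = local.cluster_name_atlas
  resource_group_name = "${local.cluster_name_atlas}-rg" # assumed name
}

# ... and hand the raw kubeconfig to the workload module as a plain variable.
module "atlas-deployment" {
  source     = "./atlas-deployment"
  kubeconfig = data.azurerm_kubernetes_cluster.atlas.kube_config_raw
}

# Inside the workload module the kubeconfig can be persisted to a local file
# so that kubectl/helm tooling can re-use it.
resource "local_file" "kubeconfig" {
  content  = var.kubeconfig
  filename = "${path.module}/kubeconfig"
}
```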

scripts/aks-cluster/main.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -32,7 +32,7 @@ resource "azurerm_kubernetes_cluster" "default" {
   location            = var.location
   resource_group_name = azurerm_resource_group.clusterrg.name
   dns_prefix          = var.cluster_name
-
+  node_resource_group = "${var.cluster_name}-node-rg"
   default_node_pool {
     name       = "agentpool"
     node_count = 2
```
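
For context: without the `node_resource_group` argument added above, AKS places the managed node resources in an auto-generated group named `MC_<resource-group>_<cluster-name>_<location>` (the `MC-dagx-...` group the README refers to). A sketch of the effect, with illustrative values and with arguments not shown in the diff marked as assumptions:

```hcl
resource "azurerm_kubernetes_cluster" "default" {
  name                = var.cluster_name
  location            = var.location
  resource_group_name = azurerm_resource_group.clusterrg.name
  dns_prefix          = var.cluster_name

  # Without this line the node group would be auto-named, e.g.
  # "MC_dagx-test123-cluster-rg_dagx-test123-cluster_westeurope".
  node_resource_group = "${var.cluster_name}-node-rg"

  default_node_pool {
    name       = "agentpool"
    node_count = 2
    vm_size    = "Standard_DS2_v2" # assumed; not shown in the diff
  }
}
```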

scripts/atlas-deployment/main.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -41,7 +41,7 @@ resource "tls_self_signed_cert" "atlas-ingress" {
 
 resource "kubernetes_namespace" "atlas" {
   metadata {
-    name = "${var.resourcesuffix}-atlas"
+    name = "${var.environment}-atlas"
   }
 }
```

scripts/atlas-deployment/variables.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -16,7 +16,7 @@ variable "kubeconfig" {
   type = string
 }
 
-variable "resourcesuffix" {
+variable "environment" {
   type = string
 }
```

scripts/connector-deployment/main.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -24,7 +24,7 @@ resource "tls_private_key" "connector-ingress-key" {
 
 resource "kubernetes_namespace" "connector" {
   metadata {
-    name = "${var.resourcesuffix}-cons"
+    name = "${var.environment}-cons"
   }
 }
```

scripts/connector-deployment/variables.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -11,7 +11,7 @@ variable "kubeconfig" {
   type = string
 }
 
-variable "resourcesuffix" {
+variable "environment" {
   type = string
 }
```

scripts/main.tf

Lines changed: 12 additions & 12 deletions
```diff
@@ -133,13 +133,13 @@ data "azurerm_kubernetes_cluster" "connector" {
 data "azurerm_client_config" "current" {}
 
 resource "azurerm_resource_group" "core-resourcegroup" {
-  name     = "${var.resourcesuffix}-resources"
+  name     = "${var.environment}-resources"
   location = var.location
 }
 
 # App registration for the primary identity
 resource "azuread_application" "dagx-terraform-app" {
-  display_name               = "PrimaryIdentity-${var.resourcesuffix}"
+  display_name               = "PrimaryIdentity-${var.environment}"
   available_to_other_tenants = false
 }
 
@@ -160,7 +160,7 @@ resource "azuread_service_principal" "dagx-terraform-app-sp" {
 
 # Keyvault
 resource "azurerm_key_vault" "dagx-terraform-vault" {
-  name                        = "dagx-${var.resourcesuffix}-vault"
+  name                        = "dagx-${var.environment}-vault"
   location                    = azurerm_resource_group.core-resourcegroup.location
   resource_group_name         = azurerm_resource_group.core-resourcegroup.name
   enabled_for_disk_encryption = false
@@ -259,7 +259,7 @@ resource "azurerm_container_group" "dagx-nifi" {
   name                = "dagx-nifi-continst"
   os_type             = "Linux"
   resource_group_name = azurerm_resource_group.core-resourcegroup.name
-  dns_name_label      = "${var.resourcesuffix}-dagx-nifi"
+  dns_name_label      = "${var.environment}-dagx-nifi"
   container {
     cpu   = 4
     image = "ghcr.io/microsoft/data-appliance-gx/nifi:latest"
@@ -303,15 +303,15 @@ module "atlas-cluster" {
   source       = "./aks-cluster"
   cluster_name = local.cluster_name_atlas
   location     = var.location
-  dnsPrefix    = "${var.resourcesuffix}-dagx-atlas"
+  dnsPrefix    = "${var.environment}-dagx-atlas"
 }
 module "atlas-deployment" {
   depends_on = [
     module.atlas-cluster]
   source       = "./atlas-deployment"
   cluster_name = local.cluster_name_atlas
   kubeconfig   = data.azurerm_kubernetes_cluster.atlas.kube_config_raw
-  resourcesuffix = var.resourcesuffix
+  environment  = var.environment
   tenant_id    = data.azurerm_client_config.current.tenant_id
   providers = {
     kubernetes = kubernetes.atlas
@@ -323,18 +323,18 @@ module "atlas-deployment" {
 module "connector-cluster" {
   source       = "./aks-cluster"
   cluster_name = local.cluster_name_connector
-  dnsPrefix    = "${var.resourcesuffix}-connector"
+  dnsPrefix    = "${var.environment}-connector"
   location     = var.location
 }
 
 module "connector-deployment" {
   depends_on = [
     module.connector-cluster]
-  source         = "./connector-deployment"
-  cluster_name   = local.cluster_name_connector
-  kubeconfig     = data.azurerm_kubernetes_cluster.connector.kube_config_raw
-  resourcesuffix = var.resourcesuffix
-  tenant_id      = data.azurerm_client_config.current.tenant_id
+  source       = "./connector-deployment"
+  cluster_name = local.cluster_name_connector
+  kubeconfig   = data.azurerm_kubernetes_cluster.connector.kube_config_raw
+  environment  = var.environment
+  tenant_id    = data.azurerm_client_config.current.tenant_id
   providers = {
     kubernetes = kubernetes.connector
     helm       = helm.connector
```

scripts/nifi-deployment/main.tf

Lines changed: 2 additions & 2 deletions
```diff
@@ -13,7 +13,7 @@ terraform {
 
 resource "kubernetes_namespace" "nifi" {
   metadata {
-    name = "${var.resourcesuffix}-nifi"
+    name = "${var.environment}-nifi"
   }
 }
 
@@ -80,7 +80,7 @@ resource "kubernetes_ingress" "ingress-route" {
 
 # App registration for the loadbalancer
 resource "azuread_application" "dagx-terraform-nifi-app" {
-  display_name               = "Dagx-${var.resourcesuffix}-Nifi"
+  display_name               = "Dagx-${var.environment}-Nifi"
   available_to_other_tenants = false
   reply_urls = [
     "https://${var.public-ip.fqdn}/nifi-api/access/oidc/callback"]
```

scripts/nifi-deployment/variables.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -16,7 +16,7 @@ variable "kubeconfig" {
   type = string
 }
 
-variable "resourcesuffix" {
+variable "environment" {
   type = string
 }
```
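
Since `environment` now feeds resource names such as `dagx-${var.environment}-vault`, and Azure Key Vault names are limited to 3-24 alphanumeric/hyphen characters, it may be worth guarding the variable with a `validation` block. A hedged sketch, not part of this commit:

```hcl
variable "environment" {
  type        = string
  description = "Short identifier appended to all resource names, e.g. \"test123\"."

  validation {
    # "dagx-" (5 chars) + suffix + "-vault" (6 chars) must stay within
    # Key Vault's 24-character name limit, so the suffix may be at most 13.
    condition     = can(regex("^[a-z0-9]{1,13}$", var.environment))
    error_message = "Use a short lowercase alphanumeric identifier without special characters."
  }
}
```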
