
Commit 774ebcb

feat: (IAC-793): support multiple worker/agent availability zones (#270)
feat: (IAC-793): support multiple worker/agent availability zones

Co-authored-by: Ritika Patil <[email protected]>
1 parent f53d1f2 commit 774ebcb

File tree

3 files changed: +14 additions, −8 deletions


docs/CONFIG-VARS.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -63,7 +63,7 @@ To do set these permissions as part of this Terraform script, specify ranges of
 
 NOTE: When deploying infrastructure into a private network (e.g. a VPN), with no public endpoints, the options documented in this block are not applicable.
 
-NOTE: The script will either create a new NSG, or use an existing NSG, if specified in the [`nsg_name`](#use-existing) variable.
+NOTE: The script will either create a new NSG, or use an existing NSG, if specified in the [`nsg_name`](#use-existing) variable.
 
 You can use `default_public_access_cidrs` to set a default range for all created resources. To set different ranges for other resources, define the appropriate variable. Use an empty list `[]` to disallow access explicitly.
 
@@ -75,7 +75,7 @@ You can use `default_public_access_cidrs` to set a default range for all created
 | postgres_public_access_cidrs | IP address ranges allowed to access the Azure PostgreSQL Flexible Server | list of strings || Opens port 5432 by adding Ingress Rule on the NSG. Only used when creating postgres instances. |
 | acr_public_access_cidrs | IP address ranges allowed to access the ACR instance | list of strings || Only used with `create_container_registry=true` |
 
-**NOTE:** In a SCIM environment, the AzureActiveDirectory service tag must be granted access to port 443/HTTPS for the Ingress IP address.
+**NOTE:** In a SCIM environment, the AzureActiveDirectory service tag must be granted access to port 443/HTTPS for the Ingress IP address.
 
 ## Networking
 
@@ -147,7 +147,7 @@ Example for the `subnet_names` variable:
 ```yaml
 subnet_names = {
 ## Required subnets
-'aks': '<my_aks_subnet_name>',
+'aks': '<my_aks_subnet_name>',
 'misc': '<my_misc_subnet_name>',
 
 ## If using ha storage then the following is also required
@@ -261,7 +261,7 @@ In addition, you can control the placement for the additional node pools using t
 
 | Name | Description | Type | Default | Notes |
 | :--- | ---: | ---: | ---: | ---: |
-| node_pools_availability_zone | Availability Zone for the additional node pools and the NFS VM, for `storage_type="standard"'| string | "1" | The possible values depend on the region set in the "location" variable. |
+| node_pools_availability_zone | Availability Zone for the additional node pools and the NFS VM, for `storage_type="standard"`| string | "1" | The possible values depend on the region set in the "location" variable. |
 | node_pools_proximity_placement | Co-locates all node pool VMs for improved application performance. | bool | false | Selecting proximity placement imposes an additional constraint on VM creation and can lead to more frequent denials of VM allocation requests. We recommend that you set `node_pools_availability_zone=""` and allocate all required resources at one time by setting `min_nodes` and `max_nodes` to the same value for all node pools. Additional information: [Proximity Group Placement](./user/ProximityPlacementGroup.md). |
 
 ## Storage
````
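The `node_pools_proximity_placement` notes above recommend disabling zoning and allocating all nodes at once. A minimal tfvars sketch of that recommendation (not part of this commit; the min/max guidance applies per node pool):

```terraform
# Per the notes above: proximity placement works best with zoning disabled
# and a fixed node count (set min_nodes == max_nodes for each node pool)
# so all VMs are requested at one time.
node_pools_availability_zone   = ""
node_pools_proximity_placement = true
```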

main.tf

Lines changed: 1 addition & 1 deletion
```diff
@@ -196,7 +196,7 @@ module "node_pools" {
   max_pods                     = each.value.max_pods == null ? 110 : each.value.max_pods
   node_taints                  = each.value.node_taints
   node_labels                  = each.value.node_labels
-  zones                        = (var.node_pools_availability_zone == "" || var.node_pools_proximity_placement == true) ? [] : [var.node_pools_availability_zone]
+  zones                        = (var.node_pools_availability_zone == "" || var.node_pools_proximity_placement == true) ? [] : (var.node_pools_availability_zones != null) ? var.node_pools_availability_zones : [var.node_pools_availability_zone]
   proximity_placement_group_id = element(coalescelist(azurerm_proximity_placement_group.proximity.*.id, [""]), 0)
   orchestrator_version         = var.kubernetes_version
   tags                         = var.tags
```
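The replacement `zones` expression chains two conditionals; since Terraform's conditional operator associates to the right, the second ternary forms the else branch of the first. A parenthesized reading of the committed logic, shown only as a sketch:

```terraform
# Equivalent, parenthesized reading of the new zones expression:
# 1. No zone configured, or proximity placement requested -> no zones ([]).
# 2. Otherwise, prefer the new node_pools_availability_zones list when set.
# 3. Otherwise, fall back to the legacy single-zone string.
zones = (
  (var.node_pools_availability_zone == "" || var.node_pools_proximity_placement == true)
  ? []
  : (
    var.node_pools_availability_zones != null
    ? var.node_pools_availability_zones
    : [var.node_pools_availability_zone]
  )
)
```

Because `node_pools_availability_zones` defaults to `null`, existing deployments that set only `node_pools_availability_zone` keep their current behavior.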

variables.tf

Lines changed: 9 additions & 3 deletions
```diff
@@ -113,7 +113,7 @@ variable "default_nodepool_max_pods" {
 }
 
 variable "default_nodepool_availability_zones" {
-  type    = list(any)
+  type    = list(string)
   default = ["1"]
 }
 
@@ -373,6 +373,12 @@ variable "node_pools_availability_zone" {
   default = "1"
 }
 
+variable "node_pools_availability_zones" {
+  description = "Specifies a list of Availability Zones in which the Kubernetes Cluster Node Pool should be located. Changing this forces a new Kubernetes Cluster Node Pool to be created."
+  type        = list(string)
+  default     = null
+}
+
 variable "node_pools_proximity_placement" {
   type    = bool
   default = false
@@ -560,8 +566,8 @@ variable "subnet_names" {
   description = "Map subnet usage roles to existing subnet names"
   # Example:
   # subnet_names = {
-  #   'aks': 'my_aks_subnet',
-  #   'misc': 'my_misc_subnet',
+  #   'aks': 'my_aks_subnet',
+  #   'misc': 'my_misc_subnet',
   #   'netapp': 'my_netapp_subnet'
   # }
 }
```
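With `default_nodepool_availability_zones` tightened to `list(string)` and the new `node_pools_availability_zones` defaulting to `null`, the default node pool and the additional node pools can be zoned independently. A hedged tfvars sketch (zone values are illustrative; valid zones depend on the `location` region):

```terraform
# Default (system) node pool zones; typed list(string) as of this commit.
default_nodepool_availability_zones = ["1", "2", "3"]

# Additional node pools and the NFS VM. Leaving this null (the default)
# falls back to the legacy single-zone node_pools_availability_zone value.
node_pools_availability_zones = ["1", "2"]
```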
