### Launch Templates (Optional)

Nebari supports configuring launch templates for your node groups, enabling you to customize settings like the AMI ID and pre-bootstrap commands. This is particularly useful if you need to use a custom AMI or perform specific actions before the node joins the cluster.
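As a rough sketch, a node group that uses a launch template could look like the snippet below in `nebari-config.yaml`. The node group name, instance type, and AMI ID are placeholders, and the `ami_id` and `pre_bootstrap_command` keys are an assumption of how the settings mentioned above map to configuration; check the schema for your Nebari version before relying on them.

```yaml
amazon_web_services:
  node_groups:
    worker:
      instance: m5.xlarge
      min_nodes: 0
      max_nodes: 5
      launch_template:
        # Placeholder: use an EKS optimized AMI that matches your cluster's
        # Kubernetes version and region.
        ami_id: ami-0abcdef1234567890
        # Runs on the node before it joins the cluster.
        pre_bootstrap_command: |
          #!/bin/bash
          echo "Running pre-bootstrap setup"
```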
:::note
You can look up the recommended Amazon Linux AMI ID for your cluster region by inspecting its respective SSM parameter. For more information, see [Retrieve recommended Amazon Linux AMI IDs](https://docs.aws.amazon.com/eks/latest/userguide/retrieve-ami-id.html).
:::

:::warning If you add a `launch_template` to an existing node group that was previously created without one, AWS will treat this as a change requiring the replacement of the entire node group. This action will trigger a reallocation of resources, effectively destroying the current node group and recreating it. This behavior is due to how AWS handles self-managed node groups versus those using launch templates with custom settings.
:::

:::tip To avoid unexpected downtime or data loss, consider creating a new node group with the launch template settings and migrating your workloads accordingly. This approach allows you to implement the new configuration without disrupting your existing resources.
:::
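One way to follow that recommendation, sketched below with hypothetical node group names and sizes, is to leave the existing group untouched, add a second group that carries the launch template, and then shift workloads over:

```yaml
amazon_web_services:
  node_groups:
    # Existing group: left unchanged so AWS does not force a replacement.
    worker:
      instance: m5.xlarge
      min_nodes: 0
      max_nodes: 5
    # New group: created with the launch template from the start.
    worker-custom-ami:
      instance: m5.xlarge
      min_nodes: 0
      max_nodes: 5
      launch_template:
        ami_id: ami-0abcdef1234567890  # placeholder AMI ID
```

Once workloads have moved to `worker-custom-ami`, the original `worker` group can be removed in a follow-up deployment.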
</TabItem>

<TabItem value="azure" label="Azure">

[Microsoft Azure](https://azure.microsoft.com/) has similar settings for the Kubernetes version, region, and instance names, using Azure's own available values.

Azure also requires a field named `storage_account_postfix`, which is generated by `nebari init`. This allows Nebari to create a Storage Account with a globally unique name.

```yaml
### Provider configuration ###
azure:
  region: Central US
  kubernetes_version: 1.19.11
  node_groups:
    general:
      instance: Standard_D4_v3
      min_nodes: 1
      max_nodes: 1
    user:
      instance: Standard_D2_v2
      min_nodes: 0
      max_nodes: 5
    worker:
      instance: Standard_D2_v2
      min_nodes: 0
      max_nodes: 5
  storage_account_postfix: t65ft6q5
```
</TabItem>

<TabItem value="do" label="DigitalOcean">

DigitalOcean has a restriction on autoscaling: the minimum number of nodes allowed is one (`min_nodes: 1`). Even so, it is by far the least expensive provider, even accounting for `spot/pre-emptible` instances on other clouds.
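For illustration only, a DigitalOcean block might mirror the structure of the other providers above, assuming the provider key is `digital_ocean`; the region, Kubernetes version, and droplet sizes below are placeholders rather than recommendations, and `min_nodes` stays at 1 because of the restriction just mentioned.

```yaml
### Provider configuration ###
digital_ocean:
  region: nyc3
  kubernetes_version: "1.21.10-do.0"
  node_groups:
    general:
      instance: g-4vcpu-16gb
      min_nodes: 1
      max_nodes: 1
    user:
      instance: g-2vcpu-8gb
      min_nodes: 1
      max_nodes: 5
    worker:
      instance: g-2vcpu-8gb
      min_nodes: 1
      max_nodes: 5
```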