
Commit 1ab35d0

Merge pull request #114203 from jluk/update-dashboard
Update limitations of BYO IP on SLB & add ds workaround on support
2 parents 8c17f2d + d817e8e commit 1ab35d0

2 files changed: +26 -15 lines changed

articles/aks/load-balancer-standard.md

Lines changed: 11 additions & 9 deletions
@@ -86,12 +86,17 @@ When using a *Standard* SKU load balancer, the AKS cluster automatically creates
By bringing multiple IP addresses or prefixes, you are able to define multiple backing services when defining the IP address behind a single load balancer object. The egress endpoint of specific nodes will depend on what service they are associated with.

-> [!IMPORTANT]
-> You must use *Standard* SKU public IPs for egress with your *Standard* SKU your load balancer. You can verify the SKU of your public IPs using the [az network public-ip show][az-network-public-ip-show] command:
->
-> ```azurecli-interactive
-> az network public-ip show --resource-group myResourceGroup --name myPublicIP --query sku.name -o tsv
-> ```
+### Pre-requisites to bring-your-own IP addresses or IP prefixes
+1. You must use *Standard* SKU public IPs for egress with your *Standard* SKU load balancer. You can verify the SKU of your public IPs using the [az network public-ip show][az-network-public-ip-show] command:
+
+    ```azurecli-interactive
+    az network public-ip show --resource-group myResourceGroup --name myPublicIP --query sku.name -o tsv
+    ```
+1. The public IPs and IP prefixes must be in the same region and part of the same subscription as your AKS cluster.
+1. The public IPs and IP prefixes cannot be IPs created by AKS as managed IPs. Ensure any IPs specified as custom IPs were created manually and not by the AKS service.
+1. The public IPs and IP prefixes cannot be used by another resource or service.
+
+### Define your own public IP or prefixes on an existing cluster

Use the [az network public-ip show][az-network-public-ip-show] command to list the IDs of your public IPs.

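For example, a minimal sketch of retrieving the resource ID of one public IP; the resource group and IP names are placeholders:

```azurecli-interactive
# Show the full resource ID of a single public IP (placeholder names);
# this ID is what the az aks update parameters below expect.
az network public-ip show --resource-group myResourceGroup --name myPublicIP --query id -o tsv
```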

@@ -129,9 +134,6 @@ az aks update \
--load-balancer-outbound-ip-prefixes <publicIpPrefixId1>,<publicIpPrefixId2>
```

-> [!IMPORTANT]
-> The public IPs and IP prefixes must be in the same region and part of the same subscription as your AKS cluster.
-
### Define your own public IP or prefixes at cluster create time

You may wish to bring your own IP addresses or IP prefixes for egress at cluster creation time to support scenarios like whitelisting egress endpoints. Append the same parameters shown above to your cluster creation step to define your own public IPs and IP prefixes at the start of a cluster's lifecycle.

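A minimal sketch of that creation step, assuming two pre-created *Standard* SKU public IPs; the resource names are placeholders and the angle-bracket values stand in for real resource IDs:

```azurecli-interactive
# Create a cluster that uses pre-created public IPs for egress (placeholder values).
# For IP prefixes, --load-balancer-outbound-ip-prefixes takes prefix IDs the same way.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --generate-ssh-keys \
    --load-balancer-sku standard \
    --load-balancer-outbound-ips <publicIpId1>,<publicIpId2>
```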

articles/aks/support-policies.md

Lines changed: 15 additions & 6 deletions
@@ -36,11 +36,6 @@ AKS isn't a completely managed cluster solution. Some components, such as worker
The services are *managed* in the sense that Microsoft and the AKS team deploys, operates, and is responsible for service availability and functionality. Customers can't alter these managed components. Microsoft limits customization to ensure a consistent and scalable user experience. For a fully customizable solution, see [AKS Engine](https://github.com/Azure/aks-engine).

-> [!NOTE]
-> AKS worker nodes appear in the Azure portal as regular Azure IaaS resources. But these virtual machines are deployed into a custom Azure resource group (prefixed with MC\*). It's possible to change AKS worker nodes. For example, you can use Secure Shell (SSH) to change AKS worker nodes the way you change normal virtual machines (you can't, however, change the base OS image, and changes might not persist through an update or reboot), and you can attach other Azure resources to AKS worker nodes. But when you make changes *out of band management and customization,* the AKS cluster can become unsupportable. Avoid changing worker nodes unless Microsoft Support directs you to make changes.
-
-Issuing unsupported operations as defined above, such as out of band deallocation of all agent nodes, renders the cluster unsupported. AKS reserves the right to archive control planes that have been configured out of support guidelines for extended periods equal to and beyond 30 days. AKS maintains backups of cluster etcd metadata and can readily reallocate the cluster. This reallocation can be initiated by any PUT operation bringing the cluster back into support, such as an upgrade or scale to active agent nodes.
-
## Shared responsibility

When a cluster is created, the customer defines the Kubernetes worker nodes that AKS creates. Customer workloads are executed on these nodes. Customers own and can view or modify the worker nodes.

@@ -101,8 +96,22 @@ Microsoft doesn't automatically reboot worker nodes to apply OS-level patches. A
Customers are responsible for executing Kubernetes upgrades. They can execute upgrades through the Azure control panel or the Azure CLI. This applies for updates that contain security or functionality improvements to Kubernetes.

+#### User customization of worker nodes
+> [!NOTE]
+> AKS worker nodes appear in the Azure portal as regular Azure IaaS resources. But these virtual machines are deployed into a custom Azure resource group (prefixed with MC\*). It is possible to augment AKS worker nodes from their base configurations. For example, you can use Secure Shell (SSH) to change AKS worker nodes the way you change normal virtual machines. You cannot, however, change the base OS image, and any custom changes may not persist through an upgrade, scale, update, or reboot. In addition, making changes *out of band and out of scope of the AKS API* leads to the AKS cluster becoming unsupported. Avoid changing worker nodes unless Microsoft Support directs you to make changes.
+
+Issuing unsupported operations as defined above, such as out of band deallocation of all agent nodes, renders the cluster unsupported. AKS reserves the right to archive control planes that have been configured out of support guidelines for extended periods equal to and beyond 30 days. AKS maintains backups of cluster etcd metadata and can readily reallocate the cluster. This reallocation can be initiated by any PUT operation that brings the cluster back into support, such as an upgrade or a scale to active agent nodes.

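For example, a scale of the agent nodes is one such PUT operation; a minimal sketch with the Azure CLI, where the resource names and node count are placeholders (clusters with multiple node pools also need the pool name):

```azurecli-interactive
# Scale the cluster's agent nodes; a mutating (PUT) operation like this can
# bring an out-of-support cluster back into support. Placeholder names and count.
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3
```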
+
+AKS manages the lifecycle and operations of worker nodes on behalf of customers; modifying the IaaS resources associated with the worker nodes is **not supported**. An example of an unsupported operation is customizing a node pool VM Scale Set by manually changing its configuration through the VMSS portal or the VMSS API.
+
+For workload-specific configurations or packages, AKS recommends using [Kubernetes daemonsets](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/).
+
+Using Kubernetes privileged daemonsets and init containers enables customers to tune, modify, or install third-party software on cluster worker nodes. Examples of such customizations include adding custom security scanning software or updating sysctl settings.
+
+While this is a recommended path if the above requirements apply, AKS engineering and support cannot assist in troubleshooting or diagnosing modifications that are broken or nonfunctional, or that render a node unavailable, due to a customer-deployed daemonset.

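A minimal sketch of such a privileged daemonset, using an init container to raise a node-level kernel setting; the namespace, names, image, and sysctl value are illustrative assumptions rather than anything prescribed by the support policy:

```azurecli-interactive
# Illustrative only: a privileged init container applies a node-level sysctl,
# then a lightweight container keeps the pod (and thus the daemonset) running.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysctl-tuner        # placeholder name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: sysctl-tuner
  template:
    metadata:
      labels:
        app: sysctl-tuner
    spec:
      initContainers:
      - name: set-sysctl
        image: busybox
        command: ["sh", "-c", "sysctl -w vm.max_map_count=262144"]
        securityContext:
          privileged: true   # needed to write node-level settings under /proc/sys
      containers:
      - name: sleep
        image: busybox
        command: ["sh", "-c", "while true; do sleep 3600; done"]
EOF
```

Because a daemonset schedules a pod onto every node, including nodes added later by scaling or upgrades, the customization is reapplied automatically without touching the underlying IaaS resources.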
+
> [!NOTE]
-> Because AKS is a *managed service*, its end goals include removing responsibility for patches, updates, and log collection to make the service management more complete and hands-off. As the service's capacity for end-to-end management increases, future releases might omit some functions (for example, node rebooting and automatic patching).
+> AKS as a *managed service* has end goals such as removing responsibility for patches, updates, and log collection to make service management more complete and hands-off. As the service's capacity for end-to-end management increases, future releases might omit some functions (for example, node rebooting and automatic patching).

### Security issues and patching
