
[EKS Controller] could not change the upgrade policy for cluster #2441

@gecube

Description

I created a cluster with default properties, like:

apiVersion: eks.services.k8s.aws/v1alpha1
kind: Cluster
metadata:
  name: production-new
  namespace: infra-production
spec:
  kubernetesNetworkConfig:
    ipFamily: ipv4
    serviceIPv4CIDR: 172.20.0.0/16
  name: production-new
  roleARN: arn:aws:iam::*****:role/eks-production-role
  logging:
    clusterLogging:
      - enabled: true
        types:
          - api
          - audit
          - authenticator
          - controllerManager
          - scheduler
  resourcesVPCConfig:
    endpointPrivateAccess: true
    endpointPublicAccess: false
    subnetIDs:
      - subnet-0eeac56411254cbc6
      - subnet-0f06902b47c880118
      - subnet-0c72af713be937dcc
  tags:
    Name: production-new
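
After Flux applies this manifest, the resource can be inspected with kubectl (names as in the manifest above; the fully qualified resource name avoids clashing with other Cluster kinds):

kubectl -n infra-production get clusters.eks.services.k8s.aws production-new
kubectl -n infra-production describe clusters.eks.services.k8s.aws production-new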

I found that the cluster is created by default with the Extended support upgrade policy:

[Screenshot: AWS console showing the cluster's upgrade policy as "Extended support"]
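
For reference, the effective policy can also be checked outside the controller with the AWS CLI; a minimal check, assuming credentials for the same account and the eu-west-2 region shown in the status below:

aws eks describe-cluster --name production-new --region eu-west-2 \
  --query 'cluster.upgradePolicy'

For a cluster in this state it should return something like {"supportType": "EXTENDED"}.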

I want to change the upgrade policy to Standard, so I change the manifest and apply it. I get:

apiVersion: eks.services.k8s.aws/v1alpha1
kind: Cluster
metadata:
  resourceVersion: '936721627'
  name: production-new
  uid: db483972-9b13-4c93-a18a-252a2ec5f64c
  creationTimestamp: '2025-04-12T05:21:07Z'
  generation: 4
...
  namespace: infra-production
  finalizers:
    - finalizers.eks.services.k8s.aws/Cluster
  labels:
    kustomize.toolkit.fluxcd.io/name: infra-management
    kustomize.toolkit.fluxcd.io/namespace: flux-system
spec:
  resourcesVPCConfig:
    endpointPrivateAccess: true
    endpointPublicAccess: false
    subnetIDs:
      - subnet-0eeac56411254cbc6
      - subnet-0f06902b47c880118
      - subnet-0c72af713be937dcc
  accessConfig:
    authenticationMode: CONFIG_MAP
    bootstrapClusterCreatorAdminPermissions: true
  roleARN: 'arn:aws:iam::*****:role/eks-production-role'
  kubernetesNetworkConfig:
    elasticLoadBalancing:
      enabled: false
    ipFamily: ipv4
    serviceIPv4CIDR: 172.20.0.0/16
  name: production-new
  upgradePolicy:
    supportType: STANDARD
  version: '1.32'
  tags:
    Name: production-new
  logging:
    clusterLogging:
      - enabled: true
        types:
          - api
          - audit
          - authenticator
          - controllerManager
          - scheduler
status:
  platformVersion: eks.6
  ackResourceMetadata:
    arn: 'arn:aws:eks:eu-west-2:*****:cluster/production-new'
    ownerAccountID: '****'
    region: eu-west-2
  certificateAuthority:
    data: *****
  status: ACTIVE
  endpoint: 'https://******.eu-west-2.eks.amazonaws.com'
  conditions:
    - lastTransitionTime: '2025-04-29T08:13:37Z'
      status: 'True'
      type: ACK.ResourceSynced
    - message: 'InvalidParameterException: Cluster is already at the desired configuration with endpointPrivateAccess: true , endpointPublicAccess: false, and Public Endpoint Restrictions: [0.0.0.0/0]'
      status: 'True'
      type: ACK.Terminal
  createdAt: '2025-04-12T05:28:04Z'
  health: {}
  identity:
    oidc:
      issuer: 'https://oidc.eks.eu-west-2.amazonaws.com/id/*****'

So, effectively, the upgrade policy is not changed: the controller goes terminal on the resourcesVPCConfig update (which AWS rejects because the endpoint access settings are already at the desired values) and never sends the upgradePolicy change.
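
As a workaround (this does not fix the controller behaviour), the policy can be changed directly through the EKS API; a sketch, assuming an AWS CLI version recent enough to support the upgrade-policy parameter on update-cluster-config:

aws eks update-cluster-config --name production-new --region eu-west-2 \
  --upgrade-policy supportType=STANDARD

Note this bypasses ACK entirely, and as far as I understand ACK semantics, the ACK.Terminal condition still blocks further reconciliation until the spec is modified again.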

Labels: kind/bug, service/eks