
Conversation

@fiunchinho
Member

@fiunchinho fiunchinho commented Oct 27, 2025

What this PR does / why we need it

Towards https://github.com/giantswarm/giantswarm/issues/34549

This change will roll nodes, because it changes the instance profile used by the control plane instances and by the node pool ASGs (CAPA) / EC2 instances (Karpenter).

In this PR we change capa-iam-operator to skip creating the IAM Roles for the worker and control plane nodes when the cluster uses GiantSwarm Release v34 or above. We plan to ship the changes in this PR with Release v34.

Once clusters are on v35, we can remove the old IAM Roles, policies, and instance profiles that capa-iam-operator created.
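The release gate described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not capa-iam-operator's actual code: the function name `shouldSkipIAMRoleCreation`, the constant `minMajorRelease`, and the plain major-version comparison are all hypothetical stand-ins for however the operator reads and compares the cluster's release version.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minMajorRelease is the first GiantSwarm release major version for which
// IAM role creation is handled outside capa-iam-operator (hypothetical constant).
const minMajorRelease = 34

// shouldSkipIAMRoleCreation reports whether the operator should skip creating
// the IAM Roles, given a release version string such as "34.0.0" or "v34.0.0".
// The name and signature are illustrative, not the operator's real API.
func shouldSkipIAMRoleCreation(releaseVersion string) (bool, error) {
	v := strings.TrimPrefix(releaseVersion, "v")
	majorStr, _, _ := strings.Cut(v, ".")
	major, err := strconv.Atoi(majorStr)
	if err != nil {
		return false, fmt.Errorf("invalid release version %q: %w", releaseVersion, err)
	}
	return major >= minMajorRelease, nil
}

func main() {
	for _, v := range []string{"33.1.0", "34.0.0", "v35.2.1"} {
		skip, err := shouldSkipIAMRoleCreation(v)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s skip=%v\n", v, skip) // 33.1.0 skip=false, 34.0.0 skip=true, v35.2.1 skip=true
	}
}
```

With a gate like this, clusters on v33 and below keep the operator-managed roles, while v34+ clusters rely on the Crossplane-managed roles rendered by the cluster-aws chart shown in the diff below.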

Checklist

  • Updated CHANGELOG.md.

Trigger E2E tests

/run cluster-test-suites

@fiunchinho fiunchinho self-assigned this Oct 27, 2025
@tinkerers-ci

tinkerers-ci bot commented Oct 27, 2025

Note

As this is a draft PR, no triggers from the PR body will be handled.

If you'd like to trigger them while the PR is a draft, please add them as a PR comment.

@fiunchinho fiunchinho force-pushed the iam-workers-crossplane branch from ab253d4 to f9ace2f on October 28, 2025 09:28
@fiunchinho
Member Author

/run cluster-test-suites

@tinkerers-ci

tinkerers-ci bot commented Oct 28, 2025

cluster-test-suites

Run name: pr-cluster-aws-1555-cluster-test-suitesxmf46
Commit SHA: f9ace2f
Result: Failed ❌

❌ Failed test suites

CAPA Standard Suite ❌

Test Name Status Duration
BeforeSuite 25m5s
AfterSuite 1m51s

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites

Tip

To re-run only the failed test suites, provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. Multiple test suites are supported, with each path separated by a comma.

To run this test suite as a major upgrade, which will test upgrading from the latest release of the previous major version, you can add IS_MAJOR_UPGRADE=true, e.g. /run cluster-test-suites IS_MAJOR_UPGRADE=true.


Available Test Suites

By default, only the standard test suite runs to reduce costs. If your changes affect specialized environments, you can specify additional test suites:

AWS (CAPA) Test Suites

  • standard - Basic cluster creation and functionality
  • karpenter - Karpenter cluster creation testing
  • china - China-specific environment testing
  • private - Private cloud environment testing
  • cilium-eni-mode - Cilium ENI mode testing
  • upgrade - Cluster upgrade testing
  • upgrade-major - Major version upgrade testing

How to Specify Additional Test Suites

# Run specific test suites
/run cluster-test-suites TARGET_SUITES=./providers/capa/standard,./providers/capa/china

# Run all test suites for CAPA
/run cluster-test-suites TARGET_SUITES=./providers/capa/

# Run upgrade tests
/run cluster-test-suites TARGET_SUITES=./providers/capa/upgrade,./providers/capa/upgrade-major

Note: Full test suites run automatically on releases. You are responsible for testing all relevant flavors before merging.

@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/standard

@tinkerers-ci

tinkerers-ci bot commented Oct 28, 2025

cluster-test-suites

Run name: pr-cluster-aws-1555-cluster-test-suiteslp87d
Commit SHA: f9ace2f
Result: Failed ❌

❌ Failed test suites

CAPA Standard Suite ❌

Test Name Status Duration
BeforeSuite 25m5s
AfterSuite 1m51s

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites


@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/standard

@tinkerers-ci

tinkerers-ci bot commented Oct 28, 2025

cluster-test-suites

Run name: pr-cluster-aws-1555-cluster-test-suites8qpfh
Commit SHA: f9ace2f
Result: Failed ❌

❌ Failed test suites

CAPA Standard Suite ❌

Test Name Status Duration
BeforeSuite 25m4s
AfterSuite 1m51s

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites


@fiunchinho fiunchinho force-pushed the iam-workers-crossplane branch from f9ace2f to 09a5a37 on October 28, 2025 11:46
@fiunchinho fiunchinho force-pushed the iam-workers-crossplane branch from 09a5a37 to 07973eb on October 28, 2025 11:56
@fiunchinho fiunchinho force-pushed the iam-workers-crossplane branch 2 times, most recently from bfb1249 to a56addb on October 28, 2025 23:55
@fiunchinho fiunchinho force-pushed the iam-workers-crossplane branch from a56addb to 2e07c33 on October 29, 2025 08:09
@github-actions
Contributor

There were differences in the rendered Helm template; please check! ⚠️

Output
=== Differences when rendered with values file helm/cluster-aws/ci/test-auditd-values.yaml ===

(file level)
  - one document removed:
    ---
    # Source: cluster-aws/templates/list.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSMachineTemplate
    metadata:
      name: test-wc-minimal-control-plane-adc35e1c
      namespace: org-giantswarm
      labels:
        cluster.x-k8s.io/role: control-plane
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 27.0.0-alpha.1
        app.kubernetes.io/version: 6.3.0
    spec:
      template:
        metadata:
          labels:
            cluster.x-k8s.io/role: control-plane
            app: cluster-aws
            app.kubernetes.io/managed-by: Helm
            cluster.x-k8s.io/cluster-name: test-wc-minimal
            giantswarm.io/cluster: test-wc-minimal
            giantswarm.io/organization: test
            cluster.x-k8s.io/watch-filter: capi
            helm.sh/chart: cluster-aws-6.3.0
            application.giantswarm.io/team: phoenix
            release.giantswarm.io/version: 27.0.0-alpha.1
        spec:
          imageLookupBaseOS: N/A
          imageLookupFormat: flatcar-stable-N/A-kube-N/A-tooling-N/A-gs
          imageLookupOrg: 706635527432
          cloudInit: {}
          instanceType: r6i.xlarge
          nonRootVolumes:
          - type: gp3
            deviceName: /dev/xvdc
            encrypted: true
            size: 50
          - type: gp3
            deviceName: /dev/xvdd
            encrypted: true
            size: 40
          - type: gp3
            deviceName: /dev/xvde
            encrypted: true
            size: 15
          rootVolume:
            type: gp3
            size: 8
          iamInstanceProfile: control-plane-test-wc-minimal
          instanceMetadataOptions:
            httpPutResponseHopLimit: 3
            httpTokens: required
          sshKeyName:
          subnet:
            filters:
            - name: "tag:kubernetes.io/cluster/test-wc-minimal"
              values:
              - shared
              - owned
            - name: "tag:sigs.k8s.io/cluster-api-provider-aws/role"
              values:
              - private
  
    ---
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: Role
    metadata:
      name: test-wc-minimal-control-plane
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 27.0.0-alpha.1
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        assumeRolePolicy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
              }
            ]
          }
          
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-wc-minimal
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-wc-minimal
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: Role
    metadata:
      name: test-wc-minimal-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 27.0.0-alpha.1
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        assumeRolePolicy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
              }
            ]
          }
          
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-wc-minimal
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-wc-minimal
    # Source: cluster-aws/templates/list.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSMachineTemplate
    metadata:
      labels:
        cluster.x-k8s.io/role: control-plane
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 27.0.0-alpha.1
        app.kubernetes.io/version: 6.3.0
      name: test-wc-minimal-control-plane-4850ad36
      namespace: org-giantswarm
    spec:
      template:
        metadata:
          labels:
            cluster.x-k8s.io/role: control-plane
            app: cluster-aws
            app.kubernetes.io/managed-by: Helm
            cluster.x-k8s.io/cluster-name: test-wc-minimal
            giantswarm.io/cluster: test-wc-minimal
            giantswarm.io/organization: test
            cluster.x-k8s.io/watch-filter: capi
            helm.sh/chart: cluster-aws-6.3.0
            application.giantswarm.io/team: phoenix
            release.giantswarm.io/version: 27.0.0-alpha.1
        spec:
          imageLookupBaseOS: N/A
          imageLookupFormat: flatcar-stable-N/A-kube-N/A-tooling-N/A-gs
          imageLookupOrg: 706635527432
          cloudInit: {}
          instanceType: r6i.xlarge
          nonRootVolumes:
          - deviceName: /dev/xvdc
            encrypted: true
            size: 50
            type: gp3
          - deviceName: /dev/xvdd
            encrypted: true
            size: 40
            type: gp3
          - deviceName: /dev/xvde
            encrypted: true
            size: 15
            type: gp3
          rootVolume:
            size: 8
            type: gp3
          iamInstanceProfile: test-wc-minimal-control-plane
          instanceMetadataOptions:
            httpPutResponseHopLimit: 3
            httpTokens: required
          sshKeyName: 
          subnet:
            filters:
            - name: "tag:kubernetes.io/cluster/test-wc-minimal"
              values:
              - shared
              - owned
            - name: "tag:sigs.k8s.io/cluster-api-provider-aws/role"
              values:
              - private
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: InstanceProfile
    metadata:
      name: test-wc-minimal-control-plane
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 27.0.0-alpha.1
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        role: test-wc-minimal-control-plane
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-wc-minimal
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-wc-minimal
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: InstanceProfile
    metadata:
      name: test-wc-minimal-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 27.0.0-alpha.1
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        role: test-wc-minimal-worker
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-wc-minimal
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-wc-minimal
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicy
    metadata:
      name: test-wc-minimal-control-plane
      labels:
        cluster.x-k8s.io/cluster-name: test-wc-minimal
    spec:
      forProvider:
        roleRef:
          name: test-wc-minimal-control-plane
        policy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Action": "elasticloadbalancing:*",
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "autoscaling:DescribeAutoScalingGroups",
                  "autoscaling:DescribeAutoScalingInstances",
                  "autoscaling:DescribeTags",
                  "autoscaling:DescribeLaunchConfigurations",
                  "ec2:DescribeLaunchTemplateVersions"
                ],
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "ecr:GetAuthorizationToken",
                  "ecr:BatchCheckLayerAvailability",
                  "ecr:GetDownloadUrlForLayer",
                  "ecr:GetRepositoryPolicy",
                  "ecr:DescribeRepositories",
                  "ecr:ListImages",
                  "ecr:BatchGetImage"
                ],
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "ec2:AssignPrivateIpAddresses",
                  "ec2:AttachNetworkInterface",
                  "ec2:CreateNetworkInterface",
                  "ec2:DeleteNetworkInterface",
                  "ec2:DescribeInstances",
                  "ec2:DescribeInstanceTypes",
                  "ec2:DescribeTags",
                  "ec2:DescribeNetworkInterfaces",
                  "ec2:DetachNetworkInterface",
                  "ec2:ModifyNetworkInterfaceAttribute",
                  "ec2:UnassignPrivateIpAddresses"
                ],
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "autoscaling:DescribeAutoScalingGroups",
                  "autoscaling:DescribeLaunchConfigurations",
                  "autoscaling:DescribeTags",
                  "ec2:DescribeAvailabilityZones",
                  "ec2:DescribeInstances",
                  "ec2:DescribeImages",
                  "ec2:DescribeRegions",
                  "ec2:DescribeRouteTables",
                  "ec2:DescribeSecurityGroups",
                  "ec2:DescribeSubnets",
                  "ec2:DescribeVolumes",
                  "ec2:CreateSecurityGroup",
                  "ec2:CreateTags",
                  "ec2:CreateVolume",
                  "ec2:ModifyInstanceAttribute",
                  "ec2:ModifyVolume",
                  "ec2:AttachVolume",
                  "ec2:DescribeVolumesModifications",
                  "ec2:AuthorizeSecurityGroupIngress",
                  "ec2:CreateRoute",
                  "ec2:DeleteRoute",
                  "ec2:DeleteSecurityGroup",
                  "ec2:DeleteVolume",
                  "ec2:DetachVolume",
                  "ec2:RevokeSecurityGroupIngress",
                  "ec2:DescribeVpcs",
                  "elasticloadbalancing:AddTags",
                  "elasticloadbalancing:AttachLoadBalancerToSubnets",
                  "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
                  "elasticloadbalancing:CreateLoadBalancer",
                  "elasticloadbalancing:CreateLoadBalancerPolicy",
                  "elasticloadbalancing:CreateLoadBalancerListeners",
                  "elasticloadbalancing:ConfigureHealthCheck",
                  "elasticloadbalancing:DeleteLoadBalancer",
                  "elasticloadbalancing:DeleteLoadBalancerListeners",
                  "elasticloadbalancing:DescribeLoadBalancers",
                  "elasticloadbalancing:DescribeLoadBalancerAttributes",
                  "elasticloadbalancing:DetachLoadBalancerFromSubnets",
                  "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                  "elasticloadbalancing:ModifyLoadBalancerAttributes",
                  "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                  "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
                  "elasticloadbalancing:AddTags",
                  "elasticloadbalancing:CreateListener",
                  "elasticloadbalancing:CreateTargetGroup",
                  "elasticloadbalancing:DeleteListener",
                  "elasticloadbalancing:DeleteTargetGroup",
                  "elasticloadbalancing:DescribeListeners",
                  "elasticloadbalancing:DescribeLoadBalancerPolicies",
                  "elasticloadbalancing:DescribeTargetGroups",
                  "elasticloadbalancing:DescribeTargetHealth",
                  "elasticloadbalancing:ModifyListener",
                  "elasticloadbalancing:ModifyTargetGroup",
                  "elasticloadbalancing:RegisterTargets",
                  "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
                  "iam:CreateServiceLinkedRole",
                  "kms:DescribeKey"
                ],
                "Resource": [
                  "*"
                ],
                "Effect": "Allow"
              },
              {
                "Action": [
                  "secretsmanager:GetSecretValue",
                  "secretsmanager:DeleteSecret"
                ],
                "Resource": "arn:*:secretsmanager:*:*:secret:aws.cluster.x-k8s.io/*",
                "Effect": "Allow"
              }
            ]
          }
          
      providerConfigRef:
        name: test-wc-minimal
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicy
    metadata:
      name: test-wc-minimal-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 27.0.0-alpha.1
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        roleRef:
          name: test-wc-minimal-worker
        policy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Action": [
                  "ecr:BatchCheckLayerAvailability",
                  "ecr:BatchGetImage",
                  "ecr:DescribeRepositories",
                  "ecr:GetAuthorizationToken",
                  "ecr:GetDownloadUrlForLayer",
                  "ecr:GetRepositoryPolicy",
                  "ecr:ListImages"
                ],
                "Resource": "*",
                "Effect": "Allow"
              }
            ]
          }
          
      providerConfigRef:
        name: test-wc-minimal
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicyAttachment
    metadata:
      name: test-wc-minimal-control-plane
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 27.0.0-alpha.1
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        roleRef:
          name: test-wc-minimal-control-plane
        policyArnRef:
          name: test-wc-minimal-control-plane
      providerConfigRef:
        name: test-wc-minimal
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicyAttachment
    metadata:
      name: test-wc-minimal-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 27.0.0-alpha.1
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        roleRef:
          name: test-wc-minimal-worker
        policyArnRef:
          name: test-wc-minimal-worker
      providerConfigRef:
        name: test-wc-minimal
    
  

/spec/s3Bucket/controlPlaneIAMInstanceProfile  (infrastructure.cluster.x-k8s.io/v1beta2/AWSCluster/org-giantswarm/test-wc-minimal)
  ± value change
    - control-plane-test-wc-minimal
    + test-wc-minimal-control-plane

/spec/s3Bucket/nodesIAMInstanceProfiles/0  (infrastructure.cluster.x-k8s.io/v1beta2/AWSCluster/org-giantswarm/test-wc-minimal)
  ± value change
    - nodes-pool0-test-wc-minimal
    + test-wc-minimal-worker

/metadata/labels  (infrastructure.cluster.x-k8s.io/v1beta2/AWSMachinePool/org-giantswarm/test-wc-minimal-pool0)
  - one map entry removed:
    alpha.aws.giantswarm.io/reduced-instance-permissions-workers: "true"

/spec/awsLaunchTemplate/iamInstanceProfile  (infrastructure.cluster.x-k8s.io/v1beta2/AWSMachinePool/org-giantswarm/test-wc-minimal-pool0)
  ± value change
    - nodes-pool0-test-wc-minimal
    + test-wc-minimal-worker

/spec/machineTemplate/infrastructureRef/name  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-wc-minimal)
  ± value change
    - test-wc-minimal-control-plane-adc35e1c
    + test-wc-minimal-control-plane-4850ad36



=== Differences when rendered with values file helm/cluster-aws/ci/test-eni-mode-values.yaml ===

(file level)
  - one document removed:
    ---
    # Source: cluster-aws/templates/list.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSMachineTemplate
    metadata:
      name: test-wc-control-plane-3031689d
      namespace: org-giantswarm
      labels:
        cluster.x-k8s.io/role: control-plane
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      template:
        metadata:
          labels:
            cluster.x-k8s.io/role: control-plane
            app: cluster-aws
            app.kubernetes.io/managed-by: Helm
            cluster.x-k8s.io/cluster-name: test-wc
            giantswarm.io/cluster: test-wc
            giantswarm.io/organization: test
            cluster.x-k8s.io/watch-filter: capi
            helm.sh/chart: cluster-aws-6.3.0
            application.giantswarm.io/team: phoenix
            release.giantswarm.io/version: 29.1.0
        spec:
          imageLookupBaseOS: N/A
          imageLookupFormat: flatcar-stable-N/A-kube-N/A-tooling-N/A-gs
          imageLookupOrg: 706635527432
          cloudInit: {}
          instanceType: r6i.xlarge
          nonRootVolumes:
          - type: gp3
            deviceName: /dev/xvdc
            encrypted: true
            size: 50
          - type: gp3
            deviceName: /dev/xvdd
            encrypted: true
            size: 40
          - type: gp3
            deviceName: /dev/xvde
            encrypted: true
            size: 15
          rootVolume:
            type: gp3
            size: 8
          iamInstanceProfile: control-plane-test-wc
          instanceMetadataOptions:
            httpPutResponseHopLimit: 2
            httpTokens: required
          sshKeyName:
          subnet:
            filters:
            - name: "tag:kubernetes.io/cluster/test-wc"
              values:
              - shared
              - owned
            - name: "tag:sigs.k8s.io/cluster-api-provider-aws/role"
              values:
              - private
  
    ---
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: Role
    metadata:
      name: test-wc-control-plane
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        assumeRolePolicy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
              }
            ]
          }
          
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-wc
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-wc
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: Role
    metadata:
      name: test-wc-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        assumeRolePolicy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
              }
            ]
          }
          
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-wc
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-wc
    # Source: cluster-aws/templates/list.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSMachineTemplate
    metadata:
      labels:
        cluster.x-k8s.io/role: control-plane
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
      name: test-wc-control-plane-bdc42d68
      namespace: org-giantswarm
    spec:
      template:
        metadata:
          labels:
            cluster.x-k8s.io/role: control-plane
            app: cluster-aws
            app.kubernetes.io/managed-by: Helm
            cluster.x-k8s.io/cluster-name: test-wc
            giantswarm.io/cluster: test-wc
            giantswarm.io/organization: test
            cluster.x-k8s.io/watch-filter: capi
            helm.sh/chart: cluster-aws-6.3.0
            application.giantswarm.io/team: phoenix
            release.giantswarm.io/version: 29.1.0
        spec:
          imageLookupBaseOS: N/A
          imageLookupFormat: flatcar-stable-N/A-kube-N/A-tooling-N/A-gs
          imageLookupOrg: 706635527432
          cloudInit: {}
          instanceType: r6i.xlarge
          nonRootVolumes:
          - deviceName: /dev/xvdc
            encrypted: true
            size: 50
            type: gp3
          - deviceName: /dev/xvdd
            encrypted: true
            size: 40
            type: gp3
          - deviceName: /dev/xvde
            encrypted: true
            size: 15
            type: gp3
          rootVolume:
            size: 8
            type: gp3
          iamInstanceProfile: test-wc-control-plane
          instanceMetadataOptions:
            httpPutResponseHopLimit: 2
            httpTokens: required
          sshKeyName: 
          subnet:
            filters:
            - name: "tag:kubernetes.io/cluster/test-wc"
              values:
              - shared
              - owned
            - name: "tag:sigs.k8s.io/cluster-api-provider-aws/role"
              values:
              - private
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: InstanceProfile
    metadata:
      name: test-wc-control-plane
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        role: test-wc-control-plane
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-wc
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-wc
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: InstanceProfile
    metadata:
      name: test-wc-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        role: test-wc-worker
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-wc
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-wc
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicy
    metadata:
      name: test-wc-control-plane
      labels:
        cluster.x-k8s.io/cluster-name: test-wc
    spec:
      forProvider:
        roleRef:
          name: test-wc-control-plane
        policy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Action": "elasticloadbalancing:*",
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "autoscaling:DescribeAutoScalingGroups",
                  "autoscaling:DescribeAutoScalingInstances",
                  "autoscaling:DescribeTags",
                  "autoscaling:DescribeLaunchConfigurations",
                  "ec2:DescribeLaunchTemplateVersions"
                ],
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "ecr:GetAuthorizationToken",
                  "ecr:BatchCheckLayerAvailability",
                  "ecr:GetDownloadUrlForLayer",
                  "ecr:GetRepositoryPolicy",
                  "ecr:DescribeRepositories",
                  "ecr:ListImages",
                  "ecr:BatchGetImage"
                ],
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "ec2:AssignPrivateIpAddresses",
                  "ec2:AttachNetworkInterface",
                  "ec2:CreateNetworkInterface",
                  "ec2:DeleteNetworkInterface",
                  "ec2:DescribeInstances",
                  "ec2:DescribeInstanceTypes",
                  "ec2:DescribeTags",
                  "ec2:DescribeNetworkInterfaces",
                  "ec2:DetachNetworkInterface",
                  "ec2:ModifyNetworkInterfaceAttribute",
                  "ec2:UnassignPrivateIpAddresses"
                ],
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "autoscaling:DescribeAutoScalingGroups",
                  "autoscaling:DescribeLaunchConfigurations",
                  "autoscaling:DescribeTags",
                  "ec2:DescribeAvailabilityZones",
                  "ec2:DescribeInstances",
                  "ec2:DescribeImages",
                  "ec2:DescribeRegions",
                  "ec2:DescribeRouteTables",
                  "ec2:DescribeSecurityGroups",
                  "ec2:DescribeSubnets",
                  "ec2:DescribeVolumes",
                  "ec2:CreateSecurityGroup",
                  "ec2:CreateTags",
                  "ec2:CreateVolume",
                  "ec2:ModifyInstanceAttribute",
                  "ec2:ModifyVolume",
                  "ec2:AttachVolume",
                  "ec2:DescribeVolumesModifications",
                  "ec2:AuthorizeSecurityGroupIngress",
                  "ec2:CreateRoute",
                  "ec2:DeleteRoute",
                  "ec2:DeleteSecurityGroup",
                  "ec2:DeleteVolume",
                  "ec2:DetachVolume",
                  "ec2:RevokeSecurityGroupIngress",
                  "ec2:DescribeVpcs",
                  "elasticloadbalancing:AddTags",
                  "elasticloadbalancing:AttachLoadBalancerToSubnets",
                  "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
                  "elasticloadbalancing:CreateLoadBalancer",
                  "elasticloadbalancing:CreateLoadBalancerPolicy",
                  "elasticloadbalancing:CreateLoadBalancerListeners",
                  "elasticloadbalancing:ConfigureHealthCheck",
                  "elasticloadbalancing:DeleteLoadBalancer",
                  "elasticloadbalancing:DeleteLoadBalancerListeners",
                  "elasticloadbalancing:DescribeLoadBalancers",
                  "elasticloadbalancing:DescribeLoadBalancerAttributes",
                  "elasticloadbalancing:DetachLoadBalancerFromSubnets",
                  "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                  "elasticloadbalancing:ModifyLoadBalancerAttributes",
                  "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                  "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
                  "elasticloadbalancing:AddTags",
                  "elasticloadbalancing:CreateListener",
                  "elasticloadbalancing:CreateTargetGroup",
                  "elasticloadbalancing:DeleteListener",
                  "elasticloadbalancing:DeleteTargetGroup",
                  "elasticloadbalancing:DescribeListeners",
                  "elasticloadbalancing:DescribeLoadBalancerPolicies",
                  "elasticloadbalancing:DescribeTargetGroups",
                  "elasticloadbalancing:DescribeTargetHealth",
                  "elasticloadbalancing:ModifyListener",
                  "elasticloadbalancing:ModifyTargetGroup",
                  "elasticloadbalancing:RegisterTargets",
                  "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
                  "iam:CreateServiceLinkedRole",
                  "kms:DescribeKey"
                ],
                "Resource": [
                  "*"
                ],
                "Effect": "Allow"
              },
              {
                "Action": [
                  "secretsmanager:GetSecretValue",
                  "secretsmanager:DeleteSecret"
                ],
                "Resource": "arn:*:secretsmanager:*:*:secret:aws.cluster.x-k8s.io/*",
                "Effect": "Allow"
              }
            ]
          }
          
      providerConfigRef:
        name: test-wc
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicy
    metadata:
      name: test-wc-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        roleRef:
          name: test-wc-worker
        policy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Action": [
                  "ec2:AssignPrivateIpAddresses",
                  "ec2:AttachNetworkInterface",
                  "ec2:CreateNetworkInterface",
                  "ec2:CreateTags",
                  "ec2:DeleteNetworkInterface",
                  "ec2:DescribeInstances",
                  "ec2:DescribeInstanceTypes",
                  "ec2:DescribeNetworkInterfaces",
                  "ec2:DescribeRouteTables",
                  "ec2:DescribeSecurityGroups",
                  "ec2:DescribeSubnets",
                  "ec2:DescribeTags",
                  "ec2:DescribeVpcs",
                  "ec2:ModifyNetworkInterfaceAttribute",
                  "ec2:UnassignPrivateIpAddresses"
                ],
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "ecr:BatchCheckLayerAvailability",
                  "ecr:BatchGetImage",
                  "ecr:DescribeRepositories",
                  "ecr:GetAuthorizationToken",
                  "ecr:GetDownloadUrlForLayer",
                  "ecr:GetRepositoryPolicy",
                  "ecr:ListImages"
                ],
                "Resource": "*",
                "Effect": "Allow"
              }
            ]
          }
          
      providerConfigRef:
        name: test-wc
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicyAttachment
    metadata:
      name: test-wc-control-plane
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        roleRef:
          name: test-wc-control-plane
        policyArnRef:
          name: test-wc-control-plane
      providerConfigRef:
        name: test-wc
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicyAttachment
    metadata:
      name: test-wc-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc
        giantswarm.io/cluster: test-wc
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        roleRef:
          name: test-wc-worker
        policyArnRef:
          name: test-wc-worker
      providerConfigRef:
        name: test-wc
    
  

/spec/s3Bucket/controlPlaneIAMInstanceProfile  (infrastructure.cluster.x-k8s.io/v1beta2/AWSCluster/org-giantswarm/test-wc)
  ± value change
    - control-plane-test-wc
    + test-wc-control-plane

/spec/s3Bucket/nodesIAMInstanceProfiles/0  (infrastructure.cluster.x-k8s.io/v1beta2/AWSCluster/org-giantswarm/test-wc)
  ± value change
    - nodes-pool0-test-wc
    + test-wc-worker

/metadata/labels  (infrastructure.cluster.x-k8s.io/v1beta2/AWSMachinePool/org-giantswarm/test-wc-pool0)
  - one map entry removed:
    alpha.aws.giantswarm.io/reduced-instance-permissions-workers: "true"

/spec/awsLaunchTemplate/iamInstanceProfile  (infrastructure.cluster.x-k8s.io/v1beta2/AWSMachinePool/org-giantswarm/test-wc-pool0)
  ± value change
    - nodes-pool0-test-wc
    + test-wc-worker

/spec/machineTemplate/infrastructureRef/name  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-wc)
  ± value change
    - test-wc-control-plane-3031689d
    + test-wc-control-plane-bdc42d68



=== Differences when rendered with values file helm/cluster-aws/ci/test-hop-count-tuning-values.yaml ===

(file level)
  - one document removed:
    ---
    # Source: cluster-aws/templates/list.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSMachineTemplate
    metadata:
      name: test-hop-count-tuning-control-plane-f90de22d
      namespace: org-giantswarm
      labels:
        cluster.x-k8s.io/role: control-plane
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-hop-count-tuning
        giantswarm.io/cluster: test-hop-count-tuning
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 31.0.0
        app.kubernetes.io/version: 6.3.0
    spec:
      template:
        metadata:
          labels:
            cluster.x-k8s.io/role: control-plane
            app: cluster-aws
            app.kubernetes.io/managed-by: Helm
            cluster.x-k8s.io/cluster-name: test-hop-count-tuning
            giantswarm.io/cluster: test-hop-count-tuning
            giantswarm.io/organization: test
            cluster.x-k8s.io/watch-filter: capi
            helm.sh/chart: cluster-aws-6.3.0
            application.giantswarm.io/team: phoenix
            release.giantswarm.io/version: 31.0.0
        spec:
          imageLookupBaseOS: N/A
          imageLookupFormat: flatcar-stable-N/A-kube-N/A-tooling-N/A-gs
          imageLookupOrg: 706635527432
          cloudInit: {}
          instanceType: r6i.xlarge
          nonRootVolumes:
          - type: gp3
            deviceName: /dev/xvdc
            encrypted: true
            size: 50
          - type: gp3
            deviceName: /dev/xvdd
            encrypted: true
            size: 40
          - type: gp3
            deviceName: /dev/xvde
            encrypted: true
            size: 15
          rootVolume:
            type: gp3
            size: 8
          iamInstanceProfile: control-plane-test-hop-count-tuning
          instanceMetadataOptions:
            httpPutResponseHopLimit: 2
            httpTokens: required
          sshKeyName:
          subnet:
            filters:
            - name: "tag:kubernetes.io/cluster/test-hop-count-tuning"
              values:
              - shared
              - owned
            - name: "tag:sigs.k8s.io/cluster-api-provider-aws/role"
              values:
              - private
  
    ---
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: Role
    metadata:
      name: test-hop-count-tuning-control-plane
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-hop-count-tuning
        giantswarm.io/cluster: test-hop-count-tuning
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 31.0.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        assumeRolePolicy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
              }
            ]
          }
          
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-hop-count-tuning
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-hop-count-tuning
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: Role
    metadata:
      name: test-hop-count-tuning-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-hop-count-tuning
        giantswarm.io/cluster: test-hop-count-tuning
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 31.0.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        assumeRolePolicy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
              }
            ]
          }
          
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-hop-count-tuning
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-hop-count-tuning
    # Source: cluster-aws/templates/list.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSMachineTemplate
    metadata:
      labels:
        cluster.x-k8s.io/role: control-plane
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-hop-count-tuning
        giantswarm.io/cluster: test-hop-count-tuning
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 31.0.0
        app.kubernetes.io/version: 6.3.0
      name: test-hop-count-tuning-control-plane-9e983f8c
      namespace: org-giantswarm
    spec:
      template:
        metadata:
          labels:
            cluster.x-k8s.io/role: control-plane
            app: cluster-aws
            app.kubernetes.io/managed-by: Helm
            cluster.x-k8s.io/cluster-name: test-hop-count-tuning
            giantswarm.io/cluster: test-hop-count-tuning
            giantswarm.io/organization: test
            cluster.x-k8s.io/watch-filter: capi
            helm.sh/chart: cluster-aws-6.3.0
            application.giantswarm.io/team: phoenix
            release.giantswarm.io/version: 31.0.0
        spec:
          imageLookupBaseOS: N/A
          imageLookupFormat: flatcar-stable-N/A-kube-N/A-tooling-N/A-gs
          imageLookupOrg: 706635527432
          cloudInit: {}
          instanceType: r6i.xlarge
          nonRootVolumes:
          - deviceName: /dev/xvdc
            encrypted: true
            size: 50
            type: gp3
          - deviceName: /dev/xvdd
            encrypted: true
            size: 40
            type: gp3
          - deviceName: /dev/xvde
            encrypted: true
            size: 15
            type: gp3
          rootVolume:
            size: 8
            type: gp3
          iamInstanceProfile: test-hop-count-tuning-control-plane
          instanceMetadataOptions:
            httpPutResponseHopLimit: 2
            httpTokens: required
          sshKeyName: 
          subnet:
            filters:
            - name: "tag:kubernetes.io/cluster/test-hop-count-tuning"
              values:
              - shared
              - owned
            - name: "tag:sigs.k8s.io/cluster-api-provider-aws/role"
              values:
              - private
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: InstanceProfile
    metadata:
      name: test-hop-count-tuning-control-plane
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-hop-count-tuning
        giantswarm.io/cluster: test-hop-count-tuning
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 31.0.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        role: test-hop-count-tuning-control-plane
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-hop-count-tuning
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-hop-count-tuning
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: InstanceProfile
    metadata:
      name: test-hop-count-tuning-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-hop-count-tuning
        giantswarm.io/cluster: test-hop-count-tuning
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 31.0.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        role: test-hop-count-tuning-worker
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-hop-count-tuning
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-hop-count-tuning
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicy
    metadata:
      name: test-hop-count-tuning-control-plane
      labels:
        cluster.x-k8s.io/cluster-name: test-hop-count-tuning
    spec:
      forProvider:
        roleRef:
          name: test-hop-count-tuning-control-plane
        policy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Action": "elasticloadbalancing:*",
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "autoscaling:DescribeAutoScalingGroups",
                  "autoscaling:DescribeAutoScalingInstances",
                  "autoscaling:DescribeTags",
                  "autoscaling:DescribeLaunchConfigurations",
                  "ec2:DescribeLaunchTemplateVersions"
                ],
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "ecr:GetAuthorizationToken",
                  "ecr:BatchCheckLayerAvailability",
                  "ecr:GetDownloadUrlForLayer",
                  "ecr:GetRepositoryPolicy",
                  "ecr:DescribeRepositories",
                  "ecr:ListImages",
                  "ecr:BatchGetImage"
                ],
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "ec2:AssignPrivateIpAddresses",
                  "ec2:AttachNetworkInterface",
                  "ec2:CreateNetworkInterface",
                  "ec2:DeleteNetworkInterface",
                  "ec2:DescribeInstances",
                  "ec2:DescribeInstanceTypes",
                  "ec2:DescribeTags",
                  "ec2:DescribeNetworkInterfaces",
                  "ec2:DetachNetworkInterface",
                  "ec2:ModifyNetworkInterfaceAttribute",
                  "ec2:UnassignPrivateIpAddresses"
                ],
                "Resource": "*",
                "Effect": "Allow"
              },
              {
                "Action": [
                  "autoscaling:DescribeAutoScalingGroups",
                  "autoscaling:DescribeLaunchConfigurations",
                  "autoscaling:DescribeTags",
                  "ec2:DescribeAvailabilityZones",
                  "ec2:DescribeInstances",
                  "ec2:DescribeImages",
                  "ec2:DescribeRegions",
                  "ec2:DescribeRouteTables",
                  "ec2:DescribeSecurityGroups",
                  "ec2:DescribeSubnets",
                  "ec2:DescribeVolumes",
                  "ec2:CreateSecurityGroup",
                  "ec2:CreateTags",
                  "ec2:CreateVolume",
                  "ec2:ModifyInstanceAttribute",
                  "ec2:ModifyVolume",
                  "ec2:AttachVolume",
                  "ec2:DescribeVolumesModifications",
                  "ec2:AuthorizeSecurityGroupIngress",
                  "ec2:CreateRoute",
                  "ec2:DeleteRoute",
                  "ec2:DeleteSecurityGroup",
                  "ec2:DeleteVolume",
                  "ec2:DetachVolume",
                  "ec2:RevokeSecurityGroupIngress",
                  "ec2:DescribeVpcs",
                  "elasticloadbalancing:AddTags",
                  "elasticloadbalancing:AttachLoadBalancerToSubnets",
                  "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
                  "elasticloadbalancing:CreateLoadBalancer",
                  "elasticloadbalancing:CreateLoadBalancerPolicy",
                  "elasticloadbalancing:CreateLoadBalancerListeners",
                  "elasticloadbalancing:ConfigureHealthCheck",
                  "elasticloadbalancing:DeleteLoadBalancer",
                  "elasticloadbalancing:DeleteLoadBalancerListeners",
                  "elasticloadbalancing:DescribeLoadBalancers",
                  "elasticloadbalancing:DescribeLoadBalancerAttributes",
                  "elasticloadbalancing:DetachLoadBalancerFromSubnets",
                  "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
                  "elasticloadbalancing:ModifyLoadBalancerAttributes",
                  "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                  "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
                  "elasticloadbalancing:AddTags",
                  "elasticloadbalancing:CreateListener",
                  "elasticloadbalancing:CreateTargetGroup",
                  "elasticloadbalancing:DeleteListener",
                  "elasticloadbalancing:DeleteTargetGroup",
                  "elasticloadbalancing:DescribeListeners",
                  "elasticloadbalancing:DescribeLoadBalancerPolicies",
                  "elasticloadbalancing:DescribeTargetGroups",
                  "elasticloadbalancing:DescribeTargetHealth",
                  "elasticloadbalancing:ModifyListener",
                  "elasticloadbalancing:ModifyTargetGroup",
                  "elasticloadbalancing:RegisterTargets",
                  "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
                  "iam:CreateServiceLinkedRole",
                  "kms:DescribeKey"
                ],
                "Resource": [
                  "*"
                ],
                "Effect": "Allow"
              },
              {
                "Action": [
                  "secretsmanager:GetSecretValue",
                  "secretsmanager:DeleteSecret"
                ],
                "Resource": "arn:*:secretsmanager:*:*:secret:aws.cluster.x-k8s.io/*",
                "Effect": "Allow"
              }
            ]
          }
          
      providerConfigRef:
        name: test-hop-count-tuning
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicy
    metadata:
      name: test-hop-count-tuning-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-hop-count-tuning
        giantswarm.io/cluster: test-hop-count-tuning
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 31.0.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        roleRef:
          name: test-hop-count-tuning-worker
        policy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Action": [
                  "ecr:BatchCheckLayerAvailability",
                  "ecr:BatchGetImage",
                  "ecr:DescribeRepositories",
                  "ecr:GetAuthorizationToken",
                  "ecr:GetDownloadUrlForLayer",
                  "ecr:GetRepositoryPolicy",
                  "ecr:ListImages"
                ],
                "Resource": "*",
                "Effect": "Allow"
              }
            ]
          }
          
      providerConfigRef:
        name: test-hop-count-tuning
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicyAttachment
    metadata:
      name: test-hop-count-tuning-control-plane
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-hop-count-tuning
        giantswarm.io/cluster: test-hop-count-tuning
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 31.0.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        roleRef:
          name: test-hop-count-tuning-control-plane
        policyArnRef:
          name: test-hop-count-tuning-control-plane
      providerConfigRef:
        name: test-hop-count-tuning
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: RolePolicyAttachment
    metadata:
      name: test-hop-count-tuning-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-hop-count-tuning
        giantswarm.io/cluster: test-hop-count-tuning
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 31.0.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        roleRef:
          name: test-hop-count-tuning-worker
        policyArnRef:
          name: test-hop-count-tuning-worker
      providerConfigRef:
        name: test-hop-count-tuning
    
  

/spec/s3Bucket/controlPlaneIAMInstanceProfile  (infrastructure.cluster.x-k8s.io/v1beta2/AWSCluster/org-giantswarm/test-hop-count-tuning)
  ± value change
    - control-plane-test-hop-count-tuning
    + test-hop-count-tuning-control-plane

/spec/s3Bucket/nodesIAMInstanceProfiles/0  (infrastructure.cluster.x-k8s.io/v1beta2/AWSCluster/org-giantswarm/test-hop-count-tuning)
  ± value change
    - nodes-pool0-test-hop-count-tuning
    + test-hop-count-tuning-worker

/metadata/labels  (infrastructure.cluster.x-k8s.io/v1beta2/AWSMachinePool/org-giantswarm/test-hop-count-tuning-pool0)
  - one map entry removed:
    alpha.aws.giantswarm.io/reduced-instance-permissions-workers: "true"

/spec/awsLaunchTemplate/iamInstanceProfile  (infrastructure.cluster.x-k8s.io/v1beta2/AWSMachinePool/org-giantswarm/test-hop-count-tuning-pool0)
  ± value change
    - nodes-pool0-test-hop-count-tuning
    + test-hop-count-tuning-worker

/spec/machineTemplate/infrastructureRef/name  (controlplane.cluster.x-k8s.io/v1beta1/KubeadmControlPlane/org-giantswarm/test-hop-count-tuning)
  ± value change
    - test-hop-count-tuning-control-plane-f90de22d
    + test-hop-count-tuning-control-plane-9e983f8c
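
The worker-node ECR policy rendered in the diff above can be sanity-checked offline. The following is an illustrative sketch (not part of the chart or the operator) that parses the policy document and confirms every granted action is a read-style ECR action:

```python
import json

# Worker node ECR policy as rendered by the chart (copied from the diff above).
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:DescribeRepositories",
        "ecr:GetAuthorizationToken",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:ListImages"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
""")

# Heuristic: read-only ECR actions start with Get/List/Describe/Batch*.
READ_PREFIXES = ("ecr:Get", "ecr:List", "ecr:Describe", "ecr:Batch")

for stmt in policy["Statement"]:
    assert stmt["Effect"] == "Allow"
    for action in stmt["Action"]:
        assert action.startswith(READ_PREFIXES), f"unexpected write action: {action}"

print("policy grants only read-style ECR actions")
```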



=== Differences when rendered with values file helm/cluster-aws/ci/test-irsa-crossplane-values.yaml ===

(file level)
  - one document removed:
    ---
    # Source: cluster-aws/templates/list.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSMachineTemplate
    metadata:
      name: test-wc-minimal-control-plane-adc35e1c
      namespace: org-giantswarm
      labels:
        cluster.x-k8s.io/role: control-plane
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      template:
        metadata:
          labels:
            cluster.x-k8s.io/role: control-plane
            app: cluster-aws
            app.kubernetes.io/managed-by: Helm
            cluster.x-k8s.io/cluster-name: test-wc-minimal
            giantswarm.io/cluster: test-wc-minimal
            giantswarm.io/organization: test
            cluster.x-k8s.io/watch-filter: capi
            helm.sh/chart: cluster-aws-6.3.0
            application.giantswarm.io/team: phoenix
            release.giantswarm.io/version: 29.1.0
        spec:
          imageLookupBaseOS: N/A
          imageLookupFormat: flatcar-stable-N/A-kube-N/A-tooling-N/A-gs
          imageLookupOrg: 706635527432
          cloudInit: {}
          instanceType: r6i.xlarge
          nonRootVolumes:
          - type: gp3
            deviceName: /dev/xvdc
            encrypted: true
            size: 50
          - type: gp3
            deviceName: /dev/xvdd
            encrypted: true
            size: 40
          - type: gp3
            deviceName: /dev/xvde
            encrypted: true
            size: 15
          rootVolume:
            type: gp3
            size: 8
          iamInstanceProfile: control-plane-test-wc-minimal
          instanceMetadataOptions:
            httpPutResponseHopLimit: 3
            httpTokens: required
          sshKeyName:
          subnet:
            filters:
            - name: "tag:kubernetes.io/cluster/test-wc-minimal"
              values:
              - shared
              - owned
            - name: "tag:sigs.k8s.io/cluster-api-provider-aws/role"
              values:
              - private
  
    ---
    # Source: cluster-aws/templates/crossplane-iam-role-control-plane.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: Role
    metadata:
      name: test-wc-minimal-control-plane
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        assumeRolePolicy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
              }
            ]
          }
          
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-wc-minimal
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-wc-minimal
    # Source: cluster-aws/templates/crossplane-iam-role-worker.yaml
    apiVersion: iam.aws.upbound.io/v1beta1
    kind: Role
    metadata:
      name: test-wc-minimal-worker
      labels:
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
    spec:
      forProvider:
        assumeRolePolicy: |
          {
            "Version": "2012-10-17",
            "Statement": [
              {
                "Effect": "Allow",
                "Principal": {
                  "Service": "ec2.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
              }
            ]
          }
          
        tags:
          managed-by: cluster-aws
          giantswarm.io/cluster: test-wc-minimal
          giantswarm.io/installation: test
      providerConfigRef:
        name: test-wc-minimal
    # Source: cluster-aws/templates/list.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSMachineTemplate
    metadata:
      labels:
        cluster.x-k8s.io/role: control-plane
        app: cluster-aws
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: test-wc-minimal
        giantswarm.io/cluster: test-wc-minimal
        giantswarm.io/organization: test
        cluster.x-k8s.io/watch-filter: capi
        helm.sh/chart: cluster-aws-6.3.0
        application.giantswarm.io/team: phoenix
        release.giantswarm.io/version: 29.1.0
        app.kubernetes.io/version: 6.3.0
      name: test-wc-minimal-control-plane-4850ad36
      namespace: org-giantswarm
    spec:
      template:
        metadata:
          labels:
            cluster.x-k8s.io/role: control-plane
            app: cluster-aws
            app.kubernetes.io/managed-by: Helm
            cluster.x-k8s.io/cluster-name: test-wc-minimal
            giantswarm.io/cluster: test-wc-minimal
            giantswarm.io/organization: test
            cluster.x-k8s.io/watch-filter: capi
            helm.sh/chart: cluster-aws-6.3.0
            application.giantswarm.io/team: phoenix
            release.giantswarm.io/version: 29.1.0
        spec:
          imageLookupBaseOS: N/A
          imageLookupFormat: flatcar-stable-N/A-kube-N/A-tooling-N/A-gs
          imageLookupOrg: 706635527432
          cloudInit: {}
          instanceType: r6i.xlarge
          nonRootVolumes:
          - deviceName: /dev/xvdc
            encrypted: true
            size: 50
            type: gp3
          - deviceName: /dev/xvdd
            encrypted: true
            size: 40
            type: gp3
          - deviceName: /dev/xvde
            encrypted: true
            size: 15
            type: gp3
          rootVolume:
            size: 8
            type: gp3
          iamInstanceProfile: test-wc-minimal-control-plane
          instanceMetadataOptions:
            httpPutResponseHopLimit: 3
            httpTokens: required
          sshKeyName: 
          subnet:
            filters:
            - name: "tag:kubernetes.io/cluster/test-wc-minimal"
              values:
              - shared
              - owned
            - name: "tag:sigs.k8s.io/cluster-api-provider-aws...*[Comment body truncated]*

@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/standard,./providers/capa/china,./providers/capa/karpenter,./providers/capa/private,./providers/capa/cilium-eni-mode,./providers/capa/upgrade

@tinkerers-ci

tinkerers-ci bot commented Oct 29, 2025

cluster-test-suites

Run name pr-cluster-aws-1555-cluster-test-suitessp4n5
Commit SHA 0a5d219
Result Failed ❌

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites

Tip

To only re-run the failed test suites you can provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. This supports multiple test suites with each path separated by a comma.

To run this test suite as a major upgrade, which will test upgrading from the latest release of the previous major version, you can add IS_MAJOR_UPGRADE=true, e.g. /run cluster-test-suites IS_MAJOR_UPGRADE=true.


Available Test Suites

By default, only the standard test suite runs to reduce costs. If your changes affect specialized environments, you can specify additional test suites:

AWS (CAPA) Test Suites

  • standard - Basic cluster creation and functionality
  • karpenter - Karpenter cluster creation testing
  • china - China-specific environment testing
  • private - Private cloud environment testing
  • cilium-eni-mode - Cilium ENI mode testing
  • upgrade - Cluster upgrade testing
  • upgrade-major - Major version upgrade testing

How to Specify Additional Test Suites

# Run specific test suites
/run cluster-test-suites TARGET_SUITES=./providers/capa/standard,./providers/capa/china

# Run all test suites for CAPA
/run cluster-test-suites TARGET_SUITES=./providers/capa/

# Run upgrade tests
/run cluster-test-suites TARGET_SUITES=./providers/capa/upgrade,./providers/capa/upgrade-major

Note: Full test suites run automatically on releases. You are responsible for testing all relevant flavors before merging.

@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/

@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/china

@tinkerers-ci

tinkerers-ci bot commented Oct 29, 2025

cluster-test-suites

Run name pr-cluster-aws-1555-cluster-test-suites7qwhd
Commit SHA 0a5d219
Result Succeeded ✅

✅ Passed test suites

CAPA China Suite ✅

Test Name Status Duration
BeforeSuite 15m54s
It all HelmReleases are deployed without issues 10m28s
It all default apps are deployed without issues 14s
It all observability-bundle apps are deployed without issues 3s
It all security-bundle apps are deployed without issues 2s
It should be able to connect to the management cluster 0s
It should be able to connect to the workload cluster 0s
It has all the control-plane nodes running 30s
It has all the worker nodes running 1m6s
It has all its Deployments Ready (means all replicas are running) 2m27s
It has all its StatefulSets Ready (means all replicas are running) 11s
It has all its DaemonSets Ready (means all daemon pods are running) 11s
It has all its Jobs completed successfully 11s
It has all of its Pods in the Running state 22s
It doesn't have restarting pods 55s
It has Cluster Ready condition with Status='True' 1s
It has all machine pools ready and running 30s
It cert-manager default ClusterIssuers are present and ready 1s
It sets up the api DNS records 1s
It sets up the bastion DNS records ⏭️ 0s
It should have cert-manager and external-dns deployed 0s
It should deploy ingress-nginx 57s
It cluster wildcard ingress DNS must be resolvable 55s
It should deploy the hello-world app 4s
It ingress resource has load balancer in status 0s
It should have a ready Certificate generated 31s
It hello world app responds successfully 6s
It uninstall apps 2s
It creates test pod 1m5s
It ensure key metrics are available on mimir 50s
It clean up test pod 33s
It scales node by creating anti-affinity pods 11m28s
It has a at least one storage class available 11s
It creates the new namespace for the test 0s
It creates the PVC 0s
It creates the pod using the PVC 0s
It binds the PVC 10s
It runs successfully 3m12s
It deletes all resources correct 21s
It cluster is registered 0s
AfterSuite 10m58s

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites


@tinkerers-ci

tinkerers-ci bot commented Oct 29, 2025

cluster-test-suites

Run name pr-cluster-aws-1555-cluster-test-suites24kqv
Commit SHA 0a5d219
Result Failed ❌

✅ Passed test suites

CAPA Cilium ENI Mode Suite ✅

Test Name Status Duration
BeforeSuite 14m12s
It all HelmReleases are deployed without issues 1m0s
It all default apps are deployed without issues 2m16s
It all observability-bundle apps are deployed without issues 1s
It all security-bundle apps are deployed without issues 1s
It should be able to connect to the management cluster 0s
It should be able to connect to the workload cluster 0s
It has all the control-plane nodes running 30s
It has all the worker nodes running 1m5s
It has all its Deployments Ready (means all replicas are running) 2m35s
It has all its StatefulSets Ready (means all replicas are running) 11s
It has all its DaemonSets Ready (means all daemon pods are running) 11s
It has all its Jobs completed successfully 11s
It has all of its Pods in the Running state 11s
It doesn't have restarting pods 55s
It has Cluster Ready condition with Status='True' 0s
It has all machine pools ready and running 30s
It cert-manager default ClusterIssuers are present and ready 0s
It sets up the api DNS records 0s
It sets up the bastion DNS records ⏭️ 0s
It should have cert-manager and external-dns deployed 0s
It should deploy ingress-nginx 16s
It cluster wildcard ingress DNS must be resolvable 41s
It should deploy the hello-world app 11s
It ingress resource has load balancer in status 5s
It should have a ready Certificate generated 20s
It hello world app responds successfully 0s
It uninstall apps 1s
It creates test pod 5s
It ensure key metrics are available on mimir 9s
It clean up test pod 35s
It scales node by creating anti-affinity pods 1m8s
It has a at least one storage class available 11s
It creates the new namespace for the test 0s
It creates the PVC 0s
It creates the pod using the PVC 0s
It binds the PVC 10s
It runs successfully 11s
It deletes all resources correct 20s
It cluster is registered 0s
It should be able to pull an image from a private ECR registry 10s
It assigns IP addresses from secondary VPC CIDR to pods 0s
AfterSuite 10m45s

CAPA Private Suite ✅

Test Name Status Duration
BeforeSuite 9m7s
It all HelmReleases are deployed without issues 1m0s
It all default apps are deployed without issues 1m17s
It all observability-bundle apps are deployed without issues 1s
It all security-bundle apps are deployed without issues 0s
It should be able to connect to the management cluster 0s
It should be able to connect to the workload cluster 0s
It has all the control-plane nodes running 30s
It has all the worker nodes running 1m5s
It has all its Deployments Ready (means all replicas are running) 33s
It has all its StatefulSets Ready (means all replicas are running) 33s
It has all its DaemonSets Ready (means all daemon pods are running) 11s
It has all its Jobs completed successfully 11s
It has all of its Pods in the Running state 11s
It doesn't have restarting pods 55s
It has Cluster Ready condition with Status='True' 0s
It has all machine pools ready and running 30s
It cert-manager default ClusterIssuers are present and ready 0s
It sets up the api DNS records 0s
It sets up the bastion DNS records ⏭️ 0s
It should have cert-manager and external-dns deployed 0s
It should deploy ingress-nginx 15s
It cluster wildcard ingress DNS must be resolvable 2m13s
It should deploy the hello-world app 6s
It ingress resource has load balancer in status 40s
It should have a ready Certificate generated 50s
It hello world app responds successfully 0s
It uninstall apps 1s
It creates test pod 5s
It ensure key metrics are available on mimir 4s
It clean up test pod 35s
It scales node by creating anti-affinity pods 1m17s
It has a at least one storage class available 11s
It creates the new namespace for the test 0s
It creates the PVC 0s
It creates the pod using the PVC 0s
It binds the PVC 10s
It runs successfully 22s
It deletes all resources correct 20s
It cluster is registered 0s
It should be able to pull an image from a private ECR registry 10s
AfterSuite 15m24s

CAPA Standard Suite ✅

Test Name Status Duration
BeforeSuite 13m22s
It all HelmReleases are deployed without issues 1m0s
It all default apps are deployed without issues 9m0s
It all observability-bundle apps are deployed without issues 1s
It all security-bundle apps are deployed without issues 1s
It should be able to connect to the management cluster 0s
It should be able to connect to the workload cluster 0s
It has all the control-plane nodes running 30s
It has all the worker nodes running 1m5s
It has all its Deployments Ready (means all replicas are running) 11s
It has all its StatefulSets Ready (means all replicas are running) 11s
It has all its DaemonSets Ready (means all daemon pods are running) 11s
It has all its Jobs completed successfully 11s
It has all of its Pods in the Running state 11s
It doesn't have restarting pods 55s
It has Cluster Ready condition with Status='True' 0s
It has all machine pools ready and running 30s
It cert-manager default ClusterIssuers are present and ready 0s
It sets up the api DNS records 0s
It sets up the bastion DNS records ⏭️ 0s
It should have cert-manager and external-dns deployed 0s
It should deploy ingress-nginx 16s
It cluster wildcard ingress DNS must be resolvable 1m2s
It should deploy the hello-world app 11s
It ingress resource has load balancer in status 45s
It should have a ready Certificate generated 0s
It hello world app responds successfully 0s
It uninstall apps 1s
It creates test pod 5s
It ensure key metrics are available on mimir 8s
It clean up test pod 35s
It scales node by creating anti-affinity pods 1m39s
It has a at least one storage class available 11s
It creates the new namespace for the test 0s
It creates the PVC 0s
It creates the pod using the PVC 0s
It binds the PVC 10s
It runs successfully 22s
It deletes all resources correct 20s
It cluster is registered 0s
It should be able to pull an image from a private ECR registry 10s
AfterSuite 7m23s

❌ Failed test suites

CAPA China Suite ❌

Test Name Status Duration
BeforeSuite 15m47s
It all HelmReleases are deployed without issues 11m9s
It all default apps are deployed without issues 1m16s
It all observability-bundle apps are deployed without issues 2s
It all security-bundle apps are deployed without issues 2s
It should be able to connect to the management cluster 0s
It should be able to connect to the workload cluster 0s
It has all the control-plane nodes running 30s
It has all the worker nodes running 1m7s
It has all its Deployments Ready (means all replicas are running) 1m19s
It has all its StatefulSets Ready (means all replicas are running) 11s
It has all its DaemonSets Ready (means all daemon pods are running) 11s
It has all its Jobs completed successfully 11s
It has all of its Pods in the Running state 11s
It doesn't have restarting pods 55s
It has Cluster Ready condition with Status='True' 1s
It has all machine pools ready and running 30s
It cert-manager default ClusterIssuers are present and ready 1s
It sets up the api DNS records 1s
It sets up the bastion DNS records ⏭️ 0s
It should have cert-manager and external-dns deployed 0s
It should deploy ingress-nginx 51s
It cluster wildcard ingress DNS must be resolvable 57s
It should deploy the hello-world app 24s
It ingress resource has load balancer in status 37s
It should have a ready Certificate generated 15m0s
It hello world app responds successfully ⏭️ 0s
It uninstall apps ⏭️ 0s
It creates test pod 1m6s
It ensure key metrics are available on mimir 1m3s
It clean up test pod 33s
It scales node by creating anti-affinity pods 12m31s
It has a at least one storage class available 11s
It creates the new namespace for the test 0s
It creates the PVC 0s
It creates the pod using the PVC 0s
It binds the PVC 10s
It runs successfully 3m13s
It deletes all resources correct 21s
It cluster is registered 0s
AfterSuite 7m3s

CAPA Karpenter Suite ❌

Test Name Status Duration
BeforeSuite 13m11s
It all HelmReleases are deployed without issues 30m12s
It all default apps are deployed without issues 30m1s
It all observability-bundle apps are deployed without issues 8m1s
It all security-bundle apps are deployed without issues 0s
It should be able to connect to the management cluster 0s
It should be able to connect to the workload cluster 0s
It has all the control-plane nodes running 30s
It has all the worker nodes running 15m0s
It has all its Deployments Ready (means all replicas are running) 15m8s
It has all its StatefulSets Ready (means all replicas are running) 15m0s
It has all its DaemonSets Ready (means all daemon pods are running) 11s
It has all its Jobs completed successfully 11s
It has all of its Pods in the Running state 15m6s
It doesn't have restarting pods 15m9s
It has Cluster Ready condition with Status='True' 0s
It has all machine pools ready and running 30s
It cert-manager default ClusterIssuers are present and ready 2m0s
It sets up the api DNS records 0s
It sets up the bastion DNS records ⏭️ 0s
It should have cert-manager and external-dns deployed 3m0s
It should deploy ingress-nginx ⏭️ 0s
It cluster wildcard ingress DNS must be resolvable ⏭️ 0s
It should deploy the hello-world app ⏭️ 0s
It ingress resource has load balancer in status ⏭️ 0s
It should have a ready Certificate generated ⏭️ 0s
It hello world app responds successfully ⏭️ 0s
It uninstall apps ⏭️ 0s
It creates test pod 5s
It ensure key metrics are available on mimir 10m0s
It clean up test pod 35s
It scales node by creating anti-affinity pods 15m7s
It has a at least one storage class available 5m0s
It creates the new namespace for the test 0s
It creates the PVC 0s
It creates the pod using the PVC 0s
It binds the PVC 1m0s
It runs successfully 20m0s
It deletes all resources correct 15m0s
It cluster is registered 0s
AfterSuite 5m22s

CAPA Upgrade Suite ❌

Test Name Status Duration
BeforeSuite 12m11s
It has all the control-plane nodes running 2m35s
It has all the worker nodes running 1m5s
It has Cluster Ready condition with Status='True' 0s
It has all machine pools ready and running 30s
It all HelmReleases are deployed without issues 1m0s
It all default apps are deployed without issues 4s
It all observability-bundle apps are deployed without issues 1s
It all security-bundle apps are deployed without issues 0s
It has all its Deployments Ready (means all replicas are running) 11s
It has all its StatefulSets Ready (means all replicas are running) 11s
It has all its DaemonSets Ready (means all daemon pods are running) 11s
It has all of its Pods in the Running state 22s
It should apply new version successfully 1m21s
It successfully finishes control plane nodes rolling update if it is needed 30m0s
It detects if nodes were rolled ⏭️ 0s
It all HelmReleases are deployed without issues ⏭️ 0s
It all default apps are deployed without issues ⏭️ 0s
It all observability-bundle apps are deployed without issues ⏭️ 0s
It all security-bundle apps are deployed without issues ⏭️ 0s
It should be able to connect to the management cluster ⏭️ 0s
It should be able to connect to the workload cluster ⏭️ 0s
It has all the control-plane nodes running ⏭️ 0s
It has all the worker nodes running ⏭️ 0s
It has all its Deployments Ready (means all replicas are running) ⏭️ 0s
It has all its StatefulSets Ready (means all replicas are running) ⏭️ 0s
It has all its DaemonSets Ready (means all daemon pods are running) ⏭️ 0s
It has all its Jobs completed successfully ⏭️ 0s
It has all of its Pods in the Running state ⏭️ 0s
It doesn't have restarting pods ⏭️ 0s
It has Cluster Ready condition with Status='True' ⏭️ 0s
It has all machine pools ready and running ⏭️ 0s
It cert-manager default ClusterIssuers are present and ready ⏭️ 0s
It sets up the api DNS records ⏭️ 0s
It sets up the bastion DNS records ⏭️ 0s
It creates test pod ⏭️ 0s
It ensure key metrics are available on mimir ⏭️ 0s
It clean up test pod ⏭️ 0s
It cluster is registered ⏭️ 0s
It should have cert-manager and external-dns deployed ⏭️ 0s
It should deploy ingress-nginx ⏭️ 0s
It cluster wildcard ingress DNS must be resolvable ⏭️ 0s
It should deploy the hello-world app ⏭️ 0s
It ingress resource has load balancer in status ⏭️ 0s
It should have a ready Certificate generated ⏭️ 0s
It hello world app responds successfully ⏭️ 0s
It uninstall apps ⏭️ 0s
It scales node by creating anti-affinity pods ⏭️ 0s
It has a at least one storage class available ⏭️ 0s
It creates the new namespace for the test ⏭️ 0s
It creates the PVC ⏭️ 0s
It creates the pod using the PVC ⏭️ 0s
It binds the PVC ⏭️ 0s
It runs successfully ⏭️ 0s
It deletes all resources correct ⏭️ 0s
It should be able to pull an image from a private ECR registry ⏭️ 0s
AfterSuite 9m35s

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites


@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/china,./providers/capa/karpenter

@tinkerers-ci

tinkerers-ci bot commented Oct 30, 2025

cluster-test-suites

Run name pr-cluster-aws-1555-cluster-test-suitesp6hhg
Commit SHA 0a5d219
Result Failed ❌

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites


@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/karpenter,./providers/capa/china

@tinkerers-ci

tinkerers-ci bot commented Oct 30, 2025

cluster-test-suites

Run name pr-cluster-aws-1555-cluster-test-suitesxlg7p
Commit SHA 0a5d219
Result Failed ❌

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites


@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/karpenter

1 similar comment
@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/karpenter

@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/china

@tinkerers-ci

tinkerers-ci bot commented Oct 30, 2025

cluster-test-suites

Run name pr-cluster-aws-1555-cluster-test-suitesbv6c7
Commit SHA 0a5d219
Result Succeeded ✅

✅ Passed test suites

CAPA China Suite ✅

Test Name Status Duration
BeforeSuite 9m46s
It all HelmReleases are deployed without issues 1m0s
It all default apps are deployed without issues 2m31s
It all observability-bundle apps are deployed without issues 2s
It all security-bundle apps are deployed without issues 1s
It should be able to connect to the management cluster 0s
It should be able to connect to the workload cluster 1s
It has all the control-plane nodes running 30s
It has all the worker nodes running 1m6s
It has all its Deployments Ready (means all replicas are running) 22s
It has all its StatefulSets Ready (means all replicas are running) 11s
It has all its DaemonSets Ready (means all daemon pods are running) 11s
It has all its Jobs completed successfully 11s
It has all of its Pods in the Running state 22s
It doesn't have restarting pods 55s
It has Cluster Ready condition with Status='True' 1s
It has all machine pools ready and running 30s
It cert-manager default ClusterIssuers are present and ready 1s
It sets up the api DNS records 1s
It sets up the bastion DNS records ⏭️ 0s
It should have cert-manager and external-dns deployed 0s
It should deploy ingress-nginx 18s
It cluster wildcard ingress DNS must be resolvable 53s
It should deploy the hello-world app 4s
It ingress resource has load balancer in status 1s
It should have a ready Certificate generated 31s
It hello world app responds successfully 1s
It uninstall apps 2s
It creates test pod 6s
It ensure key metrics are available on mimir 54s
It clean up test pod 33s
It scales node by creating anti-affinity pods 2m0s
It has a at least one storage class available 11s
It creates the new namespace for the test 0s
It creates the PVC 0s
It creates the pod using the PVC 0s
It binds the PVC 10s
It runs successfully 22s
It deletes all resources correct 21s
It cluster is registered 0s
AfterSuite 16m40s

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites


@fiunchinho
Member Author

/run cluster-test-suites TARGET_SUITES=./providers/capa/karpenter

@tinkerers-ci

tinkerers-ci bot commented Oct 30, 2025

cluster-test-suites

Run name pr-cluster-aws-1555-cluster-test-suitesd69sb
Commit SHA 0a5d219
Result Failed ❌

❌ Failed test suites

CAPA Karpenter Suite ❌

Test Name Status Duration
BeforeSuite 3s
AfterSuite 0s

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites


@tinkerers-ci

tinkerers-ci bot commented Oct 30, 2025

Oh No! 😱 At least one test suite has failed during the AfterSuite cleanup stage and might have left some resources behind on the MC!

Be sure to check the full results in Tekton Dashboard to see which test suite has failed and then run the following on the associated MC to list all leftover resources:

PIPELINE_RUN="pr-cluster-aws-1555-cluster-test-suitesd69sb"

NAMES="$(kubectl api-resources --verbs list -o name | tr '\n' ,)"
kubectl get "${NAMES:0:${#NAMES}-1}" --show-kind --ignore-not-found -l cicd.giantswarm.io/pipelinerun=${PIPELINE_RUN} -A 2>/dev/null
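The `${NAMES:0:${#NAMES}-1}` expression in the snippet above is Bash substring expansion: it trims the trailing comma that `tr '\n' ,` leaves at the end of the resource list. A minimal sketch of just that trimming step, using a hypothetical example value in place of the real `kubectl api-resources` output:

```shell
#!/usr/bin/env bash
# Example value standing in for the output of:
#   kubectl api-resources --verbs list -o name | tr '\n' ,
NAMES="pods,services,deployments,"

# ${NAMES:0:${#NAMES}-1} takes a substring from offset 0 of length
# (length of NAMES minus 1), i.e. everything except the trailing comma.
echo "${NAMES:0:${#NAMES}-1}"
```

This prints `pods,services,deployments`, which is the comma-separated list `kubectl get` expects as its resource argument.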

@tinkerers-ci

tinkerers-ci bot commented Oct 30, 2025

cluster-test-suites

Run name pr-cluster-aws-1555-cluster-test-suitesmx72f
Commit SHA 0a5d219
Result Failed ❌

❌ Failed test suites

CAPA Karpenter Suite ❌

Test Name Status Duration
BeforeSuite 11m20s
It all HelmReleases are deployed without issues 30m11s
It all default apps are deployed without issues 30m1s
It all observability-bundle apps are deployed without issues 8m1s
It all security-bundle apps are deployed without issues 0s
It should be able to connect to the management cluster 0s
It should be able to connect to the workload cluster 0s
It has all the control-plane nodes running 30s
It has all the worker nodes running 15m0s
It has all its Deployments Ready (means all replicas are running) 15m10s
It has all its StatefulSets Ready (means all replicas are running) 15m0s
It has all its DaemonSets Ready (means all daemon pods are running) 11s
It has all its Jobs completed successfully 1m6s
It has all of its Pods in the Running state 15m9s
It doesn't have restarting pods 55s
It has Cluster Ready condition with Status='True' 0s
It has all machine pools ready and running 30s
It cert-manager default ClusterIssuers are present and ready 2m0s
It sets up the api DNS records 0s
It sets up the bastion DNS records ⏭️ 0s
It should have cert-manager and external-dns deployed 3m0s
It should deploy ingress-nginx ⏭️ 0s
It cluster wildcard ingress DNS must be resolvable ⏭️ 0s
It should deploy the hello-world app ⏭️ 0s
It ingress resource has load balancer in status ⏭️ 0s
It should have a ready Certificate generated ⏭️ 0s
It hello world app responds successfully ⏭️ 0s
It uninstall apps ⏭️ 0s
It creates test pod 5s
It ensure key metrics are available on mimir 10m0s
It clean up test pod 35s
It scales node by creating anti-affinity pods 15m7s
It has a at least one storage class available 4m14s
It creates the new namespace for the test 0s
It creates the PVC 0s
It creates the pod using the PVC 0s
It binds the PVC 1m0s
It runs successfully 20m0s
It deletes all resources correct 15m0s
It cluster is registered 0s
AfterSuite 4m32s

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites


@tinkerers-ci

tinkerers-ci bot commented Oct 30, 2025

cluster-test-suites

Run name pr-cluster-aws-1555-cluster-test-suiteskl6hd
Commit SHA 0a5d219
Result Failed ❌

❌ Failed test suites

CAPA Karpenter Suite ❌

Test Name Status Duration
BeforeSuite 10m59s
It all HelmReleases are deployed without issues 30m16s
It all default apps are deployed without issues 30m1s
It all observability-bundle apps are deployed without issues 8m1s
It all security-bundle apps are deployed without issues 0s
It should be able to connect to the management cluster 0s
It should be able to connect to the workload cluster 0s
It has all the control-plane nodes running 30s
It has all the worker nodes running 15m0s
It has all its Deployments Ready (means all replicas are running) 15m10s
It has all its StatefulSets Ready (means all replicas are running) 15m0s
It has all its DaemonSets Ready (means all daemon pods are running) 11s
It has all its Jobs completed successfully 2m35s
It has all of its Pods in the Running state 15m7s
It doesn't have restarting pods 15m8s
It has Cluster Ready condition with Status='True' 0s
It has all machine pools ready and running 30s
It cert-manager default ClusterIssuers are present and ready 2m0s
It sets up the api DNS records 0s
It sets up the bastion DNS records ⏭️ 0s
It should have cert-manager and external-dns deployed 3m0s
It should deploy ingress-nginx ⏭️ 0s
It cluster wildcard ingress DNS must be resolvable ⏭️ 0s
It should deploy the hello-world app ⏭️ 0s
It ingress resource has load balancer in status ⏭️ 0s
It should have a ready Certificate generated ⏭️ 0s
It hello world app responds successfully ⏭️ 0s
It uninstall apps ⏭️ 0s
It creates test pod 5s
It ensure key metrics are available on mimir 10m0s
It clean up test pod 30s
It scales node by creating anti-affinity pods 15m12s
It has a at least one storage class available 5m0s
It creates the new namespace for the test 0s
It creates the PVC 0s
It creates the pod using the PVC 0s
It binds the PVC 1m0s
It runs successfully 20m0s
It deletes all resources correct 15m0s
It cluster is registered 0s
AfterSuite 3m42s

📋 View full results in Tekton Dashboard


Rerun trigger:
/run cluster-test-suites

