Add test case for 3 control plane nodes and internal load balancer #1550
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: bochengchu. It still needs approval from an approver for each of the affected files.
✅ Deploy Preview for kubernetes-sigs-cluster-api-gcp ready!
Hi @bochengchu. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. Once the patch is verified, the new status will be reflected.
```diff
 containers:
   # Change the value of image field below to your controller image URL
-  - image: gcr.io/k8s-staging-cluster-api-gcp/cluster-api-gcp-controller:e2e
+  - image: ${CONTROLLER_IMAGE}
```
Do we need to do this? The tests are running, I believe. It's not a blocker for ok-to-test, but it will likely be a blocker for merging. If we have to do this, I suggest splitting it out into its own commit (or PR?), and a comment would help our future selves understand why.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This is needed for the GKE cluster, since we have to push the image to GCR. Unlike the existing kind clusters where we can load the image into the cluster, for GKE we have to upload it.
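To make the distinction concrete, a rough sketch of the two workflows (the image name comes from the `${CONTROLLER_IMAGE}` variable in the diff above; the kind cluster name is illustrative):

```shell
# With a kind bootstrap cluster, the locally built image can be side-loaded
# directly onto the kind node:
docker build -t "${CONTROLLER_IMAGE}" .
kind load docker-image "${CONTROLLER_IMAGE}" --name "${TEST_NAME}"

# With a GKE bootstrap cluster there is no local node to side-load onto, so
# the image has to be pushed to a registry the cluster can pull from (GCR):
gcloud auth configure-docker
docker push "${CONTROLLER_IMAGE}"
```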
```sh
# create a GKE cluster to be used as a bootstrap cluster
create_gke_bootstrap_cluster() {
  gcloud container clusters create "${TEST_NAME}-gke-bootstrap" --project "$GCP_PROJECT" \
```
OK, this is nice, because we can develop Workload Identity support and then hopefully add the same thing to kind. (Workload Identity is Google's name for automatic authentication to GCP without needing a ServiceAccount key in a secret; historically it used a proxy, but now it can use identity federation via IAM workload pools.)
But ... it probably could be a separate commit or PR. This one is less of a blocker IMO, because it's additive.
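For reference, the usual Workload Identity wiring looks roughly like this (the GCP service account name, namespace, and Kubernetes ServiceAccount name below are placeholders, not what this PR uses):

```shell
# Enable Workload Identity when creating the bootstrap cluster:
gcloud container clusters create "${TEST_NAME}-gke-bootstrap" \
  --project "${GCP_PROJECT}" \
  --workload-pool="${GCP_PROJECT}.svc.id.goog"

# Allow a Kubernetes ServiceAccount to impersonate a GCP service account
# (names here are illustrative):
gcloud iam service-accounts add-iam-policy-binding \
  "capg-controller@${GCP_PROJECT}.iam.gserviceaccount.com" \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:${GCP_PROJECT}.svc.id.goog[capg-system/capg-controller-manager]"

# Annotate the KSA so GKE injects the federated credentials:
kubectl annotate serviceaccount capg-controller-manager -n capg-system \
  iam.gke.io/gcp-service-account="capg-controller@${GCP_PROJECT}.iam.gserviceaccount.com"
```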
```sh
if [[ -n "${SKIP_INIT_GKE_BOOTSTRAP:-}" ]]; then
  echo "Skipping GKE bootstrap cluster initialization..."
else
  create_gke_bootstrap_cluster
```
I think we don't have permission to create GKE clusters in prow (and we probably don't want to take a hard dependency on GKE in OSS testing). I suggest this should be a different script, or that the default behavior should not change (e.g. only create a GKE cluster if TEST_MANAGEMENT_CLUSTER=gke or something like that)
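A minimal sketch of that suggestion, keeping kind as the default (the `TEST_MANAGEMENT_CLUSTER` variable and the kind helper function are hypothetical names, and the echoes stand in for the real cluster-creation commands):

```shell
create_gke_bootstrap_cluster() { echo "creating GKE bootstrap cluster"; }
create_kind_bootstrap_cluster() { echo "creating kind bootstrap cluster"; }

create_bootstrap_cluster() {
  # Default behavior stays kind; GKE becomes strictly opt-in.
  if [[ "${TEST_MANAGEMENT_CLUSTER:-kind}" == "gke" ]]; then
    create_gke_bootstrap_cluster
  else
    create_kind_bootstrap_cluster
  fi
}

create_bootstrap_cluster
```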
Sounds good; if we end up using a GKE cluster, I can separate that out.
```diff
@@ -0,0 +1,173 @@
+---
+apiVersion: cluster.x-k8s.io/v1beta1
```
Not something to be addressed in this PR (i.e. this problem already exists), but we should consider using kustomize or patching a base config in code, so we can understand more easily what is different in each scenario.
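For example, each scenario could become a small overlay on a shared base, so a diff of the overlay shows exactly what the scenario changes (the file layout and patch name here are illustrative, not from this repo):

```yaml
# e2e/templates/internal-lb/kustomization.yaml (illustrative layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - path: internal-lb-patch.yaml
    target:
      kind: GCPCluster
```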
```go
Expect(e2eConfig).ToNot(BeNil(), "Invalid argument. e2eConfig can't be nil when calling %s spec", specName)
Expect(clusterctlConfigPath).To(BeAnExistingFile(), "Invalid argument. clusterctlConfigPath must be an existing file when calling %s spec", specName)
Expect(bootstrapClusterProxy).ToNot(BeNil(), "Invalid argument. bootstrapClusterProxy can't be nil when calling %s spec", specName)
Expect(bootstrapGKEClusterProxy).ToNot(BeNil(), "Invalid argument. bootstrapGKEClusterProxy can't be nil when calling %s spec", specName)
```
I'm not sure what this is (yet). Is this required if we use kind for the bootstrap cluster (instead of GKE)?
```go
Context("Creating a control-plane cluster with an internal load balancer", func() {
	It("Should create a cluster with 1 control-plane and 1 worker node with an internal load balancer", func() {
		// This test requires a GKE bootstrap cluster.
```
Does it have to require a GKE cluster? I don't think we allow GKE in prow (though we'll find out in a minute I guess!)
It doesn't have to be GKE, but using a GKE cluster is the simplest way I can think of. Because of the ILB, the controllers have to stay within the same private network. GKE is one way to do that; we could also use a kind cluster like the existing tests, but it would be much more complicated to configure. One possibility I found is to use Cloud VPN and create a tunnel, but then we would have to configure the network (and router?) on the side where the kind cluster is hosted.
I'm not sure we can take a dependency on GKE in our prow tests, and I don't know if we have to (maybe it's because otherwise the internal LB is not reachable?). But ... that's a potential blocker for merging, not for testing. /ok-to-test
@bochengchu: The following test failed. Full PR test history is available on your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
In addition to the permission to create a GKE cluster, this PR also needs permission to push the image to GCR. I'm not sure how we can add that, though.
What type of PR is this?
/kind other
What this PR does / why we need it:
Follow-up for #1536. This should fail now, but should be passing after #1533 is merged.
Release note: