🌱 Prefactoring: controller-sharding #75
base: main
Conversation
Signed-off-by: Dr. Stefan Schimanski <[email protected]>
@zachsmith1: changing LGTM is restricted to collaborators. In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: sttts, zachsmith1. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
	return nil

// Check if we already have this cluster engaged with the SAME context
if old, ok := c.clusters[name]; ok {
	if old.cluster == cl && old.ctx.Err() == nil {
this might be a bit of a stupid Go question, but this comparison will fail when the pointer backing the interface points to a different object, right? Should we be concerned that providers might construct cluster objects on the fly and could depend on names being equal to stop replacing an existing cluster?
Yes, interface equality on a pointer-backed cluster compares pointer identity, so a new instance for the same name won't equal the old one. That's intentional: a new instance usually means the kubeconfig/implementation changed and we should re-engage. If a provider churns instances that are semantically the same, we can relax this by also checking a stable handle (e.g. cl.GetCache()), or document that providers should return a stable instance per name unless they intend a reattach.
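As a minimal sketch of the Go semantics being discussed (the Cluster interface and fakeCluster type below are hypothetical stand-ins, not the real controller-runtime cluster.Cluster), comparing two interface values that wrap pointers compares the pointers themselves, so two instances with the same name are still unequal:

```go
package main

import "fmt"

// Cluster is a hypothetical stand-in for the much larger cluster interface.
type Cluster interface {
	Name() string
}

type fakeCluster struct{ name string }

func (f *fakeCluster) Name() string { return f.name }

func main() {
	a := &fakeCluster{name: "us-east-1"}
	b := &fakeCluster{name: "us-east-1"} // same name, fresh instance

	var ia Cluster = a
	var ib Cluster = b

	fmt.Println(ia == ib)         // false: different pointer identity despite equal names
	fmt.Println(ia == Cluster(a)) // true: the exact same instance
}
```

Under this reading, a provider that constructs a fresh cluster object on every call would always fail the old.cluster == cl check and trigger a re-engage, which is the intended behavior when the backing configuration may have changed.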
Makes sense, I think this is a valid assumption. I wonder if we should document this explicitly though.
done
Signed-off-by: Dr. Stefan Schimanski <[email protected]>
All the changes from #74 that are not directly related to sharding, split out to make review easier.