✨ Extend interface with runner #62
base: main

Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mjudeikis.
I was also thinking that extending the provider interface for something like this makes sense. I wonder if it would be helpful to have the multicluster manager's existing Start function also call the provider's Run func before it sets everything up, and then maybe the SetupWithMgr logic could fit somewhere in there too.
@embik and I have been discussing this async. My take was that one might want to control the lifecycle of those; I personally prefer leaving these things flexible. And the result is still the same, as the manager inside operates on …, which means you still need the interface exposed.
Yeah, the "who is responsible" is definitely tricky. On the other side of things it can be easy to forget to start the provider (which happened to me at one point 😅 ) What would you think of renaming Run() on the provider to something like StartupHook() and have it return a func? Maybe then we get the best of both worlds, a managed startup and clearer responsibility. func (p *scopedManager) Start(ctx context.Context) error {
prov := p.Manager.GetProvider()
if prov != nil && prov.StartupHook != nil {
if err := prov.StartupHook(ctx); err != nil {
return fmt.Errorf("failed to run provider: %w", err)
}
}
return p.Manager.GetLocalManager().Start(ctx)
} |
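A minimal sketch of what the provider side of that idea could look like, assuming `StartupHook` is exposed as a func-typed field so the nil check above works; `MyProvider` and its internals are illustrative assumptions, not existing API:

```go
package myprovider

import "context"

// MyProvider is a hypothetical provider used only to illustrate the
// "StartupHook instead of Run" idea discussed above.
type MyProvider struct {
	// StartupHook is the optional long-running start function the manager
	// would invoke; nil means there is nothing to start.
	StartupHook func(ctx context.Context) error
}

// New returns a provider whose StartupHook simply blocks until the context
// is cancelled, standing in for a real watch/discovery loop.
func New() *MyProvider {
	p := &MyProvider{}
	p.StartupHook = func(ctx context.Context) error {
		<-ctx.Done() // a real provider would discover and engage clusters here
		return nil
	}
	return p
}
```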
I think I would be ok with that. In my current POC I'm kind of building towards the idea that we would need …
IMHO it would make some sense for the manager to start the provider. At the moment it will always end in this setup:

```go
provider := myprovider.New()
mgr, _ := mcmanager.New(ctrl.GetConfigOrDie(), provider, mcmanager.Options{})
provider.SetupWithManager(mgr) // Or something else to set the manager

go mgr.Start(ctx)
go provider.Run(ctx)

// wait
```

Which could instead be:

```go
provider := myprovider.New()
mgr, _ := mcmanager.New(ctrl.GetConfigOrDie(), provider, mcmanager.Options{}) // manager sets itself on the provider
mgr.Start(ctx) // manager starts the provider
```

When I read the interfaces for the first time I found it odd that the provider interface had no start method. However, I think that was by design, as it might be too early to decide the lifecycle of both the provider and the manager, and fixing it now could stifle design choices. If the lifecycle is less flexible, some designs might not be possible. Then again - the lifecycles are already linked somewhat, so it would make sense to include that in the design.

StartWithManager

If the lifecycle is linked like this, I'd prefer something like:

```go
func (p *Provider) SetupWithManager(mgr mcmanager.Manager) error {
	if mgr == nil {
		return ErrMgrNil
	}
	p.mgr = mgr
	return nil
}

func (p *Provider) Start(ctx context.Context) error {
	if p.mgr == nil {
		return ErrMgrNil
	}
	// ...
}
```

or:

```go
func (p *Provider) StartWithManager(ctx context.Context, mgr mcmanager.Manager) error {
	if mgr == nil {
		return ErrMgrNil
	}
	p.mgr = mgr
	// ...
}
```

And providers that would also work without a mgr wouldn't care either way.
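For completeness, a rough sketch of what the manager side could look like under the `StartWithManager` variant above; `mcScopedManager`, its `provider` and `logger` fields, and the error handling are assumptions for illustration, not existing API:

```go
// Start runs the provider (passing the manager itself) alongside the local
// manager. Sketch only; real code would need to propagate the provider's
// error instead of merely logging it.
func (m *mcScopedManager) Start(ctx context.Context) error {
	if m.provider != nil {
		go func() {
			if err := m.provider.StartWithManager(ctx, m); err != nil {
				m.logger.Error(err, "provider stopped with error")
			}
		}()
	}
	return m.GetLocalManager().Start(ctx)
}
```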
So if I understand correctly, you are suggesting using …
Exactly, looking at some sample code the …
I have a hard time following everyone's position, to be honest. I can see two cases (which I think map to what you're suggesting, but we stand a bit in the middle today?):
1. The manager owns the provider's lifecycle and starts it as one of its runnables.
2. The provider runs entirely independently of the manager.
I think 1 makes the lifecycle easier and requires less provider-specific management. 2 gives a lot of control and independence from the rest of the controllers (could the controllers bog down the manager and therefore block the provider from doing its job -- maybe there could be a small chance of deadlock?), but I'm not sure the added complexity is worth it. In conclusion, if we go for 1, the only "external" requirement is for the provider to be a runnable for the manager to manage.
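A minimal sketch of option 1, assuming the provider satisfies controller-runtime's `manager.Runnable` contract (a blocking `Start(ctx) error`) and that `mcmanager.Manager` exposes controller-runtime's `Add`; `myprovider` mirrors the hypothetical provider from the snippets above:

```go
provider := myprovider.New()
mgr, err := mcmanager.New(ctrl.GetConfigOrDie(), provider, mcmanager.Options{})
if err != nil {
	return err
}
// The manager starts and stops the provider together with every other runnable.
if err := mgr.Add(provider); err != nil {
	return err
}
return mgr.Start(ctx)
```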
We don't stand in the middle - the status quo is 2: the provider is entirely independent from the manager. That has some problems, e.g. the user needs to know when the provider expects to be run. Before the manager? After the manager? When should the manager be set on the provider? Before or after starting the manager? E.g. see multicluster-runtime/pkg/manager/manager.go, lines 201 to 210 at a41032c.
Say a provider only has the … :

```go
prov := myprov.New(myprov.Opts{})
mgr, err := mcmanager.New(cfg, prov, mcmanager.Opts{})
prov.SetManager(mgr)

mgr.Add(myrunnable1)
mgr.Add(myrunnable2)

return mgr.Start(ctx)
```

In reality the …
Correct - and to prevent the provider starting to provide clusters too early, the manager should pass itself when running the provider. When @mjudeikis and I were talking he also suggested that the provider interface could look something like:

```go
type Provider interface {
	Start(context.Context, Aware)
	Get(...)
	IndexField(...)
}
```

Providers that act like a controller (e.g. cluster-api or gardener) are already expecting to be passed a manager anyhow, because they need their manager to target whatever cluster they pull the data from, not necessarily the cluster they are running in.
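A rough sketch of a provider implementing that shape; the `Provider` struct and the `discoverClusters` helper are hypothetical, while the `Engage(ctx, name, cluster)` call mirrors how it is used elsewhere in this thread:

```go
// Start runs the provider until the context is done, engaging every cluster
// it discovers with the Aware component (typically the multicluster manager).
func (p *Provider) Start(ctx context.Context, aware multicluster.Aware) error {
	clusters, err := p.discoverClusters(ctx) // hypothetical discovery helper
	if err != nil {
		return err
	}
	for name, cl := range clusters {
		if err := aware.Engage(ctx, name, cl); err != nil {
			return fmt.Errorf("failed to engage cluster %q: %w", name, err)
		}
	}
	<-ctx.Done()
	return nil
}
```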
Yeah, this might be an option.
It sounds reasonable. Need to test this in the code. So I think I'm leaning towards: …
Does anybody have anything against this?
If the method takes a manager, I like that.
I think the only issue could be circularity in Go. Need to shuffle the code and see, as I don't have (and I hope I never will need to have) a mental model of all the packages :)
We're on the same page! I actually have been pushing towards making it a controller in some PRs, because it quickly became clear we needed a way to make setup and management of a provider simple. I'm not too scared of the circular dependency; I think the provider interface could be put into a separate package alongside helper methods, and both the provider implementations and the manager would import it. Aware is not too clear to me, that's recent, right? What are we aware of? I do like …
(Referenced: multicluster-runtime/pkg/multicluster/multicluster.go, lines 26 to 43 at 00d670c)
Since the …
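For readers without the embedded snippet, an approximate reconstruction of the referenced `Aware` interface, inferred from how `Engage` is called elsewhere in this thread; consult the linked lines for the authoritative definition:

```go
// Aware is implemented by components that can be informed about new clusters,
// e.g. the multicluster manager. Approximate shape, not copied verbatim.
type Aware interface {
	// Engage is called when the component should start operating against the
	// given cluster; the context is tied to that cluster's lifecycle.
	Engage(ctx context.Context, name string, cl cluster.Cluster) error
}
```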
I updated the code. Moved … Having … It does not solve the startup-ordering problem, but I'm not sure how I feel about that either; I like explicitness. @ntnn @embik @corentone @FourFifthsCode, what's your take on the current iteration?
providers/single/provider.go (outdated)

```go
p.aware = aware

if err := p.aware.Engage(ctx, p.name, p.cl); err != nil {
```

Suggested change:

```go
if err := aware.Engage(ctx, p.name, p.cl); err != nil {
```

Just a nit.
```go
	return fmt.Errorf("manager is nil")
}
p.mgr = mgr
p.mcmanager = mgr

// Get the local manager from the multicluster manager
localMgr := mgr.GetLocalManager()
```

Not specifically for this change, but it comes up now with the manager: wouldn't it be better to give the kubeconfig provider a config and have the provider spin up a manager by itself, rather than insisting on the same cluster?
```mermaid
flowchart LR
    subgraph computeCluster[Compute Cluster]
        kubeconfigMCP[MCR-based operator with Kubeconfig provider]
    end
    kubeconfigSecrets -.-> kubeconfigMCP
    subgraph secretsCluster[Cluster with Secrets]
        kubeconfigSecrets[Kubeconfigs]
    end
    kubeconfigMCP --> target1
    kubeconfigSecrets -.-> target1
    kubeconfigMCP --> target2
    kubeconfigSecrets -.-> target2
    subgraph target1[Target Cluster 1]
    end
    subgraph target2[Target Cluster 2]
    end
```
Then the setup would resolve itself:

```go
cfg := ctrl.GetConfigOrDie()
kconfigProvider := kubeconfig.New(kubeconfig.Options{
	Config: cfg, // this could be the same config as for mcmanager or a config for another cluster to read the secrets from
})
mcmgr, _ := mcmanager.New(cfg, kconfigProvider, mcmanager.Options{})

return mcmgr.Start(ctx)
```

@FourFifthsCode What do you think?
That would open things up a bit! Would just need to try and make it really clear where users are getting their secrets from, and hopefully make it straightforward to configure.
It does almost feel cleaner to have the provider start the manager, but then that might make bootstrapping more complicated on the manager side.
Yeah, it would - and it would block e.g. having multiple providers in one manager.
Let's do this in a follow-up. While this sounds right, it somehow breaks the pattern we've been using. I think I want to see it in a separate PR so we can discuss.
```go
defer p.lock.Unlock()

p.aware = aware
```

Suggested change:

```go
p.aware = aware
p.lock.Unlock()
```

Otherwise the provider deadlocks, I think? Because the reconcile also locks.
But there should not be any reconciliation until started. However, it's better to be safe than sorry.
Correct, but `.Start` blocks until the context is done and holds the lock the whole time, so the Reconcile method will never execute because it cannot acquire the lock.
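A condensed sketch of the deadlock being described; the `Provider` fields and the use of controller-runtime's `reconcile` types are assumptions for illustration only (imports elided):

```go
type Provider struct {
	lock  sync.Mutex
	aware multicluster.Aware
}

// Start holds the mutex for its entire blocking lifetime...
func (p *Provider) Start(ctx context.Context, aware multicluster.Aware) error {
	p.lock.Lock()
	defer p.lock.Unlock() // not released until ctx is done

	p.aware = aware
	<-ctx.Done() // blocks here while still holding the lock
	return nil
}

// ...so Reconcile can never acquire the mutex and therefore never runs.
func (p *Provider) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
	p.lock.Lock() // blocks forever while Start is running
	defer p.lock.Unlock()
	return reconcile.Result{}, nil
}
```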
@corentone I think you laid out the positions clearly!
I tend to lean toward option 1 myself, because I view managers as a framework that helps get things bootstrapped, and if I really need something custom I can just use informers directly. The Runnable interface is designed around this. The one thing that is tricky is the circular dependency: providers with controllers need access to the manager, but managers need access to the clusters a provider manages 😅 Also, not all providers need a controller.
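For reference, this is (to the best of my recollection; doc comment paraphrased) the shape of controller-runtime's `manager.Runnable` interface the provider would need to satisfy for option 1:

```go
// Runnable allows a component to be started by the manager; Start blocks
// until the context is closed or an error occurs.
type Runnable interface {
	Start(ctx context.Context) error
}
```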
I will ask differently: would what's currently in the code solve the issues/challenges you've seen on your end, or are you missing something?
When creating vendor-agnostic providers, you start by establishing your provider alongside all applications. This means you need to wire every supported provider into the depths of the code, as the main interface does, to support `Run`. The majority of providers will be runnable, so I think it makes sense to make it a permanent member of the interface. But `SetupWithManager` - no. And it can be done way earlier in the lifecycle of the program. Now, if you pass `mcmanager.Provider` around in the code, you can't start it, as the method is not exported. So you end up with: … vs just having a single interface to back any implementation.
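A sketch of the kind of single interface that comment argues for; the method names and signatures here are illustrative guesses rather than the actual `mcmanager.Provider` definition:

```go
// Provider is a hypothetical unified interface: any provider can be asked for
// clusters and started by generic code, without type assertions or
// provider-specific wiring.
type Provider interface {
	// Get returns the cluster with the given name, if the provider knows it.
	Get(ctx context.Context, name string) (cluster.Cluster, error)
	// Run starts the provider, engaging discovered clusters with the given
	// Aware component, and blocks until the context is done.
	Run(ctx context.Context, aware multicluster.Aware) error
}
```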