WIP: PoC Custom pool booting #5361
base: main
Conversation
Skipping CI for Draft Pull Request.
The general workflow sounds fine to me; it definitely aligns with our general discussions around how to make this work.
The only real "functionality" we're adding is having the node request additional labels via annotations, which I feel should be safe: if the CSR went through to make it a node, we should already trust it.
I guess the workflow today probably already sees users adding labels via Machine/MachineSet objects, so at worst we'd duplicate that part? Either way, it shouldn't cause an error.
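The once-only guard behind that annotation mechanism can be sketched roughly as follows. This is illustrative shell only, not the actual controller code; the annotation key `customPoolLabelsApplied` comes from this PR, but the control flow here is an assumption about the intended behavior:

```shell
# Illustrative sketch: apply requested custom-pool labels exactly once,
# guarded by the bookkeeping annotation this PR adds.
applied=""   # imagine this holds the node's customPoolLabelsApplied annotation value
if [ -z "$applied" ]; then
  msg="applying requested custom-pool labels"
  applied="true"   # in the real flow this would be written back as a node annotation
else
  msg="labels already applied; skipping"
fi
echo "$msg"
```

This is why a label removed by some other actor is deliberately not re-applied: the guard checks the bookkeeping annotation, not the labels themselves.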
```go
nodeAnnotations := map[string]string{
	daemonconsts.CurrentMachineConfigAnnotationKey:    conf,
	daemonconsts.DesiredMachineConfigAnnotationKey:    conf,
	daemonconsts.FirstPivotMachineConfigAnnotationKey: conf,
}
```
Oh, for some reason I forgot we did this, neat
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: djoshy, yuqi-zhang. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Hello, just for information: we started using the managed user-data secret in our MachineSet definition (to get the correct final config at startup). It works, but we quickly ran into issues with clusters that were created with a previous version of OpenShift.
Hi @jlhuilier-1a! You're correct: the reason you're running into that is that your boot images are likely out of date. The *-user-data stub needs to be compatible with the boot image referenced by your MachineSet. Some more context can be found here; the boot image update mechanism described in this document also attempts to upgrade these secrets to the newest version for the same reason. As for this PR, the instructions described were just the easiest path to test 😄 You could also copy an existing user-data secret (say, call it infra-user-data), edit the MCS endpoint within the stub to target the infra pool, and then edit the MachineSet to reference that instead; this should work in a similar fashion. When we eventually bring this feature to GA, we will be sure to note this in the workflow. Thanks for bringing this up, and for your review!
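The secret-copy alternative described above could look roughly like this. The `sed` edit below runs on a made-up stub, and the `oc` commands in the comments are only the general shape (secret name, namespace, and API URL are assumptions, not copy-paste instructions):

```shell
# Rough sketch of the alternative flow (names and URL are examples):
#   oc -n openshift-machine-api get secret worker-user-data \
#     -o jsonpath='{.data.userData}' | base64 -d > stub.ign
# The stub's Ignition config points at the MCS /config/<pool> endpoint;
# retarget it from the worker pool to the custom infra pool:
stub='{"ignition":{"config":{"merge":[{"source":"https://api-int.example.com:22623/config/worker"}]}}}'
retargeted=$(printf '%s' "$stub" | sed 's|/config/worker|/config/infra|')
echo "$retargeted"
# ...then re-create the edited stub as a new secret (e.g. infra-user-data)
# and point the MachineSet's userDataSecret at that instead.
```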
Disclaimer: I mainly just wrote this for fun, to see if it was easy to do 😄 I might be missing certain use cases, but just wanted to open it up for wider review.
- What I did
- Added a new annotation (`machineconfiguration.openshift.io/firstPivotConfig`) that stores the very first config that was served to the node.
- Added an annotation (`machineconfiguration.openshift.io/customPoolLabelsApplied`) for bookkeeping purposes after this, so it doesn't attempt to label the node again if it was removed by some other actor.
- How to verify it
- Create a custom MCP, e.g. `infra`:
- Copy an existing MachineSet and edit its name to be unique.
- Change the MachineSet's `userDataSecret` field to `infra-user-data-managed` if you used the `infra` MCP I provided in the previous step. If you used another MCP name, use `$MCP_NAME-user-data-managed` instead.
- I also changed the cluster-api-machineset labels to match, but I am unsure if it is needed.
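The managed-secret naming convention from the steps above can be sketched like this; the pool name is an example, and the `oc` commands in the comments are the general shape of a live-cluster check, not exact instructions:

```shell
# The managed user-data secret follows the $MCP_NAME-user-data-managed
# naming convention described above:
MCP_NAME=infra
SECRET_NAME="${MCP_NAME}-user-data-managed"
echo "$SECRET_NAME"
# On a live cluster you might then verify (shape only; node name is a placeholder):
#   oc -n openshift-machine-api get secret "$SECRET_NAME"
#   oc get node <new-node> -o jsonpath='{.metadata.annotations}' | grep firstPivotConfig
```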