fix(deps): update non-minor dependencies #59
Open: renovate wants to merge 1 commit into main from renovate/non-minor-deps
+3 −3
Conversation
The renovate/non-minor-deps branch was force-pushed several times: a9d5ac5 → 96c1f80, 4b6c81c → 20cf6d3, 20cf6d3 → 1fbb590, 1fbb590 → e15ee19, e15ee19 → 82cccd7, 82cccd7 → d4a30f8, d4a30f8 → 491d172, 491d172 → 6de456e, 6de456e → 806a18f, 806a18f → f1a7623, b8c9242 → 6cabed5, 2195f89 → 2207ce5, 3207006 → 44c620e, 9ee1c69 → 529a7e9, c43af77 → 8675f9c, 8675f9c → a5e2ac6.
This PR contains the following updates:
- `v0.3.67` -> `v0.3.115`
- `1.25.1` -> `1.25.3`
- `a9b7c2d` -> `fb2beab`

Release Notes
sap/component-operator-runtime (github.com/sap/component-operator-runtime)
v0.3.115 (Compare Source)
v0.3.114 (Compare Source)
v0.3.113 (Compare Source)
v0.3.112 (Compare Source)
v0.3.111 (Compare Source)
v0.3.110 (Compare Source)
v0.3.109 (Compare Source)
v0.3.108 (Compare Source)
v0.3.107 (Compare Source)
Enhancements
Status handling
So far, the observed generation of dependent objects was expected at `status.observedGeneration`. Some newer projects seem to populate `status.conditions[type=*].observedGeneration` additionally, or even exclusively. Starting with this release, if a status hint of `has-observed-generation` is present but no `status.observedGeneration` is found, the framework will look at `.status.conditions[type=Ready].observedGeneration` as a fallback.
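As a minimal sketch of such a fallback lookup on an unstructured dependent object (the helper below is illustrative, not the framework's actual implementation):

```go
package status

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// observedGeneration prefers status.observedGeneration and falls back to
// status.conditions[type=Ready].observedGeneration, as described above.
func observedGeneration(obj *unstructured.Unstructured) (int64, bool) {
	if gen, found, err := unstructured.NestedInt64(obj.Object, "status", "observedGeneration"); err == nil && found {
		return gen, true
	}
	conditions, found, err := unstructured.NestedSlice(obj.Object, "status", "conditions")
	if err != nil || !found {
		return 0, false
	}
	for _, item := range conditions {
		condition, ok := item.(map[string]interface{})
		if !ok || condition["type"] != "Ready" {
			continue
		}
		if gen, ok := condition["observedGeneration"].(int64); ok {
			return gen, true
		}
	}
	return 0, false
}
```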
Secret handling

It is well known that the `stringData` field of secrets is not handled properly during server-side apply. To overcome this, we now merge `stringData` into `data` before applying, and clear `stringData`. This should be consistent with the logic that would otherwise happen in the API server. See SAP#313.
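A rough sketch of that merge on a `corev1.Secret` (illustrating the described behavior, not the framework's code; on key conflicts `stringData` wins, which matches what the API server would do):

```go
package secrets

import (
	corev1 "k8s.io/api/core/v1"
)

// mergeStringData folds stringData into data and clears stringData, so that
// the secret can be applied via server-side apply without losing fields.
func mergeStringData(secret *corev1.Secret) {
	if len(secret.StringData) == 0 {
		return
	}
	if secret.Data == nil {
		secret.Data = make(map[string][]byte, len(secret.StringData))
	}
	for key, value := range secret.StringData {
		secret.Data[key] = []byte(value)
	}
	secret.StringData = nil
}
```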
Deletion handling
It is difficult to move dependent objects from one component to another; that means adding the object to the source manifest of the new component and removing it from the sources of the old one.

The typical flow looks like this: the object is declared in the new component with `adoption-policy: always`, at least for a certain time, until the change is rolled out everywhere. Then, when applying that, the new component will force-adopt the existing object, and the old component will run into an owner conflict error. This then either has to be solved manually by patching the old component's inventory, or (in a previous apply) the object must have been applied by the old component with `delete-policy: orphan` or `delete-policy: orphan-on-apply`.

It would be nice (and reasonable) if, upon dependent deletion (either due to component apply or component deletion), dependents with no or a different owner were auto-orphaned. This would eliminate the need for the manual post-processing or for a preflight apply run setting delete policies (as explained above).
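Purely as an illustration of the annotations involved (the `mycomponent-operator.mydomain.io` prefix follows the naming used elsewhere in these notes and has to match the concrete operator's configuration):

```go
package transfer

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// annotateForTransfer sketches how a dependent object could be moved between
// components: oldCopy is the object as rendered by the old component, newCopy
// the object as rendered by the new component.
func annotateForTransfer(oldCopy, newCopy *unstructured.Unstructured) {
	// Old component: orphan the object instead of deleting it once it
	// disappears from the old component's manifest.
	oldCopy.SetAnnotations(map[string]string{
		"mycomponent-operator.mydomain.io/delete-policy": "orphan-on-apply",
	})
	// New component: force-adopt the existing object even if it is currently
	// owned by another component.
	newCopy.SetAnnotations(map[string]string{
		"mycomponent-operator.mydomain.io/adoption-policy": "always",
	})
}
```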
v0.3.106 (Compare Source)
v0.3.105 (Compare Source)
v0.3.104 (Compare Source)
v0.3.103 (Compare Source)
Bug fixes
Fixes #305.
v0.3.102 (Compare Source)
Bug fixes
This release fixes #302.
v0.3.101 (Compare Source)
Bug fixes
Fixes SAP#299.
v0.3.100 (Compare Source)
v0.3.99 (Compare Source)
Enhancements
KustomizeGenerator
We add the possibility to make the `KustomizeGenerator` ignore certain files by creating a file called `.component-ignore` in the kustomization directory (that is, the directory passed to `NewKustomizeGenerator()`). The file uses the usual `.gitignore` syntax.

Notes:
- `.component-config.yaml` and `.component-ignore` are always included
- `kustomization.yaml` (in case the directory does not have one)
- the `readFile` function (and related functions)

v0.3.98 (Compare Source)
Enhancements
This release adds the following metrics:
v0.3.97 (Compare Source)
v0.3.96 (Compare Source)
Enhancements
This release makes the force-reapply interval configurable on component and object level.
So far, every object was force-reapplied to the cluster every 60 minutes (even if it seemed to be in sync according to digests). Now, that interval can be tweaked:
- on component level, by implementing the new interface or by including the convenience type `ReapplySpec` into the component's spec
- on object level (which has higher precedence than the value specified on component level), by setting the annotation `mycomponent-operator.mydomain.io/reapply-interval`.
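For the object-level override, a sketch of setting the annotation on a dependent object's manifest (the annotation prefix again depends on the operator's name; the value format is assumed to be a duration string):

```go
package reapply

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// setReapplyInterval asks the framework to force-reapply this particular
// object every 6 hours instead of the default interval.
func setReapplyInterval(obj *unstructured.Unstructured) {
	annotations := obj.GetAnnotations()
	if annotations == nil {
		annotations = map[string]string{}
	}
	annotations["mycomponent-operator.mydomain.io/reapply-interval"] = "6h"
	obj.SetAnnotations(annotations)
}
```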
v0.3.95 (Compare Source)

Fixes/Improvements

This release makes the backoff (used by the reconciler in non-error situations) a little less aggressive.

So far, about 50 almost immediate retries were possible in the first 5 seconds. Now that is reduced to 5 retries, and the medium backoff range (happening before the final `maxDelay` backoff kicks in) is relaxed a bit. To be precise: per backoff topic, the backoff is now
v0.3.94 (Compare Source)
v0.3.93 (Compare Source)
v0.3.92 (Compare Source)
Fixes
This release is a bugfix release. It fixes #279 and #281.
v0.3.91 (Compare Source)
Enhancements
This release enhances `HelmGenerator` and `KustomizeGenerator`, adding mechanisms to access files from the provided Helm chart resp. Kustomization directory.

For `HelmGenerator` that means that the `.Files` builtin is now supported, with some small differences to the original Helm behavior:
- files belonging to the chart structure itself (`Chart.yaml`, `templates/`, `crds/` and so on) cannot be retrieved through `.Files`
- since `HelmGenerator` currently does not honor the `.helmignore` file at all, files that would be excluded with regular Helm can be accessed through `.Files` with `HelmGenerator`.

For `KustomizeGenerator`, there are now three new template functions available:
- `readFile`: takes a path relative to the Kustomization directory and returns the raw content (as a `[]byte` slice)
- `existsFile`: also expects a path relative to the Kustomization directory and returns a boolean indicating whether the file exists or not
- `listFiles`: accepts a pattern and returns all matching file paths in the Kustomization directory as a string list.

v0.3.90 (Compare Source)
v0.3.89 (Compare Source)
v0.3.88 (Compare Source)
Fixes
With restricted privileges (that is, not having full `cluster-admin` rights), Kubernetes checks that (cluster) roles assigned through a (cluster) rolebinding do exist (probably it needs to read them in order to perform the escalation check). That is, (cluster) roles must always be created before any (cluster) rolebindings referencing them.

Previously, component-operator-runtime applied roles and bindings in one and the same wave, with the effect that a deadlock could happen if the binding happened to be applied first. This is now corrected, that is, (cluster) roles are created before (cluster) rolebindings.
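To make the ordering constraint concrete, here is a small illustrative pair of objects (not taken from the framework): the `RoleBinding` references the `Role` by name, so with restricted privileges the referenced `Role` must already exist when the binding is applied.

```go
package rbacexample

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleRoleAndBinding returns a Role and a RoleBinding that refers to it;
// applying the binding before the role can be rejected by the API server's
// escalation/reference checks when the applier is not cluster-admin.
func exampleRoleAndBinding(namespace string) (*rbacv1.Role, *rbacv1.RoleBinding) {
	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "example-reader", Namespace: namespace},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"configmaps"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}
	binding := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "example-reader", Namespace: namespace},
		Subjects: []rbacv1.Subject{{
			Kind:      rbacv1.ServiceAccountKind,
			Name:      "example",
			Namespace: namespace,
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "Role",
			Name:     role.Name,
		},
	}
	return role, binding
}
```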
v0.3.87 (Compare Source)
Incompatible changes
This release slightly changes the processing timeout logic.
Previously, the processing timeout was reset (meaning `status.processingSince` was set to the current time) and then started counting down whenever the component digest changed. This had the effect that a component starting in `Pending` state (for example because it is waiting for another dependent component to become ready) might time out before it ever got a chance to really process, with the consequence that, once it eventually started processing, its state was immediately `Error` with `Timeout` reason.

From now on, `status.processingSince` is not set immediately when the digest changes, but later, when the first apply iteration happens after the digest change.

v0.3.86 (Compare Source)
Fixes
This is a bugfix release; it fixes #265 and #267.
The logic around the `SsaOverride` update policy should now work reliably.

Incompatible changes

Previously, with `SsaOverride`, fields owned by field managers starting with `kubectl` or `helm` were reclaimed by component-operator-runtime and, if the intended manifest did not have an opinion on these fields, dropped. With this release, `helm` is removed from this list of specially treated field managers. This change is necessary to avoid clashes with flux's helm-controller, which uses the field manager `helm-controller`.
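For background, a hedged sketch of what a server-side apply with an explicit field manager looks like with controller-runtime (illustrative only; component-operator-runtime's internal apply logic is not shown here):

```go
package ssa

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// applyConfigMap applies a ConfigMap via server-side apply; fields written
// here are owned by the given field manager, which is what the SsaOverride
// reclaiming logic looks at.
func applyConfigMap(ctx context.Context, c client.Client, namespace, name, fieldManager string) error {
	configMap := &corev1.ConfigMap{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: name},
		Data:       map[string]string{"key": "value"},
	}
	return c.Patch(ctx, configMap, client.Apply, client.FieldOwner(fieldManager), client.ForceOwnership)
}
```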
v0.3.85 (Compare Source)

v0.3.84 (Compare Source)
Changes
This is a bugfix release; see SAP#260.
v0.3.83 (Compare Source)
This release is about revisiting/improving the timeout handling of components.
Improving the logic of the processing/timeout flow
It is well-known that every component has a processing timeout. Components can specify the timeout value by implementing the `component.TimeoutConfiguration` interface. Otherwise (or if a zero timeout is specified), it will be defaulted by the effective requeue interval, which defaults to 10 minutes.

Then, note that a component can be in a 'processing' or 'non-processing' state (which is not directly related to `status.state` being `Processing`). Here, 'processing' means that `status.processingSince` is non-initial. Now, if a component is reconciled, a certain component digest is calculated from the component's annotations, spec and references in the spec (see below for more details about references). Whenever this component digest differs from the current `status.processingDigest`, then `status.processingSince` is set to the current time, and `status.processingDigest` is set to the new component digest. Roughly speaking, that means a new timeout countdown is started.
In addition to 'processing', a component can be in a 'timeout' state; this is the case if the `status.processingSince` timestamp lies more than the specified timeout duration in the past. If a component gets into the 'timeout' state, its state (`status.state`) will be set to `Error` with condition reason `Timeout` (or, if the state remains `Error` or `Pending`, the condition reason is set to `Timeout`). That means a timeout can always be reliably detected by checking whether the condition reason equals
Timeout.A 'processing' component will be set to 'non-processing' (that is,
status.processingSinceis cleared) if the component becomes ready (in that case, in addition, one immediate requeue is triggered).Calculation of the component digest
At the beginning of the reconcilation of a component, a (component) digest is calculated that considers
metadata.annotationsof the componentmetadata.generationresp. the spec of the componentConfigMapReference,ConfigMapKeyReference,SecretReference,SecretKeyReference,Reference.Such references will be automatically loaded at the beginning of the reconcile iteration; for the builtin
ConfigMapandSecretreference types the logic is part of the framework, and for types implementing theinterface, the loading and digest logic is to be provided by the implementation. Besides being used in the timeout handling as
status.processingDigest, the component digestOnObjectOrComponentChange.Roughly speaking, the component digest should identify result of reconciling the component as exact as possible; that means: applying two components with identical digest should produce the same cluster state.
Incompatible changes
Besides the changes outlined above (which should not have a bad impact) this release contains the following incompatible changes:
status.statewas set toPendingwith reasonPending, respectively toDeletionPendingwith reasonDeletionPending; the reason values are changed toRetryingandDeletionRetrying, respectivelyRestartingwas added, that will be used withstatus.statebeingPending, if the processing state of a component is reset due to a component digest change.v0.3.82Compare Source
v0.3.81Compare Source
v0.3.80Compare Source
v0.3.79Compare Source
Enhancements
So far, the framework emitted really many component events, mostly if the component is in
Processingstate. That often exceeded the burst of the event broadcaster provided by controller-runtime (b=25, r=1/300, see https://github.com/kubernetes/client-go/blob/b46275ad754db4dd7695a48cd3ca673e0154dd9e/tools/record/events_cache.go#L43).We change that now. If there are identical subsequent events produced for a component, only the first one will be emitted within 5 minutes; after 5 minutes, again one instance of the throttled event may be sent, and so on.
v0.3.78Compare Source
Notable changes
status.ProcessingDigest) is now considering themetadata.generationof the component.v0.3.77Compare Source
Enhancements
New methods are added to
cluster.Client:In addition there is a new reconciler option
with
This allow to replace or modify the default component/hook client that would be used by the reconciler
v0.3.76Compare Source
v0.3.75Compare Source
Enhancements
Additional managed types
By its nature, component-operator-runtime tries to handle extension types (such as CRDs or API groups added through APIService federation), and instances of these types, in a smart way.
That is, if the component contains extension types, and also instances of these types, it tries to process things in the right order; that means, during apply the instances will be applied as late as possible (to ensure that controllers and webhooks are up); and during delete, the instances will be deleted as early as possible (to ensure that controllers and webhooks are still there). Furthermore, during deletion, foreign instances (that is, instances of these types that are not part of the component) block the deletion of the whole component.
Sometimes, components are implicitly adding extension types to the cluster; in the sense that the extension types are not explicitly part of the manifests, but added in the dark through controllers, once running. A typical example are crossplane providers.
This PR tries to add some relief in this situation. Components can now list 'additional managed types', by implementing the
TypeConfigurationinterface; these 'additional managed types' will be treated in the same way as extension types which are explicitly mentioned in the manifest.Improved APIService handling
Up to now,
APIServiceobjects were deployed along with the other regular (that was: unmanaged) objects of the current apply wave. As a consequence, if the federated API server was not yet ready,stale group versionerrors were returned by the discovery API of the main API server. To overcome this problem,APIServiceobjects receive a special handling now, in the sense that they are reconciled (in the apply wave) after all other regular objects, and before all managed instances. That means: within each apply order, objects are deployed to readiness in three sub stagesAPIServiceobjects)APIService)Within each of these sub groups, the static ordering defined in
sortObjectsForApply()is effective.More robust handling of external recreations happening during deletion
Previously there was a rare race condition while deleting objects (either during component delete or component apply):
The old logic was:
ScheduledForDeletionduring apply or if the whole component is being deleted); if successful (that is API server responds with 2xx) then the inventory status of the dependent object is set toDeleting.Now, if the object was recreated by someone right between 1. and 2. then the reconciler went stuck.
Note that this really does not happen usually (also because the critical period is very, very short).
To overcome this, we now check the deletion timestamp of the dependent object (if it still, or again, exists). If it has none, then we check the owner; if it is not us, then we give the object up (because apparently, someone else has just recreated it).
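A sketch of that check on the (possibly recreated) dependent object; the owner-id label key below is hypothetical and depends on the operator's configured name:

```go
package deletion

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// shouldGiveUp returns true if the dependent object found in the cluster has
// no deletion timestamp and is owned by someone else, i.e. it was apparently
// recreated externally and should be dropped from our inventory.
func shouldGiveUp(obj *unstructured.Unstructured, ownID string) bool {
	if obj == nil {
		return false
	}
	if obj.GetDeletionTimestamp() != nil {
		// Still terminating; keep waiting for the deletion to complete.
		return false
	}
	owner := obj.GetLabels()["mycomponent-operator.mydomain.io/owner-id"]
	return owner != ownID
}
```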
v0.3.74 (Compare Source)
Improvements
So far, there was no special logic to check the status of `CustomResourceDefinition` and `APIService` resources. That is, they were considered ready immediately, which was causing problems (for example, lookup errors when querying the discovery API immediately after creating an `APIService`, such as `... stale GroupVersion discovery ...`). To mitigate this, the default status analyzer now evaluates existing conditions (such as the `Available` condition of `APIService`).

v0.3.73 (Compare Source)
v0.3.72 (Compare Source)
Incompatible changes
Background: values passed to the built-in generators and transformers are of type `map[string]any`. Of course, templates are rendered with the `missingkey=zero` option. But still, if a key is missing in the values, the empty value of `any` (returned in this case) makes the go templating engine return `<no value>` in that case. Helm decided to override that by replacing all occurrences of the string `<no value>` in any template output. Starting with this PR we adopt the helm approach, and do the same.
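The underlying Go behavior can be reproduced in a few lines: with `missingkey=zero`, a missing key yields the zero value of `any` (that is, `nil`), which the template engine prints as `<no value>`; the Helm-style fix is a plain string replacement on the rendered output.

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

func main() {
	tmpl := template.Must(template.New("example").Option("missingkey=zero").Parse("value: {{ .missing }}"))

	var rendered strings.Builder
	if err := tmpl.Execute(&rendered, map[string]any{}); err != nil {
		panic(err)
	}
	fmt.Println(rendered.String()) // prints "value: <no value>"

	// Helm-style post-processing, as now adopted here as well.
	fmt.Println(strings.ReplaceAll(rendered.String(), "<no value>", "")) // prints "value: "
}
```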
v0.3.71 (Compare Source)
Incompatible changes
The deletion policy `Orphan` is slightly changed; previously `Orphan` had no effect if a dependent object became redundant during apply (that is, it was part of the component manifest before, and is no longer now). Now, if an object has an effective deletion policy of `Orphan`, then it will always be orphaned if

Enhancements

- two new deletion policies are added, `OrphanOnApply` and `OrphanOnDelete`, with the obvious meaning
- a new template function `apiResources` is added for `KustomizeGenerator`. It returns `[]*metav1.APIResourceList`, as returned by the discovery client's method `ServerGroupsAndResources`, see https://pkg.go.dev/k8s.io/[email protected]/discovery#ServerResourcesInterface.ServerGroupsAndResources.

v0.3.70 (Compare Source)
Changes
This release finalizes the reworking of the force-reapply logic started in https://github.com/SAP/component-operator-runtime/releases/tag/v0.3.62.
So far, a dependent object was applied to the cluster if (among other conditions) its `status.inventory[].lastAppliedAt` timestamp is set and is more than 60m in the past. The third condition is now changed to: the `status.inventory[].lastAppliedAt` timestamp is not set, or is set and is more than 60m in the past.

As a consequence, the component CRD now must contain the `status.inventory[].lastAppliedAt` field, that is, the consumers must have regenerated their CRD to reflect the current component-operator-runtime API types, as already stated in the release notes of v0.3.62.

v0.3.69 (Compare Source)
Enhancements
Starting with this release, the deletion of dependent objects will fail unless the existing value of the owner-id label of the dependent object matches the component that wants to delete it. If the owner-id label is missing, or the value does not match, the deletion will be rejected.
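A sketch of the kind of guard this implies (the label key is again hypothetical, matching the operator's configured name):

```go
package ownership

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// checkOwnerBeforeDelete rejects the deletion of a dependent object unless
// its owner-id label matches the component that wants to delete it.
func checkOwnerBeforeDelete(obj *unstructured.Unstructured, ownerID string) error {
	actual, ok := obj.GetLabels()["mycomponent-operator.mydomain.io/owner-id"]
	if !ok || actual != ownerID {
		return fmt.Errorf("refusing to delete %s/%s: owner-id %q does not match %q",
			obj.GetNamespace(), obj.GetName(), actual, ownerID)
	}
	return nil
}
```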
v0.3.68 (Compare Source)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR was generated by Mend Renovate. View the repository job log.