NE-1969: Set Degraded=True if unmanaged Gateway API CRDs exist #1205

Conversation
@alebedev87: This pull request references NE-1969 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
ef31360 to c80f70c (Compare)
Rebased to catch up with the controllerName change.
@alebedev87: This pull request references NE-1969 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/retest

/assign

/test e2e-aws-operator-techpreview
controllerName = "gatewayapi_controller"
experimentalGatewayAPIGroupName = "gateway.networking.x-k8s.io"
crdAPIGroupIndexFieldName = "crdAPIGroup"
gatewayCRDAPIGroupIndexFieldValue = "gateway"
A one-word indexer feels too expansive. Why not at least "gateway.networking"?
The controller-runtime documentation actually says to use the field name (in our case, that would be "spec.group"), and the index value should be the field value (for example, "gateway.networking.x-k8s.io"). But we have written a lot of code that doesn't follow the prescribed idiom, and it seems to work fine, so I suspect that the index key can be any string that is unique among indexes within the manager.
https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client#FieldIndexer
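Miciah's point that the index key can likely be any string unique within the manager can be illustrated with a dependency-free sketch. Conceptually, a field index is a map from an arbitrary index key to (indexed value → object names); the `fieldIndex` type below is invented for illustration and is not controller-runtime's actual implementation:

```go
package main

import "fmt"

// fieldIndex is a hypothetical model of a field index: an arbitrary
// index key maps to (indexed value -> object names).
type fieldIndex map[string]map[string][]string

// add records objName under the given index key and value.
func (fi fieldIndex) add(key, value, objName string) {
	if fi[key] == nil {
		fi[key] = map[string][]string{}
	}
	fi[key][value] = append(fi[key][value], objName)
}

// list returns the object names indexed under key with the given value.
func (fi fieldIndex) list(key, value string) []string {
	return fi[key][value]
}

func main() {
	idx := fieldIndex{}
	// Prescribed idiom: the key is the field name, the value is the field value.
	idx.add("spec.group", "gateway.networking.x-k8s.io", "listenersets.gateway.networking.x-k8s.io")
	// Equally functional: an arbitrary unique key with a synthetic value.
	idx.add("crdAPIGroup", "gateway", "listenersets.gateway.networking.x-k8s.io")

	fmt.Println(idx.list("spec.group", "gateway.networking.x-k8s.io"))
	fmt.Println(idx.list("crdAPIGroup", "gateway"))
}
```

Both lookups return the same CRD name; as long as the key is unique among indexes within the manager, either style works, which matches the observed behavior of the non-idiomatic code.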
Makes sense to only be looking at spec.group.
controllerName = "gatewayapi_controller"
experimentalGatewayAPIGroupName = "gateway.networking.x-k8s.io"
crdAPIGroupIndexFieldName = "crdAPIGroup"
Aligning these two variable names would be helpful.
Suggested change:
- crdAPIGroupIndexFieldName = "crdAPIGroup"
+ gatewayCRDAPIGroupIndexFieldName = "crdAPIGroup"
The indexer indexes by the crd group, which can be gateway or not gateway. So, crdAPIGroupIndexFieldName reflects this purpose.
Changed the name to gatewayAPICRDIndexFieldName, which will be used in the index with the unmanaged value.
expectStatusUpdate: []client.Object{},
expectStartCtrl: true,
},
}
Maybe add another test for experimental gateway CRDs.
I have a test case called "unmanaged gateway API CRDs" just above this one; it has a CRD ListenerSet and verifies that the status is updated.
That ListenerSet CRD already exists in existingObjects. I'm asking for a test case where it tries to add an experimental CRD and shows that it makes the operator Degraded=true. This could be in pkg/operator/controller/status/controller_test.go.
> I'm asking for test case where it tries to add an experimental CRD and shows that it makes the operator Degraded=true.
This is tested in the e2e test I added, because it involves 2 different controllers (gatewayapi and status).
// Index Gateway API CRDs by the "spec.group" field
// to enable efficient filtering during list operations.
// Currently, only Gateway API groups have a dedicated index field.
if err := mgr.GetFieldIndexer().IndexField(
Could you create two indexers, one for managed and one for unmanaged CRDs? See https://github.com/openshift/cluster-ingress-operator/pull/1205/files?diff=split&w=1#r2019309675
What would be the purpose of having two indexers?
Ah, I see your comment on listUnmanagedGatewayAPICRDs now.
Wouldn't it make more sense to do this?
client.IndexerFunc(func(o client.Object) []string {
    if isGatewayAPICRD(o) {
        return []string{"managed"}
    }
    return []string{"unmanaged"}
})

and then

r.cache.List(ctx, gatewayAPICRDs, client.MatchingFields{crdAPIGroupIndexFieldName: "managed"})

or

r.cache.List(ctx, gatewayAPICRDs, client.MatchingFields{crdAPIGroupIndexFieldName: "unmanaged"})
I don't quite know why we need this indexer, which picks up managed and unmanaged CRDs together, but there is code around https://github.com/openshift/cluster-ingress-operator/pull/1205/files?diff=split&w=1#r2019309675 that gets all the CRDs from this index and then iterates through to get the unmanaged ones. One indexer for unmanaged CRDs should be enough.
No, I guess my last comment doesn't make sense. The index is used in order to get all Gateway API CRDs, be they managed or be they unmanaged, and then the loop in listUnmanagedGatewayAPICRDs checks each of those Gateway API CRDs to determine whether any unmanaged one was indexed. This makes sense to me: The cluster could have many CRDs, and we only index the ~half dozen (at most) that belong to Gateway API. It's probably cheaper to iterate over a half dozen CRDs than it would be to maintain another index to try to optimize the loop away.
I think that 2 indexes do not make sense, as the "managed" index would be of no use in this case.
> One indexer for unmanaged CRDs should be enough.
This may be another approach to the problem indeed. I was thinking more of the field name which matches the CRD field spec.group (similar to what Miciah described here). I didn't respect the naming convention though.
> One indexer for unmanaged CRDs should be enough.
My understanding from our call today is that we agreed on this approach: using one index, which has only CRDs that are both in any of the API groups that belong to Gateway API and not managed by cluster-ingress-operator.
Changed the index to have a field for all unmanaged gateway API CRDs.
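The agreed approach (one index containing only the Gateway API CRDs that the operator does not manage) can be sketched without controller-runtime dependencies. The `crd` struct and its `Managed` flag below are hypothetical stand-ins; in the real controller the managed/unmanaged decision comes from the operator's own bookkeeping, not a boolean field:

```go
package main

import (
	"fmt"
	"strings"
)

// crd is a minimal, hypothetical stand-in for a CustomResourceDefinition;
// only the fields relevant to the indexing decision are modeled.
type crd struct {
	Name    string
	Group   string
	Managed bool // whether cluster-ingress-operator manages this CRD (stand-in)
}

// unmanagedGatewayAPICRDIndexFunc sketches the agreed approach: return an
// index value only for CRDs that are in a Gateway API group AND not managed
// by the operator; everything else stays out of the index entirely.
func unmanagedGatewayAPICRDIndexFunc(c crd) []string {
	isGatewayAPI := c.Group == "gateway.networking.k8s.io" ||
		c.Group == "gateway.networking.x-k8s.io"
	if isGatewayAPI && !c.Managed {
		return []string{"unmanaged"}
	}
	return nil // not indexed
}

func main() {
	crds := []crd{
		{Name: "gateways.gateway.networking.k8s.io", Group: "gateway.networking.k8s.io", Managed: true},
		{Name: "listenersets.gateway.networking.x-k8s.io", Group: "gateway.networking.x-k8s.io", Managed: false},
		{Name: "routes.route.openshift.io", Group: "route.openshift.io", Managed: false},
	}
	var unmanaged []string
	for _, c := range crds {
		if len(unmanagedGatewayAPICRDIndexFunc(c)) > 0 {
			unmanaged = append(unmanaged, c.Name)
		}
	}
	fmt.Println(strings.Join(unmanaged, ", "))
}
```

With this shape, a single cache List with the "unmanaged" key returns exactly the CRDs that should drive Degraded=True, and no iteration over managed entries is needed.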
Before I go any further, I have some questions about the departure from the design pattern we've used in the past for setting the ingress operator Degraded condition. It is the ingress operator that sets its own conditions from its own probes, as in https://github.com/openshift/cluster-ingress-operator/blob/master/pkg/operator/controller/ingress/status.go#L54, merging conditions if needed. The design in this PR gets a status from the gateway api controller, which marks the ingress controller extension. Then the ingress controller consults its extension. Is there a requirement to use the extension that I'm not really aware of?
if len(current.Status.Extension.Raw) > 0 /*to avoid "unexpected end of JSON input" error*/ {
    if err := json.Unmarshal(current.Status.Extension.Raw, currentExtension); err != nil {
        return false, fmt.Errorf("failed to unmarshal status extension of cluster operator %q: %w", current.Name, err)
    }
}
if equality.Semantic.DeepEqual(*currentExtension, *desiredExtension) {
    return false, nil
}

updated := current.DeepCopy()
rawDesiredExtension, err := json.Marshal(desiredExtension)
if err != nil {
    return false, fmt.Errorf("failed to marshal desired status extension of cluster operator %q: %w", updated.Name, err)
}
updated.Status.Extension.Raw = rawDesiredExtension
if err := r.client.Status().Update(ctx, updated); err != nil {
This stomps any unrecognized data in status.extension. If another controller needed to store some state in status.extension, this logic would be problematic. Could you add a comment to that effect (possibly a "to do" comment to preserve unrecognized data)?
If you plan to make any other changes, please do check if there's data in status.extension before overwriting it. Do you know what happens if there are several unmanaged CRDs, then one is deleted? Will the operator flip between Degraded true/false in that case?
Otherwise, please add a "todo" comment as @Miciah suggested.
I updated the logic to update only the UnmanagedGatewayAPICRDNames field.
// updateClusterOperatorStatusExtension updates a cluster operator's status extension.
// Returns a boolean indicating whether the cluster operator was updated, and an error value.
func (r *reconciler) updateClusterOperatorStatusExtension(ctx context.Context, current *configv1.ClusterOperator, desiredExtension *status.IngressOperatorStatusExtension) (bool, error) {
How does the extension get returned to normal?
We discussed this on our call today. There's some subtlety around the use of marshaling, which omits empty fields. This subtlety warrants a comment to explain why this logic is correct.
I added a dedicated test to showcase what happens when the unmanaged GWAPI CRDs are removed.
I agree that a comment would be great.
Comment added.
Most of my recent comments are about minor things: #1205 (comment) (sorting the list of CRDs in the status message) and #1205 (comment) (logic for joining conditions).
8e923b6 to 594f332 (Compare)
/retest

2 similar comments

/retest

/retest
594f332 to 654b681 (Compare)
Rebased to fix the conflict after the merge of #1213.
// - Messages are concatenated.
// If the new condition has a higher-priority status, it overrides the current one.
// Status priority (highest to lowest): True, Unknown, False, empty.
func joinConditions(currCond, newCond configv1.ClusterOperatorStatusCondition, condType string) configv1.ClusterOperatorStatusCondition {
nit: condType is unused.
Done.
currCond.Reason += "And" + newCond.Reason
currCond.Message += newCond.Message
Here and elsewhere:

Suggested change:
- currCond.Reason += "And" + newCond.Reason
- currCond.Message += newCond.Message
+ currCond.Reason += " And " + newCond.Reason
+ currCond.Message += " " + newCond.Message
I think that "And" is better not separated by whitespace in the Reason. Another example of a Reason with the same concatenation (w/o whitespace) is from Miciah's cluster-dns-operator example. The messages are separated by dots, so there will be no confusion about where a new one starts.
I fixed this a little differently in my PR https://github.com/openshift/cluster-dns-operator/pull/373/files#diff-81f08fdfab85d1173ed67b51c2d429a60efbfb17e3dc6ef01ed0afe2011023f2R488
I fixed it because it looked like gibberish in the CI events graph.
Ack. I still prefer to stay with a Reason which does not have whitespace, because I believe CamelCase concatenation is Kubernetes-native for long reasons (e.g., PodReadyToStartContainers, CrashLoopBackOff, etc.).
I prefer the way we did it for computing the Progressing condition: https://github.com/openshift/cluster-ingress-operator/blob/master/pkg/operator/controller/status/controller.go#L574-L576 because it is easier to read and review.
    currCond.Reason += "And" + newCond.Reason
    currCond.Message += newCond.Message
} else if currCond.Status == "" {
    // Degraded=False overrides empty status.
nit: I'm curious about why Degraded=False doesn't override Degraded=Unknown in this case.
I think you misunderstood? Or I did? A unit test for this function would help.
Status == "" is not the same as Status == "Unknown". I think you need to account for Status == "Unknown" in this switch case.
Also, if it is currently Degraded=True for some other reason, then in this case it is Degraded=False for CRD issues, does this update anything?
> Status == "" is not the same as Status == "Unknown". I think you need to account for Status == "Unknown" in this switch case.
If currCond.Status==Unknown then newCond.Status==False should not change the status. We should stay unknown as Miciah proposed in the link I posted above. This case does exactly this: it leaves the currCond.Status untouched if newCond.Status is False (the only exception is when currCond.Status is empty).
> Also, if it is currently Degraded=True for some other reason, then in this case it is Degraded=False for CRD issues, does this update anything?
Same thing. If currCond.Status==True this case won't change anything because True + False == True.
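The priority rules debated above (True > Unknown > False > empty, with two True statuses concatenating Reason and Message) can be captured in a small, self-contained sketch. This is not the PR's exact joinConditions code, just the semantics as described in the thread, with a hypothetical trimmed-down condition type:

```go
package main

import "fmt"

// condition is a hypothetical, trimmed-down stand-in for
// configv1.ClusterOperatorStatusCondition.
type condition struct {
	Status  string // "True", "False", "Unknown", or ""
	Reason  string
	Message string
}

// priority encodes the ordering discussed above: True > Unknown > False > empty.
var priority = map[string]int{"True": 3, "Unknown": 2, "False": 1, "": 0}

// joinConditions merges two conditions of the same type: two True statuses
// concatenate Reason (with "And") and Message; otherwise the
// higher-priority status wins outright, so False never displaces Unknown
// or True, but does override an empty status.
func joinConditions(curr, next condition) condition {
	switch {
	case next.Status == "True" && curr.Status == "True":
		curr.Reason += "And" + next.Reason
		curr.Message += next.Message
	case priority[next.Status] > priority[curr.Status]:
		curr = next
	}
	return curr
}

func main() {
	got := joinConditions(
		condition{Status: "True", Reason: "IngressDegraded", Message: `The "default" ingress controller reports Degraded=True: dummy: dummy.`},
		condition{Status: "True", Reason: "GatewayAPICRDsDegraded", Message: "Unmanaged Gateway API CRDs found: listenersets.gateway.networking.x-k8s.io."},
	)
	fmt.Println(got.Reason) // IngressDegradedAndGatewayAPICRDsDegraded
	// Unknown is not displaced by False, matching the discussion above.
	fmt.Println(joinConditions(condition{Status: "Unknown"}, condition{Status: "False"}).Status) // Unknown
}
```

Because the callers recompute the operator's Degraded condition from scratch on every reconcile, the "And" concatenation happens at most once per pass and reasons cannot accumulate across reconciles.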
Overall, I still think this is just too much of a departure from the way we normally do this. At some point, can you please go to https://github.com/openshift/cluster-ingress-operator/blob/master/pkg/operator/controller/ingress/status.go#L550-L552 and change it so it says:
// computeIngressDegradedCondition computes the ingresscontroller's "Degraded"
// status condition, which aggregates other status conditions that can indicate
// a degraded state EXCEPT for degraded conditions dealing with CRDs, in which case
// we followed a completely different pattern.
Also, can you please add a unit test to Test_computeIngressDegradedCondition?
Also, the different reasons for a degraded status are listed in https://github.com/openshift/cluster-ingress-operator/blob/master/pkg/operator/controller/ingress/controller.go#L59-L62
so you should try to define a reason this way.
> Also, can you please add a unit test to Test_computeIngressDegradedCondition
Yes, we can follow up on this one. There is a lot of code not covered by unit tests, including the status controller's degraded condition computation.
> Also, the different reasons for a degraded status are listed in https://github.com/openshift/cluster-ingress-operator/blob/master/pkg/operator/controller/ingress/controller.go#L59-L62 so you should try to define a reason this way.
I'm not sure what exactly we should take as inspiration from the ingresscontroller's degraded condition for the cluster operator's degraded condition. Can you please describe in more detail?
> Overall, I still think this is just too much of a departure from the way we normally do this. At some point, can you please go to https://github.com/openshift/cluster-ingress-operator/blob/master/pkg/operator/controller/ingress/status.go#L550-L552 and change it so it says:
>
> // computeIngressDegradedCondition computes the ingresscontroller's "Degraded"
> // status condition, which aggregates other status conditions that can indicate
> // a degraded state EXCEPT for degraded conditions dealing with CRDs, in which case
> // we followed a completely different pattern.
Are you sure you are not mixing up the Degraded status for the ingresscontrollers and the Degraded status for the ingress cluster operator? The comment you are suggesting is for the former, which doesn't have anything to do with the Gateway API CRDs. It's the ingress cluster operator's Degraded status which uses both the default ingresscontroller's Degraded status and the Gateway API CRDs.
switch newCond.Status {
case configv1.ConditionTrue:
    if currCond.Status == configv1.ConditionTrue {
        currCond.Reason += "And" + newCond.Reason
If this continues to be Degraded=True for Gateway API CRDs, this doesn't prevent a string of duplicated statuses does it? E.g. Reason="GatewayAPICRDsDegraded And GatewayAPICRDsDegraded And GatewayAPICRDsDegraded"
computeOperatorDegradedCondition computes the status from scratch; the current condition from the cluster is not appended with new reasons/messages.
Can you make a small unit test for this function that demonstrates that it wouldn't keep appending Reasons every time a reconcile is made and the Degraded=True condition continues?
Well, a unit test for Reconcile won't be small; I would need to mock the client and some other things, probably.
Can you please explain to me how exactly the Degraded condition can be appended to the existing one, knowing that computeOperatorDegradedCondition does not take co.Status.Conditions as a parameter and that mergeConditions overrides the condition with the new value?
It's not that the conditions are being appended; it's that the reasons and messages are being appended. Reviewing this again, I see that the condition is blank before the two conditions are joined, and it seems this couldn't happen after all.
I wouldn't want to see more and more degraded reasons and messages being concatenated in the future, two is bad enough in my opinion. There is no way to enforce this though. I'll just go on record as not being very impressed with this framework, but I can't override @Miciah's decision so I'll let it go.
expectCondition: configv1.ClusterOperatorStatusCondition{
    Type: configv1.OperatorDegraded,
    Status: configv1.ConditionTrue,
    Reason: "IngressDegradedAndGatewayAPICRDsDegraded",
See https://github.com/openshift/cluster-ingress-operator/pull/1205/files#r2042909467. I think the reasons should be separated so they can be examined as tokens later, to ensure there are no duplicates added.
No duplicates will be added because the existing degraded condition is not taken into account. The degraded condition is recomputed from scratch every time.
Set the ingress cluster operator’s `Degraded` status to `True` if unmanaged Gateway API CRDs exist on the cluster. An unmanaged Gateway API CRD is one with "gateway.networking.k8s.io" or "gateway.networking.x-k8s.io" in its `spec.group` field, and not managed by the gatewayapi controller. This commit uses the cluster operator’s `status.extension` field to store the names of unmanaged CRDs. The gatewayapi controller writes to this field, while the status controller reads it and updates the ingress cluster operator’s `Degraded` status accordingly.
654b681 to 4eb3a46 (Compare)
Type: configv1.OperatorDegraded,
Status: configv1.ConditionTrue,
Reason: "IngressDegraded",
Message: `The "default" ingress controller reports Degraded=True: dummy: dummy.`,
Is this the revised format?
Suggested change:
- Message: `The "default" ingress controller reports Degraded=True: dummy: dummy.`,
+ Message: `The "default" ingress controller reports Degraded=True. dummy: dummy.`,
I didn't change the condition message for this part, so ":" is what we use right now. I don't want to change this behavior because it's unrelated to this PR.
/retest
This test is not often seen failing, but it is still something to keep an eye on. However, it is unlikely to be related to this PR.

/retest
@alebedev87: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
// - Reason becomes: "MultipleComponents[Not]" + condType.
// - Messages are concatenated.
nit: Did you drop the MultipleComponents prefix idea?
Type: configv1.OperatorDegraded,
Status: configv1.ConditionTrue,
Reason: "IngressDegradedAndGatewayAPICRDsDegraded",
Message: `The "default" ingress controller reports Degraded=True: dummy: dummy.Unmanaged Gateway API CRDs found: listenersets.gateway.networking.x-k8s.io.`,
Ok, this is the revised format. The concatenated message is
dummy.Unmanaged Gateway API CRDs found: listenersets.gateway.networking.x-k8s.io.
And a new degraded message will look something like this if the CRDs also put it in a degraded state:
"One or more other status conditions indicate a degraded state: DeploymentReplicasMinAvailable=False (DeploymentMinimumReplicasNotMet: 0/2 of replicas are available, max unavailable is 1).Unmanaged Gateway API CRDs found: listenersets.gateway.networking.x-k8s.io."
I agree, the readability of the degraded message can be improved.
I have a few misgivings about this which can be addressed in a followup.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: candita. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
[ART PR BUILD NOTIFIER] Distgit: ose-cluster-ingress-operator
Set the ingress cluster operator's `Degraded` status to `true` if unmanaged Gateway API CRDs exist on the cluster. An unmanaged Gateway API CRD is one with "gateway.networking.k8s.io" or "gateway.networking.x-k8s.io" in its `spec.group` field, and not managed by the gatewayapi controller. This PR uses the cluster operator's `status.extension` field to store the names of unmanaged CRDs. The gatewayapi controller writes to this field, while the status controller reads it and updates the ingress cluster operator's `Degraded` status accordingly.