RFE-7406: Include the K8s ingress resource in namespace inspect collection #2004
Conversation
@oarribas: This pull request references RFE-7406 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the feature request to target the "4.19.0" version, but no target version was set. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
I think we can't merge anything without concrete metrics about how this change will impact log size and collection duration, especially after recent efforts to fix and lower log sizes due to some issues. I'd prefer to wait for the workloads team to accept the RFE first.
@ardaguclu bear in mind that this resource is used only in user application namespaces. This means that must-gather should not collect it, so the default collection should see no difference in size. The difference would appear only when the customer runs inspect on their own application namespaces (something we need quite often to provide good support). And in that case:
It's not clear the test is really failing because of the changes in this PR. /retest
Checking the failing job [1], I cannot identify where the error with the check is. It's not clear whether it's caused by the changes in this PR or by an issue with the check itself. /retest
Changed spaces to tabs.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now, please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now, please do so with /close. /lifecycle rotten
Walkthrough
Added the resource "ingresses.networking" to the set of GroupResources returned by the namespace resource collector used during admin namespace inspection; no control flow or error handling changes were made.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~7 minutes
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting
📒 Files selected for processing (1)
pkg/cli/admin/inspect/namespace.go(1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**
⚙️ CodeRabbit configuration file
-Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity.
Files:
pkg/cli/admin/inspect/namespace.go
```go
{Resource: "poddisruptionbudgets"},
{Resource: "secrets"},
{Resource: "servicemonitors"},
{Resource: "ingresses.networking"},
```
🧩 Analysis chain
Verify the resource format and collection.
The format {Resource: "ingresses.networking"} may not correctly specify the Kubernetes Ingress resource from the networking.k8s.io API group. Standard schema.GroupResource usage would be {Group: "networking.k8s.io", Resource: "ingresses"}. While other entries also omit the Group field, it's unclear if the dot-notation in the Resource field will be properly resolved by the code consuming this list.
Run the following script to verify how this resource is consumed and whether ingresses are actually collected:
Additionally, please test the inspect command with a namespace containing Ingress resources to confirm they are collected in the output.
🏁 Script executed:
```shell
#!/bin/bash
# Find where namespaceResourcesToCollect is called and how the GroupResources are processed
rg -n -C5 'namespaceResourcesToCollect'
# Search for other usage patterns of schema.GroupResource in the codebase
rg -n -C3 'schema\.GroupResource.*Group:.*Resource:'
# Check if there's any parsing logic for dot-notation in resource names
rg -n -C3 'Resource.*\.'
```
Length of output: 239324
Fix resource specification for Kubernetes Ingress resource.
The format {Resource: "ingresses.networking"} is incomplete. The Kubernetes Ingress resource belongs to the networking.k8s.io API group (full group), not just networking.
Change line 34 to one of these correct formats:
```go
{Group: "networking.k8s.io", Resource: "ingresses"}
```
Or, if the resource can be discovered by name alone:
```go
{Resource: "ingresses"}
```
The current dot-notation ingresses.networking may not resolve correctly during resource discovery and collection. Testing the inspect command with a namespace containing Ingress resources is necessary to confirm the fix works.
🤖 Prompt for AI Agents
In pkg/cli/admin/inspect/namespace.go around line 34 the resource spec uses an
incorrect dot-notation "{Resource: \"ingresses.networking\"}"; replace it with a
proper Kubernetes resource spec such as either "{Group: \"networking.k8s.io\",
Resource: \"ingresses\"}" or simply "{Resource: \"ingresses\"}" so the Ingress
API group is correctly identified during discovery; update the line accordingly
and run the inspect command against a namespace with Ingresses to verify
collection succeeds.
It sounds like the ^^ comment is valid. Is there any particular reason for using ingresses.networking?
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Find where namespaceResourcesToCollect is called and how it's used
echo "=== Usage of namespaceResourcesToCollect ==="
rg -n "namespaceResourcesToCollect" --type go -A10
echo -e "\n=== How GroupResource is processed in inspect code ==="
# Look at the inspect package to understand how these resources are processed
rg -n "GroupResource" pkg/cli/admin/inspect/ --type go -B2 -A5
echo -e "\n=== Check for dot-notation patterns in resource names ==="
# Search for other resources using dot notation
rg -n 'Resource:\s*"[^"]+\.[^"]+"' --type go
echo -e "\n=== Look for kubectl/oc resource name conventions ==="
# Search for comments or documentation about resource naming
rg -n "networking" pkg/cli/admin/inspect/ --type go -B2 -A2
```
Length of output: 8012
🏁 Script executed:
```shell
#!/bin/bash
# Check if there are other files that use dot-notation for resource specifications
echo "=== Search for dot-notation in resource specifications across codebase ==="
rg -n '"[a-z]+\.[a-z]+"' --type go | grep -i "resource" | head -20
echo -e "\n=== Look for examples in test files ==="
rg -n "ingresses\.networking" --type go
echo -e "\n=== Check how 'all' special resource works and other patterns ==="
rg -n 'Resource.*"all"' pkg/cli/admin/inspect/ --type go -B3 -A3
echo -e "\n=== Search for networking.k8s.io references ==="
rg -n "networking\.k8s\.io" --type go | head -15
echo -e "\n=== Look for examples of servicemonitors or other CRDs ==="
rg -n "servicemonitors" pkg/cli/admin/inspect/ --type go -C3
```
Length of output: 1632
🏁 Script executed:
```shell
#!/bin/bash
# Look for test files that might show proper GroupResource usage
echo "=== Search for test files with GroupResource examples ==="
fd -e go test | xargs rg -l "GroupResource" | head -5
echo -e "\n=== Check test examples ==="
rg -n "schema\.GroupResource" --type go -A2 | head -40
echo -e "\n=== Look for how networkpolicies are handled (they're also in networking.k8s.io) ==="
rg -n "networkpolicies" --type go -C2
echo -e "\n=== Check if there are any imports or usages of networking API ==="
rg -n "networking" --type go | grep -i "import\|k8s.io\|api" | head -20
```
Length of output: 3626
@ardaguclu Great question! After investigating the codebase, I found that {Resource: "ingresses.networking"} is inconsistent with the established pattern in this file.
Key Finding
Look at line 29 in the same function:
```go
{Resource: "networkpolicies"},
```
NetworkPolicy is also from the networking.k8s.io API group, yet it's specified as just {Resource: "networkpolicies"} without any group specification. This proves that simple resource names work fine for networking.k8s.io resources through kubectl/oc's resource discovery mechanism.
How the Code Works
In pkg/cli/admin/inspect/util.go:90-94, the groupResourceToInfos function constructs the dot-notation only when the Group field is populated:
```go
resourceString := ref.Resource
if len(ref.Group) > 0 {
	resourceString = fmt.Sprintf("%s.%s", resourceString, ref.Group)
}
```
Recommendation
To match the existing pattern and be consistent with networkpolicies, change line 34 to:
```go
{Resource: "ingresses"},
```
Alternatively, if explicit group specification is preferred:
```go
{Group: "networking.k8s.io", Resource: "ingresses"},
```
The current {Resource: "ingresses.networking"} hardcodes what should be constructed by the code, breaking the established pattern where all other entries use simple resource names.
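To make the discovery behavior above concrete, here is a minimal, self-contained Go sketch of the string-building logic quoted from pkg/cli/admin/inspect/util.go. The GroupResource struct and the buildResourceString helper are local stand-ins (the real code uses k8s.io/apimachinery's schema.GroupResource), shown only to illustrate how the qualified name is derived.

```go
package main

import "fmt"

// GroupResource is a local stand-in for k8s.io/apimachinery's
// schema.GroupResource, used here only to illustrate the
// string-building logic quoted from pkg/cli/admin/inspect/util.go.
type GroupResource struct {
	Group    string
	Resource string
}

// buildResourceString mirrors the quoted logic: the "resource.group"
// form is constructed only when the Group field is populated explicitly.
func buildResourceString(ref GroupResource) string {
	resourceString := ref.Resource
	if len(ref.Group) > 0 {
		resourceString = fmt.Sprintf("%s.%s", resourceString, ref.Group)
	}
	return resourceString
}

func main() {
	// With an explicit group, discovery sees the fully qualified name.
	fmt.Println(buildResourceString(GroupResource{Group: "networking.k8s.io", Resource: "ingresses"}))
	// With a bare resource name, group resolution is left to discovery.
	fmt.Println(buildResourceString(GroupResource{Resource: "networkpolicies"}))
}
```

Given this, {Group: "networking.k8s.io", Resource: "ingresses"} would make the code itself produce ingresses.networking.k8s.io, rather than hardcoding a partially qualified name in the Resource field.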
🧬 Code Graph Analysis Results
File: pkg/cli/admin/inspect/namespace.go, lines 19-36:
```go
func namespaceResourcesToCollect() []schema.GroupResource {
	return []schema.GroupResource{
		// this is actually a group which collects most useful things
		{Resource: "all"},
		{Resource: "configmaps"},
		{Resource: "egressfirewalls"},
		{Resource: "egressqoses"},
		{Resource: "events"},
		{Resource: "endpoints"},
		{Resource: "endpointslices"},
		{Resource: "networkpolicies"},
		{Resource: "persistentvolumeclaims"},
		{Resource: "poddisruptionbudgets"},
		{Resource: "secrets"},
		{Resource: "servicemonitors"},
		{Resource: "userdefinednetworks"},
		{Resource: "ingresses.networking"},
	}
}
```
The API group must be explicitly specified because there is more than one ingress resource in OpenShift clusters, and in some old versions the priorities were different.
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
@oarribas maybe you can just use the form with group and resource as suggested by the bot.
@ardaguclu, as @palonsoro commented, it was added because there is more than one ingress resource with different APIs.
On the other hand, I haven't seen the form with the group used anywhere else, so I'm not sure the suggested one is the correct way. If it is, I can change it.
Could you please try ingresses.networking.k8s.io to see whether it works or not?
Checking the GroupResource type, it looks like the right way to go is the first one suggested by the bot, as GroupResource has the Group field for the API group and the Resource field for the resource name.
@oarribas maybe you didn't see it elsewhere because it was not required for the API groups collected so far. But if you do it like the bot suggests (with group and resource) and it fails, that would be a bug in itself, because the GroupResource type comes from upstream Kubernetes and should be usable alongside oc in the way upstream expects.
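To illustrate why the explicit group matters here, the sketch below shows two distinct "ingresses" resources that coexist in OpenShift clusters (ingresses in networking.k8s.io and ingresses in config.openshift.io); only a populated Group field keeps them unambiguous. The GroupResource type and its String() method are local stand-ins written to mirror the upstream schema.GroupResource rendering, not the actual upstream code.

```go
package main

import "fmt"

// GroupResource is a local stand-in mirroring the upstream
// schema.GroupResource type and its String() rendering.
type GroupResource struct {
	Group    string
	Resource string
}

// String matches upstream behavior: "resource.group" when a group is
// set, otherwise just the bare resource name.
func (gr GroupResource) String() string {
	if len(gr.Group) == 0 {
		return gr.Resource
	}
	return gr.Resource + "." + gr.Group
}

func main() {
	// Two distinct "ingresses" resources coexist in OpenShift clusters;
	// only the explicit group keeps them apart during discovery.
	k8sIngress := GroupResource{Group: "networking.k8s.io", Resource: "ingresses"}
	ocpIngress := GroupResource{Group: "config.openshift.io", Resource: "ingresses"}
	fmt.Println(k8sIngress) // ingresses.networking.k8s.io
	fmt.Println(ocpIngress) // ingresses.config.openshift.io
}
```

With the Group field set, ambiguity never reaches the discovery priority logic, which is exactly the concern raised about older cluster versions.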
Force-pushed from fe9e917 to 785bc65.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: oarribas. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@oarribas: The following tests failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Including the K8s ingress.networking.k8s.io resource in namespace inspect collections could help to identify if the K8s ingress is in use. It shouldn't increase the size of the inspect or must-gather a lot, as the resource shouldn't be too big.