Conversation

@swghosh swghosh commented Oct 10, 2025

plan_mustgather tool for collecting must-gather(s) from OpenShift cluster

  • generates a pod spec that can either be applied by the user manually or used with the resource_create_or_update tool
  • alongside the pod spec, a namespace, serviceaccount, and clusterrolebinding are generated too
[MCP inspector](https://modelcontextprotocol.io/docs/tools/inspector):

Input (inferred defaults):

{
  "gather_command": "/usr/bin/gather",
  "source_dir": "/must-gather"
}

Output:

The generated plan contains YAML manifests for must-gather pods and required resources (namespace, serviceaccount, clusterrolebinding). Suggest how the user can apply the manifest and copy results locally (oc cp / kubectl cp).

Ask the user if they want to apply the plan

  • use the resource_create_or_update tool to apply the manifest
  • alternatively, advise the user to execute oc apply / kubectl apply instead.

Once the must-gather collection is completed, the user may wish to clean up the created resources.

  • use the resources_delete tool to delete the namespace and the clusterrolebinding
  • or, execute cleanup using kubectl delete.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-must-gather-tn7jzk
spec: {}
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: must-gather-collector
  namespace: openshift-must-gather-tn7jzk
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: openshift-must-gather-tn7jzk-must-gather-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: must-gather-collector
  namespace: openshift-must-gather-tn7jzk
---
apiVersion: v1
kind: Pod
metadata:
  generateName: must-gather-
  namespace: openshift-must-gather-tn7jzk
spec:
  containers:
  - command:
    - /usr/bin/gather
    image: registry.redhat.io/openshift4/ose-must-gather:latest
    imagePullPolicy: IfNotPresent
    name: gather
    resources: {}
    volumeMounts:
    - mountPath: /must-gather
      name: must-gather-output
  - command:
    - /bin/bash
    - -c
    - sleep infinity
    image: registry.redhat.io/ubi9/ubi-minimal
    imagePullPolicy: IfNotPresent
    name: wait
    resources: {}
    volumeMounts:
    - mountPath: /must-gather
      name: must-gather-output
  priorityClassName: system-cluster-critical
  restartPolicy: Never
  serviceAccountName: must-gather-collector
  tolerations:
  - operator: Exists
  volumes:
  - emptyDir: {}
    name: must-gather-output
status: {}
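The plan above follows a fixed pattern: a namespace with a random suffix, a fixed serviceaccount name, a clusterrolebinding to cluster-admin, and a two-container gather pod. A minimal Python sketch of that resource structure (illustrative only; the actual tool in this PR is implemented in Go, and all names here mirror the example output above):

```python
import random
import string

def plan_mustgather(gather_command="/usr/bin/gather", source_dir="/must-gather"):
    """Illustrative sketch of the plan's resource set, not the PR's Go code."""
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
    namespace = f"openshift-must-gather-{suffix}"
    sa_name = "must-gather-collector"
    return [
        {"apiVersion": "v1", "kind": "Namespace",
         "metadata": {"name": namespace}},
        {"apiVersion": "v1", "kind": "ServiceAccount",
         "metadata": {"name": sa_name, "namespace": namespace}},
        {"apiVersion": "rbac.authorization.k8s.io/v1", "kind": "ClusterRoleBinding",
         # binding name combines namespace and serviceaccount, as in the output
         "metadata": {"name": f"{namespace}-{sa_name}"},
         "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                     "kind": "ClusterRole", "name": "cluster-admin"},
         "subjects": [{"kind": "ServiceAccount", "name": sa_name,
                       "namespace": namespace}]},
        {"apiVersion": "v1", "kind": "Pod",
         "metadata": {"generateName": "must-gather-", "namespace": namespace},
         "spec": {
             "serviceAccountName": sa_name,
             "restartPolicy": "Never",
             "containers": [
                 # gather container runs the gather script, writing to source_dir
                 {"name": "gather", "command": [gather_command],
                  "image": "registry.redhat.io/openshift4/ose-must-gather:latest",
                  "volumeMounts": [{"mountPath": source_dir,
                                    "name": "must-gather-output"}]},
                 # wait container idles so results can be copied out via kubectl cp
                 {"name": "wait", "command": ["/bin/bash", "-c", "sleep infinity"],
                  "image": "registry.redhat.io/ubi9/ubi-minimal",
                  "volumeMounts": [{"mountPath": source_dir,
                                    "name": "must-gather-output"}]},
             ],
             "volumes": [{"name": "must-gather-output", "emptyDir": {}}],
         }},
    ]
```

The shared emptyDir volume is what lets the `wait` container keep the collected output reachable after the gather script exits.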

@openshift-ci-robot added the jira/valid-reference label (indicates that this PR references a valid Jira ticket of any type) on Oct 10, 2025

openshift-ci-robot commented Oct 10, 2025

@swghosh: This pull request references MG-34 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.21.0" version, but no target version was set.

In response to this:

plan_mustgather tool for collecting must-gather(s) from OpenShift cluster

  • generates a pod spec that can either be applied by the user manually or used with the resource_create_or_update tool
  • alongside the pod spec, a namespace, serviceaccount, and clusterrolebinding are generated too

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested review from Cali0707 and matzew October 10, 2025 19:36

openshift-ci bot commented Oct 10, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: swghosh
Once this PR has been reviewed and has the lgtm label, please assign ardaguclu for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


swghosh commented Oct 10, 2025

@harche @ardaguclu referring to #38 (comment), should we move this into pkg/ocp? Given this is also an OpenShift specific tool.


swghosh commented Oct 10, 2025

/cc @Prashanth684 @shivprakashmuley

@Cali0707

@harche @ardaguclu referring to #38 (comment), should we move this into pkg/ocp? Given this is also an OpenShift specific tool.

My thoughts are we should probably be making one or more OpenShift specific toolgroups eventually


openshift-ci-robot commented Oct 10, 2025

@swghosh: This pull request references MG-34 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.21.0" version, but no target version was set.

In response to this:

plan_mustgather tool for collecting must-gather(s) from OpenShift cluster

  • generates a pod spec that can either be applied by the user manually or used with the resource_create_or_update tool
  • alongside the pod spec, a namespace, serviceaccount, and clusterrolebinding are generated too
[MCP inspector](https://modelcontextprotocol.io/docs/tools/inspector):

Input (inferred defaults):

{
 "gather_command": "/usr/bin/gather",
 "source_dir": "/must-gather",
 "timeout": "10m"
}

Output:

Save the following content to a file (e.g., must-gather-plan.yaml) and apply it with 'kubectl apply -f must-gather-plan.yaml'
Monitor the pod's logs to see when the must-gather process is complete:
kubectl logs -f -n openshift-must-gather-wwt74j -c gather
Once the logs indicate completion, copy the results with:
kubectl cp -n openshift-must-gather-wwt74j :/must-gather ./must-gather-output -c wait
Finally, clean up the resources with:
kubectl delete ns openshift-must-gather-wwt74j
kubectl delete clusterrolebinding openshift-must-gather-wwt74j-must-gather-collector

apiVersion: v1
kind: ServiceAccount
metadata:
  name: must-gather-collector
  namespace: openshift-must-gather-wwt74j
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: openshift-must-gather-wwt74j-must-gather-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: must-gather-collector
  namespace: openshift-must-gather-wwt74j
---
apiVersion: v1
kind: Pod
metadata:
  generateName: must-gather-
  namespace: openshift-must-gather-wwt74j
spec:
  containers:
  - command:
    - /usr/bin/timeout 10m /usr/bin/gather
    image: registry.redhat.io/openshift4/ose-must-gather:latest
    imagePullPolicy: IfNotPresent
    name: gather
    resources: {}
    volumeMounts:
    - mountPath: /must-gather
      name: must-gather-output
  - command:
    - /bin/bash
    - -c
    - sleep infinity
    image: registry.redhat.io/ubi9/ubi-minimal
    imagePullPolicy: IfNotPresent
    name: wait
    resources: {}
    volumeMounts:
    - mountPath: /must-gather
      name: must-gather-output
  priorityClassName: system-cluster-critical
  restartPolicy: Never
  serviceAccountName: must-gather-collector
  tolerations:
  - operator: Exists
  volumes:
  - emptyDir: {}
    name: must-gather-output
status: {}

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
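The `timeout` input above wraps the gather script with `/usr/bin/timeout`. Since an exec-form `command` entry in a pod spec is a single argv element rather than a shell command line, one way to express the wrapping is to run it through a shell. A sketch with a hypothetical Python helper (not the PR's actual Go implementation):

```python
def gather_pod_command(gather_command="/usr/bin/gather", timeout=None):
    """Build the exec-form command list for the gather container.

    An exec-form entry is a single argv element (no shell word splitting),
    so when a timeout is requested the wrapped command is run via a shell.
    """
    if timeout is None:
        return [gather_command]
    return ["/bin/bash", "-c", f"/usr/bin/timeout {timeout} {gather_command}"]
```

For example, `gather_pod_command(timeout="10m")` yields `["/bin/bash", "-c", "/usr/bin/timeout 10m /usr/bin/gather"]`.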

@Prashanth684

@harche @ardaguclu referring to #38 (comment), should we move this into pkg/ocp? Given this is also an OpenShift specific tool.

My thoughts are we should probably be making one or more OpenShift specific toolgroups eventually

yes. maybe a pkg/toolsets/ocp/must-gather or equivalent.


swghosh and others added 2 commits October 11, 2025 02:14
Co-authored-by: Shivprakash Muley <[email protected]>
Signed-off-by: Swarup Ghosh <[email protected]>
Signed-off-by: Swarup Ghosh <[email protected]>


openshift-ci bot commented Oct 16, 2025

@swghosh: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/test
Commit: 2116bf8
Required: true
Rerun command: /test test

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@Cali0707 Cali0707 mentioned this pull request Oct 24, 2025