
Conversation

@JoaoPedroPP
Contributor

@JoaoPedroPP commented Jan 26, 2026

  • Deploy correct secret database for kite instance in each cluster.
  • Deploy Kite in kflux-rhel-p01

Jira KFLUXUI-988
Jira KFLUXUI-901

@openshift-ci

openshift-ci bot commented Jan 26, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@github-actions
Contributor

🤖 Gemini AI Assistant Available

Hi @JoaoPedroPP! I'm here to help with your pull request. You can interact with me using the following commands:

Available Commands

  • @gemini-cli /review - Request a comprehensive code review

    • Example: @gemini-cli /review Please focus on security and performance
  • @gemini-cli <your question> - Ask me anything about the codebase

    • Example: @gemini-cli How can I improve this function?
    • Example: @gemini-cli What are the best practices for error handling here?

How to Use

  1. Simply type one of the commands above in a comment on this PR
  2. I'll analyze your code and provide detailed feedback
  3. You can track my progress in the workflow logs

Permissions

Only OWNER, MEMBER, or COLLABORATOR users can trigger my responses. This ensures secure and appropriate usage.


This message was automatically added to help you get started with the Gemini AI assistant. Feel free to delete this comment if you don't need assistance.

@github-actions
Contributor

🤖 Hi @JoaoPedroPP, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.

Member

@rrosatti left a comment


LGTM

Contributor

@StanislavJochman left a comment


LGTM

@JoaoPedroPP
Contributor Author

/lgtm
/approve

@openshift-ci

openshift-ci bot commented Jan 29, 2026

@JoaoPedroPP: you cannot LGTM your own PR.

Details

In response to this:

/lgtm
/approve

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@StanislavJochman
Contributor

/lgtm
/approve

@StanislavJochman
Contributor

@tisutisu can you approve this pls?

@tisutisu
Member

@tisutisu can you approve this pls?

I'd ping someone from the Konflux infra team to take a look. @hugares could you please help approve here?

@Katka92
Contributor

Katka92 commented Jan 30, 2026

/lgtm
/approved

@Katka92
Contributor

Katka92 commented Jan 30, 2026

/approve

Contributor

@hugares left a comment


/lgtm

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The pipeline failed because the end-to-end tests could not deploy required infrastructure components due to exceeding Docker Hub's unauthenticated pull rate limit when fetching a Helm chart.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e job failed because it encountered HTTP 429 "Too Many Requests" errors when attempting to pull the bitnamichartssecure/postgresql Helm chart from Docker Hub. This indicates that the CI environment exceeded the unauthenticated pull rate limit imposed by Docker Hub.

Contributing Factors

The failure to pull the Helm chart prevented the successful deployment of the PostgreSQL database, which is a critical infrastructure component for the end-to-end tests. Consequently, the test suite could not proceed. The applications_argoproj.json artifact confirms that the 'postgres' ArgoCD Application is in a ComparisonError state due to this Docker Hub rate limiting issue.

Impact

This infrastructure-level failure prevented the execution of the end-to-end tests entirely. The pipeline could not proceed beyond the infrastructure setup phase, leading to a complete job failure without any specific test cases being evaluated.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The CI job exceeded the unauthenticated pull rate limit for Docker Hub while trying to pull the PostgreSQL Helm chart. This prevented the successful deployment of required infrastructure components, causing the end-to-end tests to fail.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 660
postgres failed with:
[{"lastTransitionTime":"2026-02-02T16:03:19Z","message":"Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = error pulling OCI chart: failed to pull OCI chart: failed to get command args to log: 'helm pull oci://registry-1.docker.io/bitnamichartssecure/postgresql --version 17.0.2 --destination /tmp/998a7dbe-51c0-4080-899f-2eb6ebdc777c' failed exit status 1: Error: failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/bitnamichartssecure/postgresql/manifests/sha256:6761aad2d5e01b5462284aa31f1c58e676deee8b8266491f95b22ecfd0f1113d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit","type":"ComparisonError"}]
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 743
postgres failed with:
[{"lastTransitionTime":"2026-02-02T16:15:43Z","message":"Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = error pulling OCI chart: failed to pull OCI chart: failed to get command args to log: 'helm pull oci://registry-1.docker.io/bitnamichartssecure/postgresql --version 17.0.2 --destination /tmp/66f0c198-e2ab-430a-a72a-1b48022802fc' failed exit status 1: Error: failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/bitnamichartssecure/postgresql/manifests/sha256:6761aad2d5e01b5462284aa31f1c58e676deee8b8266491f95b22ecfd0f1113d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit","type":"ComparisonError"}]
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 753
Error: error when bootstrapping cluster: reached maximum number of attempts (2). error: exit status 1
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 754
make: *** [Makefile:25: ci/test/e2e] Error 1
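
One way to confirm that the 429 is purely an unauthenticated-pull quota issue is to retry the same pull with registry credentials. A minimal shell sketch, assuming a Docker Hub account is available (DOCKERHUB_USER and DOCKERHUB_TOKEN are placeholders, not credentials this job actually has):

  # Log in so OCI chart pulls count against the authenticated quota,
  # which is much higher than the anonymous limit.
  helm registry login registry-1.docker.io \
    --username "$DOCKERHUB_USER" --password "$DOCKERHUB_TOKEN"

  # Retry the exact pull that failed in the logs above.
  helm pull oci://registry-1.docker.io/bitnamichartssecure/postgresql \
    --version 17.0.2 --destination /tmp/chart-check

If the authenticated pull succeeds, the fix belongs on the CI side (credentials or a chart mirror) rather than in this PR.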

Analysis powered by prow-failure-analysis | Build: 2018351122808311808

@JoaoPedroPP
Contributor Author

/test appstudio-e2e-tests

@JoaoPedroPP
Contributor Author

/retest

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Timeout

The Prow job timed out due to the redhat-appstudio-e2e E2E tests exceeding their allocated execution time.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step in the Prow job pull-ci-redhat-appstudio-infra-deployments-main-appstudio-e2e-tests failed because it exceeded the maximum allowed execution time of 2 hours. The process was terminated after the timeout and a subsequent grace period.

Contributing Factors

While the exact cause of the extended execution time is not pinpointed, the additional_context reveals that the Tekton Addon component within the TektonConfig is not in a ready state and requires reconciliation. This could indicate underlying issues with Tekton's ability to provision or manage resources necessary for the E2E tests, potentially leading to performance degradation or hangs. Additionally, the tekton-pipelines-webhook and tekton-operator-proxy-webhook have Horizontal Pod Autoscalers configured with aggressive scaling targets (100% CPU/memory), which could lead to resource contention under load.

Impact

The timeout prevented the completion of the end-to-end tests, meaning the quality and correctness of the AppStudio infrastructure deployment could not be validated for this specific Prow job execution. This blocks the integration of changes via PR 10197.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The e2e tests in the appstudio-e2e-tests/redhat-appstudio-e2e job exceeded the allocated timeout, preventing the test execution from completing.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step_log.txt line 790
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-02-03T16:45:26Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step_log.txt line 792
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-03T16:45:41Z"}

Analysis powered by prow-failure-analysis | Build: 2018696580093186048

Contributor

@StanislavJochman left a comment


LGTM

@openshift-ci bot added the lgtm label Feb 3, 2026
@StanislavJochman
Contributor

/approve
/lgtm

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Timeout

The end-to-end tests for AppStudio failed due to a timeout, likely caused by underlying infrastructure or configuration issues preventing critical components like Tekton Addons from reaching a ready state.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step timed out after 2 hours, as indicated by the Prow job logs. The process did not complete within the allocated time and was terminated after a grace period.

Contributing Factors

Analysis of the additional_context reveals several potential contributing factors:

  • Several Argo CD Applications (application-api, build-service, cert-manager, crossplane-control-plane) are in an 'OutOfSync' state.
  • The tektonconfigs.json artifact indicates that TektonAddon reconciliation is failing with an error stating 'Components not in ready state: TektonAddon: reconcile again and proceed', which prevents Tekton components from becoming ready.
  • must-gather logs show errors related to missing namespaces (e.g., assisted-installer, openshift-pipelines) and issues gathering data from pods.
  • The tektonaddons.json artifact shows that the 'addon' TektonAddon resource has InstallerSetReady and Ready conditions in a 'False' state due to issues with 'openshift console resources' and a non-ready tkn-cli-serve deployment.

These factors suggest a systemic issue with the cluster's ability to provision or manage essential services, which is likely causing the e2e tests to stall or fail to progress, ultimately leading to the timeout.

Impact

The timeout in the appstudio-e2e-tests/redhat-appstudio-e2e step prevented the completion of the Prow job. This blocked the verification of infrastructure deployments and configurations for AppStudio, potentially delaying the integration of changes from PR #10197.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The end-to-end tests in the appstudio-e2e-tests/redhat-appstudio-e2e step exceeded the allocated timeout of 2 hours. This suggests that one or more tests are taking an excessively long time to complete, potentially due to performance issues, resource contention, or an infinite loop in the test execution.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 1670
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-02-03T20:41:08Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 1675
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-03T20:41:23Z"}

Analysis powered by prow-failure-analysis | Build: 2018754273268994048

@JoaoPedroPP
Contributor Author

/retest

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

Pipeline failed due to a network error preventing the download of a Helm chart, which blocked the build of the vector-kubearchive-log-collector component.

📋 Technical Details

Immediate Cause

The kustomize build command for the vector-kubearchive-log-collector component failed because it encountered a network error, specifically a "connection reset by peer," while attempting to download a Helm chart from https://grafana.github.io/helm-charts.

Contributing Factors

While the additional_context reveals various Kubernetes and CI/CD configurations (e.g., ArgoCD ApplicationSets, Tekton configurations, Kueue setup), none of these directly explain the transient network issue encountered during the Helm chart download. The failure appears to be an isolated network connectivity problem rather than a systemic configuration issue.

Impact

The failure of the kustomize build command directly prevented the successful completion of the appstudio-e2e-tests/redhat-appstudio-e2e step. This consequently led to the overall Prow job exceeding its allocated time limit and terminating, thus blocking the pipeline from proceeding and validating the changes in the PR.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The kustomize build command for the vector-kubearchive-log-collector component failed due to a network issue (connection reset by peer) when attempting to download a Helm chart, leading to a timeout and ultimately the overall job timeout.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log:230
[{"lastTransitionTime":"2026-02-05T19:12:18Z","message":"Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = 'kustomize build \u003cpath to cached source\u003e/components/vector-kubearchive-log-collector/development --enable-helm --helm-kube-version 1.30 --helm-api-versions acme.cert-manager.io/v1 --helm-api-versions acme.cert-manager.io/v1/Order --helm-api-versions admissionregistration.k8s.io/v1 --helm-api-versions admissionregistration.k8s.io/v1/MutatingWebhookConfiguration --helm-api-versions admissionregistration.k8s.io/v1/ValidatingAdmissionPolicy --helm-api-versions admissionregistration.k8s.io/v1/ValidatingAdmissionPolicyBinding --helm-api-versions admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration --helm-api-versions admissionregistration.k8s.io/v1beta1 --helm-api-versions admissionregistration.k8s.io/v1beta1/ValidatingAdmissionPolicy --helm-api-versions admissionregistration.k8s.io/v1beta1/ValidatingAdmissionPolicyBinding --helm-api-versions apiextensions.crossplane.io/v1 --helm-api-versions apiextensions.crossplane.io/v1/CompositeResourceDefinition --helm-api-versions apiextensions.crossplane.io/v1/Composition --helm-api-versions apiextensions.crossplane.io/v1/CompositionRevision --helm-api-versions apiextensions.crossplane.io/v1alpha1 --helm-api-versions apiextensions.crossplane.io/v1alpha1/ManagedResourceActivationPolicy --helm-api-versions apiextensions.crossplane.io/v1alpha1/ManagedResourceDefinition --helm-api-versions apiextensions.crossplane.io/v1alpha1/Usage --helm-api-versions apiextensions.crossplane.io/v1beta1 --helm-api-versions apiextensions.crossplane.io/v1beta1/EnvironmentConfig --helm-api-versions apiextensions.crossplane.io/v1beta1/Usage --helm-api-versions apiextensions.crossplane.io/v2 --helm-api-versions apiextensions.crossplane.io/v2/CompositeResourceDefinition --helm-api-versions apiextensions.k8s.io/v1 --helm-api-versions apiextensions.k8s.io/v1/CustomResourceDefinition --helm-api-versions apiregistration.k8s.io/v1 --helm-api-versions apiregistration.k8s.io/v1/APIService --helm-api-versions apiserver.openshift.io/v1 --helm-api-versions apiserver.openshift.io/v1/APIRequestCount --helm-api-versions apps.openshift.io/v1 --helm-api-versions apps.openshift.io/v1/DeploymentConfig --helm-api-versions apps/v1 --helm-api-versions apps/v1/ControllerRevision --helm-api-versions apps/v1/DaemonSet --helm-api-versions apps/v1/Deployment --helm-api-versions apps/v1/ReplicaSet --helm-api-versions apps/v1/StatefulSet --helm-api-versions appstudio.redhat.com/v1alpha1 --helm-api-versions appstudio.redhat.com/v1alpha1/Application --helm-api-versions appstudio.redhat.com/v1alpha1/Component --helm-api-versions appstudio.redhat.com/v1alpha1/ComponentDetectionQuery --helm-api-versions appstudio.redhat.com/v1alpha1/DependencyUpdateCheck --helm-api-versions appstudio.redhat.com/v1alpha1/DeploymentTarget --helm-api-versions appstudio.redhat.com/v1alpha1/DeploymentTargetClaim --helm-api-versions appstudio.redhat.com/v1alpha1/DeploymentTargetClass --helm-api-versions appstudio.redhat.com/v1alpha1/EnterpriseContractPolicy --helm-api-versions appstudio.redhat.com/v1alpha1/Environment --helm-api-versions appstudio.redhat.com/v1alpha1/ImageRepository --helm-api-versions appstudio.redhat.com/v1alpha1/InternalRequest --helm-api-versions appstudio.redhat.com/v1alpha1/InternalServicesConfig --helm-api-versions appstudio.redhat.com/v1alpha1/PromotionRun --helm-api-versions appstudio.redhat.com/v1alpha1/Release 
--helm-api-versions appstudio.redhat.com/v1alpha1/ReleasePlan --helm-api-versions appstudio.redhat.com/v1alpha1/ReleasePlanAdmission --helm-api-versions appstudio.redhat.com/v1alpha1/ReleaseServiceConfig --helm-api-versions appstudio.redhat.com/v1alpha1/Snapshot --helm-api-versions appstudio.redhat.com/v1alpha1/SnapshotEnvironmentBinding --helm-api-versions argoproj.io/v1alpha1 --helm-api-versions argoproj.io/v1alpha1/AnalysisRun --helm-api-versions argoproj.io/v1alpha1/AnalysisTemplate --helm-api-versions argoproj.io/v1alpha1/AppProject --helm-api-versions argoproj.io/v1alpha1/Application --helm-api-versions argoproj.io/v1alpha1/ApplicationSet --helm-api-versions argoproj.io/v1alpha1/ArgoCD --helm-api-versions argoproj.io/v1alpha1/ClusterAnalysisTemplate --helm-api-versions argoproj.io/v1alpha1/Experiment --helm-api-versions argoproj.io/v1alpha1/NotificationsConfiguration --helm-api-versions argoproj.io/v1alpha1/Rollout --helm-api-versions argoproj.io/v1alpha1/RolloutManager --helm-api-versions argoproj.io/v1beta1 --helm-api-versions argoproj.io/v1beta1/ArgoCD --helm-api-versions authorization.openshift.io/v1 --helm-api-versions authorization.openshift.io/v1/RoleBindingRestriction --helm-api-versions autoscaling.openshift.io/v1 --helm-api-versions autoscaling.openshift.io/v1/ClusterAutoscaler --helm-api-versions autoscaling.openshift.io/v1beta1 --helm-api-versions autoscaling.openshift.io/v1beta1/MachineAutoscaler --helm-api-versions autoscaling/v1 --helm-api-versions autoscaling/v1/HorizontalPodAutoscaler --helm-api-versions autoscaling/v2 --helm-api-versions autoscaling/v2/HorizontalPodAutoscaler --helm-api-versions batch/v1 --helm-api-versions batch/v1/CronJob --helm-api-versions batch/v1/Job --helm-api-versions build.openshift.io/v1 --helm-api-versions build.openshift.io/v1/Build --helm-api-versions build.openshift.io/v1/BuildConfig --helm-api-versions cert-manager.io/v1 --helm-api-versions cert-manager.io/v1/CertificateRequest --helm-api-versions cert-manager.io/v1/Issuer --helm-api-versions certificates.k8s.io/v1 --helm-api-versions certificates.k8s.io/v1/CertificateSigningRequest --helm-api-versions ci.openshift.org/v1alpha1 --helm-api-versions ci.openshift.org/v1alpha1/TestPlatformCluster --helm-api-versions ci.openshift.org/v1alpha1/XTestPlatformCluster --helm-api-versions cloud.network.openshift.io/v1 --helm-api-versions cloud.network.openshift.io/v1/CloudPrivateIPConfig --helm-api-versions cloudcredential.openshift.io/v1 --helm-api-versions cloudcredential.openshift.io/v1/CredentialsRequest --helm-api-versions config.openshift.io/v1 --helm-api-versions config.openshift.io/v1/APIServer --helm-api-versions config.openshift.io/v1/Authentication --helm-api-versions config.openshift.io/v1/Build --helm-api-versions config.openshift.io/v1/ClusterOperator --helm-api-versions config.openshift.io/v1/ClusterVersion --helm-api-versions config.openshift.io/v1/Console --helm-api-versions config.openshift.io/v1/DNS --helm-api-versions config.openshift.io/v1/FeatureGate --helm-api-versions config.openshift.io/v1/Image --helm-api-versions config.openshift.io/v1/ImageContentPolicy --helm-api-versions config.openshift.io/v1/ImageDigestMirrorSet --helm-api-versions config.openshift.io/v1/ImageTagMirrorSet --helm-api-versions config.openshift.io/v1/Infrastructure --helm-api-versions config.openshift.io/v1/Ingress --helm-api-versions config.openshift.io/v1/Network --helm-api-versions config.openshift.io/v1/Node --helm-api-versions config.openshift.io/v1/OAuth --helm-api-versions 
config.openshift.io/v1/OperatorHub --helm-api-versions config.openshift.io/v1/Project --helm-api-versions config.openshift.io/v1/Proxy --helm-api-versions config.openshift.io/v1/Scheduler --helm-api-versions console.openshift.io/v1 --helm-api-versions console.openshift.io/v1/ConsoleCLIDownload --helm-api-versions console.openshift.io/v1/ConsoleExternalLogLink --helm-api-versions console.openshift.io/v1/ConsoleLink --helm-api-versions console.openshift.io/v1/ConsoleNotification --helm-api-versions console.openshift.io/v1/ConsolePlugin --helm-api-versions console.openshift.io/v1/ConsoleQuickStart --helm-api-versions console.openshift.io/v1/ConsoleSample --helm-api-versions console.openshift.io/v1/ConsoleYAMLSample --helm-api-versions console.openshift.io/v1alpha1 --helm-api-versions console.openshift.io/v1alpha1/ConsolePlugin --helm-api-versions controlplane.operator.openshift.io/v1alpha1 --helm-api-versions controlplane.operator.openshift.io/v1alpha1/PodNetworkConnectivityCheck --helm-api-versions coordination.k8s.io/v1 --helm-api-versions coordination.k8s.io/v1/Lease --helm-api-versions dex.coreos.com/v1 --helm-api-versions dex.coreos.com/v1/AuthCode --helm-api-versions dex.coreos.com/v1/AuthRequest --helm-api-versions dex.coreos.com/v1/Connector --helm-api-versions dex.coreos.com/v1/DeviceRequest --helm-api-versions dex.coreos.com/v1/DeviceToken --helm-api-versions dex.coreos.com/v1/OAuth2Client --helm-api-versions dex.coreos.com/v1/OfflineSessions --helm-api-versions dex.coreos.com/v1/Password --helm-api-versions dex.coreos.com/v1/RefreshToken --helm-api-versions dex.coreos.com/v1/SigningKey --helm-api-versions discovery.k8s.io/v1 --helm-api-versions discovery.k8s.io/v1/EndpointSlice --helm-api-versions eaas.konflux-ci.dev/v1alpha1 --helm-api-versions eaas.konflux-ci.dev/v1alpha1/Namespace --helm-api-versions eaas.konflux-ci.dev/v1alpha1/XNamespace --helm-api-versions eventing.knative.dev/v1 --helm-api-versions eventing.knative.dev/v1/Broker --helm-api-versions eventing.knative.dev/v1/Trigger --helm-api-versions eventing.knative.dev/v1alpha1 --helm-api-versions eventing.knative.dev/v1alpha1/EventPolicy --helm-api-versions eventing.knative.dev/v1beta1 --helm-api-versions eventing.knative.dev/v1beta1/EventType --helm-api-versions eventing.knative.dev/v1beta2 --helm-api-versions eventing.knative.dev/v1beta2/EventType --helm-api-versions eventing.knative.dev/v1beta3 --helm-api-versions eventing.knative.dev/v1beta3/EventType --helm-api-versions events.k8s.io/v1 --helm-api-versions events.k8s.io/v1/Event --helm-api-versions flowcontrol.apiserver.k8s.io/v1 --helm-api-versions flowcontrol.apiserver.k8s.io/v1/FlowSchema --helm-api-versions flowcontrol.apiserver.k8s.io/v1/PriorityLevelConfiguration --helm-api-versions flowcontrol.apiserver.k8s.io/v1beta3 --helm-api-versions flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema --helm-api-versions flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration --helm-api-versions flows.knative.dev/v1 --helm-api-versions flows.knative.dev/v1/Parallel --helm-api-versions flows.knative.dev/v1/Sequence --helm-api-versions gotemplating.fn.crossplane.io/v1beta1 --helm-api-versions gotemplating.fn.crossplane.io/v1beta1/GoTemplate --helm-api-versions helm.openshift.io/v1beta1 --helm-api-versions helm.openshift.io/v1beta1/HelmChartRepository --helm-api-versions helm.openshift.io/v1beta1/ProjectHelmChartRepository --helm-api-versions image.openshift.io/v1 --helm-api-versions image.openshift.io/v1/Image --helm-api-versions image.openshift.io/v1/ImageStream 
--helm-api-versions imageregistry.operator.openshift.io/v1 --helm-api-versions imageregistry.operator.openshift.io/v1/Config --helm-api-versions imageregistry.operator.openshift.io/v1/ImagePruner --helm-api-versions infrastructure.cluster.x-k8s.io/v1alpha5 --helm-api-versions infrastructure.cluster.x-k8s.io/v1alpha5/Metal3Remediation --helm-api-versions infrastructure.cluster.x-k8s.io/v1alpha5/Metal3RemediationTemplate --helm-api-versions infrastructure.cluster.x-k8s.io/v1beta1 --helm-api-versions infrastructure.cluster.x-k8s.io/v1beta1/Metal3Remediation --helm-api-versions infrastructure.cluster.x-k8s.io/v1beta1/Metal3RemediationTemplate --helm-api-versions ingress.operator.openshift.io/v1 --helm-api-versions ingress.operator.openshift.io/v1/DNSRecord --helm-api-versions ipam.cluster.x-k8s.io/v1alpha1 --helm-api-versions ipam.cluster.x-k8s.io/v1alpha1/IPAddress --helm-api-versions ipam.cluster.x-k8s.io/v1alpha1/IPAddressClaim --helm-api-versions ipam.cluster.x-k8s.io/v1beta1 --helm-api-versions ipam.cluster.x-k8s.io/v1beta1/IPAddress --helm-api-versions ipam.cluster.x-k8s.io/v1beta1/IPAddressClaim --helm-api-versions k8s.cni.cncf.io/v1 --helm-api-versions k8s.cni.cncf.io/v1/NetworkAttachmentDefinition --helm-api-versions k8s.ovn.org/v1 --helm-api-versions k8s.ovn.org/v1/AdminPolicyBasedExternalRoute --helm-api-versions k8s.ovn.org/v1/EgressFirewall --helm-api-versions k8s.ovn.org/v1/EgressIP --helm-api-versions k8s.ovn.org/v1/EgressQoS --helm-api-versions k8s.ovn.org/v1/EgressService --helm-api-versions kubearchive.org/v1 --helm-api-versions kubearchive.org/v1/ClusterKubeArchiveConfig --helm-api-versions kubearchive.org/v1/ClusterVacuumConfig --helm-api-versions kubearchive.org/v1/KubeArchiveConfig --helm-api-versions kubearchive.org/v1/NamespaceVacuumConfig --helm-api-versions kubearchive.org/v1/SinkFilter --helm-api-versions kubernetes.crossplane.io/v1alpha1 --helm-api-versions kubernetes.crossplane.io/v1alpha1/Object --helm-api-versions kubernetes.crossplane.io/v1alpha1/ObservedObjectCollection --helm-api-versions kubernetes.crossplane.io/v1alpha1/ProviderConfig --helm-api-versions kubernetes.crossplane.io/v1alpha1/ProviderConfigUsage --helm-api-versions kubernetes.crossplane.io/v1alpha2 --helm-api-versions kubernetes.crossplane.io/v1alpha2/Object --helm-api-versions kubernetes.m.crossplane.io/v1alpha1 --helm-api-versions kubernetes.m.crossplane.io/v1alpha1/ClusterProviderConfig --helm-api-versions kubernetes.m.crossplane.io/v1alpha1/Object --helm-api-versions kubernetes.m.crossplane.io/v1alpha1/ObservedObjectCollection --helm-api-versions kubernetes.m.crossplane.io/v1alpha1/ProviderConfig --helm-api-versions kubernetes.m.crossplane.io/v1alpha1/ProviderConfigUsage --helm-api-versions machine.openshift.io/v1 --helm-api-versions machine.openshift.io/v1/ControlPlaneMachineSet --helm-api-versions machine.openshift.io/v1beta1 --helm-api-versions machine.openshift.io/v1beta1/Machine --helm-api-versions machine.openshift.io/v1beta1/MachineHealthCheck --helm-api-versions machine.openshift.io/v1beta1/MachineSet --helm-api-versions machineconfiguration.openshift.io/v1 --helm-api-versions machineconfiguration.openshift.io/v1/ContainerRuntimeConfig --helm-api-versions machineconfiguration.openshift.io/v1/ControllerConfig --helm-api-versions machineconfiguration.openshift.io/v1/KubeletConfig --helm-api-versions machineconfiguration.openshift.io/v1/MachineConfig --helm-api-versions machineconfiguration.openshift.io/v1/MachineConfigPool --helm-api-versions messaging.knative.dev/v1 --helm-api-versions 
messaging.knative.dev/v1/Channel --helm-api-versions messaging.knative.dev/v1/InMemoryChannel --helm-api-versions messaging.knative.dev/v1/Subscription --helm-api-versions metal3.io/v1alpha1 --helm-api-versions metal3.io/v1alpha1/BMCEventSubscription --helm-api-versions metal3.io/v1alpha1/BareMetalHost --helm-api-versions metal3.io/v1alpha1/DataImage --helm-api-versions metal3.io/v1alpha1/FirmwareSchema --helm-api-versions metal3.io/v1alpha1/HardwareData --helm-api-versions metal3.io/v1alpha1/HostFirmwareComponents --helm-api-versions metal3.io/v1alpha1/HostFirmwareSettings --helm-api-versions metal3.io/v1alpha1/PreprovisioningImage --helm-api-versions metal3.io/v1alpha1/Provisioning --helm-api-versions migration.k8s.io/v1alpha1 --helm-api-versions migration.k8s.io/v1alpha1/StorageState --helm-api-versions migration.k8s.io/v1alpha1/StorageVersionMigration --helm-api-versions monitoring.coreos.com/v1 --helm-api-versions monitoring.coreos.com/v1/Alertmanager --helm-api-versions monitoring.coreos.com/v1/PodMonitor --helm-api-versions monitoring.coreos.com/v1/Probe --helm-api-versions monitoring.coreos.com/v1/Prometheus --helm-api-versions monitoring.coreos.com/v1/PrometheusRule --helm-api-versions monitoring.coreos.com/v1/ServiceMonitor --helm-api-versions monitoring.coreos.com/v1/ThanosRuler --helm-api-versions monitoring.coreos.com/v1alpha1 --helm-api-versions monitoring.coreos.com/v1alpha1/AlertmanagerConfig --helm-api-versions monitoring.coreos.com/v1beta1 --helm-api-versions monitoring.coreos.com/v1beta1/AlertmanagerConfig --helm-api-versions monitoring.openshift.io/v1 --helm-api-versions monitoring.openshift.io/v1/AlertRelabelConfig --helm-api-versions monitoring.openshift.io/v1/AlertingRule --helm-api-versions network.operator.openshift.io/v1 --helm-api-versions network.operator.openshift.io/v1/EgressRouter --helm-api-versions network.operator.openshift.io/v1/OperatorPKI --helm-api-versions networking.k8s.io/v1 --helm-api-versions networking.k8s.io/v1/Ingress --helm-api-versions networking.k8s.io/v1/IngressClass --helm-api-versions networking.k8s.io/v1/NetworkPolicy --helm-api-versions node.k8s.io/v1 --helm-api-versions node.k8s.io/v1/RuntimeClass --helm-api-versions oauth.openshift.io/v1 --helm-api-versions oauth.openshift.io/v1/OAuthAccessToken --helm-api-versions oauth.openshift.io/v1/OAuthAuthorizeToken --helm-api-versions oauth.openshift.io/v1/OAuthClient --helm-api-versions oauth.openshift.io/v1/OAuthClientAuthorization --helm-api-versions oauth.openshift.io/v1/UserOAuthAccessToken --helm-api-versions operator.openshift.io/v1 --helm-api-versions operator.openshift.io/v1/Authentication --helm-api-versions operator.openshift.io/v1/CSISnapshotController --helm-api-versions operator.openshift.io/v1/CloudCredential --helm-api-versions operator.openshift.io/v1/ClusterCSIDriver --helm-api-versions operator.openshift.io/v1/Config --helm-api-versions operator.openshift.io/v1/Console --helm-api-versions operator.openshift.io/v1/DNS --helm-api-versions operator.openshift.io/v1/Etcd --helm-api-versions operator.openshift.io/v1/IngressController --helm-api-versions operator.openshift.io/v1/InsightsOperator --helm-api-versions operator.openshift.io/v1/KubeAPIServer --helm-api-versions operator.openshift.io/v1/KubeControllerManager --helm-api-versions operator.openshift.io/v1/KubeScheduler --helm-api-versions operator.openshift.io/v1/KubeStorageVersionMigrator --helm-api-versions operator.openshift.io/v1/MachineConfiguration --helm-api-versions operator.openshift.io/v1/Network 
--helm-api-versions operator.openshift.io/v1/OpenShiftAPIServer --helm-api-versions operator.openshift.io/v1/OpenShiftControllerManager --helm-api-versions operator.openshift.io/v1/ServiceCA --helm-api-versions operator.openshift.io/v1/Storage --helm-api-versions operator.openshift.io/v1alpha1 --helm-api-versions operator.openshift.io/v1alpha1/ImageContentSourcePolicy --helm-api-versions operator.openshift.io/v1alpha1/IstioCSR --helm-api-versions operators.coreos.com/v1 --helm-api-versions operators.coreos.com/v1/OLMConfig --helm-api-versions operators.coreos.com/v1/Operator --helm-api-versions operators.coreos.com/v1/OperatorCondition --helm-api-versions operators.coreos.com/v1/OperatorGroup --helm-api-versions operators.coreos.com/v1alpha1 --helm-api-versions operators.coreos.com/v1alpha1/CatalogSource --helm-api-versions operators.coreos.com/v1alpha1/ClusterServiceVersion --helm-api-versions operators.coreos.com/v1alpha1/InstallPlan --helm-api-versions operators.coreos.com/v1alpha1/Subscription --helm-api-versions operators.coreos.com/v1alpha2 --helm-api-versions operators.coreos.com/v1alpha2/OperatorGroup --helm-api-versions operators.coreos.com/v2 --helm-api-versions operators.coreos.com/v2/OperatorCondition --helm-api-versions ops.crossplane.io/v1alpha1 --helm-api-versions ops.crossplane.io/v1alpha1/CronOperation --helm-api-versions ops.crossplane.io/v1alpha1/Operation --helm-api-versions ops.crossplane.io/v1alpha1/WatchOperation --helm-api-versions performance.openshift.io/v1 --helm-api-versions performance.openshift.io/v1/PerformanceProfile --helm-api-versions performance.openshift.io/v1alpha1 --helm-api-versions performance.openshift.io/v1alpha1/PerformanceProfile --helm-api-versions performance.openshift.io/v2 --helm-api-versions performance.openshift.io/v2/PerformanceProfile --helm-api-versions pipelines.openshift.io/v1alpha1 --helm-api-versions pipelines.openshift.io/v1alpha1/GitopsService --helm-api-versions pkg.crossplane.io/v1 --helm-api-versions pkg.crossplane.io/v1/Configuration --helm-api-versions pkg.crossplane.io/v1/ConfigurationRevision --helm-api-versions pkg.crossplane.io/v1/Function --helm-api-versions pkg.crossplane.io/v1/FunctionRevision --helm-api-versions pkg.crossplane.io/v1/Provider --helm-api-versions pkg.crossplane.io/v1/ProviderRevision --helm-api-versions pkg.crossplane.io/v1beta1 --helm-api-versions pkg.crossplane.io/v1beta1/DeploymentRuntimeConfig --helm-api-versions pkg.crossplane.io/v1beta1/Function --helm-api-versions pkg.crossplane.io/v1beta1/FunctionRevision --helm-api-versions pkg.crossplane.io/v1beta1/ImageConfig --helm-api-versions pkg.crossplane.io/v1beta1/Lock --helm-api-versions policy.networking.k8s.io/v1alpha1 --helm-api-versions policy.networking.k8s.io/v1alpha1/AdminNetworkPolicy --helm-api-versions policy.networking.k8s.io/v1alpha1/BaselineAdminNetworkPolicy --helm-api-versions policy/v1 --helm-api-versions policy/v1/PodDisruptionBudget --helm-api-versions projctl.konflux.dev/v1beta1 --helm-api-versions projctl.konflux.dev/v1beta1/Project --helm-api-versions projctl.konflux.dev/v1beta1/ProjectDevelopmentStream --helm-api-versions projctl.konflux.dev/v1beta1/ProjectDevelopmentStreamTemplate --helm-api-versions project.openshift.io/v1 --helm-api-versions project.openshift.io/v1/Project --helm-api-versions protection.crossplane.io/v1beta1 --helm-api-versions protection.crossplane.io/v1beta1/ClusterUsage --helm-api-versions protection.crossplane.io/v1beta1/Usage --helm-api-versions pt.fn.crossplane.io/v1beta1 --helm-api-versions 
pt.fn.crossplane.io/v1beta1/Resources --helm-api-versions quota.openshift.io/v1 --helm-api-versions quota.openshift.io/v1/ClusterResourceQuota --helm-api-versions rbac.authorization.k8s.io/v1 --helm-api-versions rbac.authorization.k8s.io/v1/ClusterRole --helm-api-versions rbac.authorization.k8s.io/v1/ClusterRoleBinding --helm-api-versions rbac.authorization.k8s.io/v1/Role --helm-api-versions rbac.authorization.k8s.io/v1/RoleBinding --helm-api-versions route.openshift.io/v1 --helm-api-versions route.openshift.io/v1/Route --helm-api-versions samples.operator.openshift.io/v1 --helm-api-versions samples.operator.openshift.io/v1/Config --helm-api-versions scheduling.k8s.io/v1 --helm-api-versions scheduling.k8s.io/v1/PriorityClass --helm-api-versions security.internal.openshift.io/v1 --helm-api-versions security.internal.openshift.io/v1/RangeAllocation --helm-api-versions security.openshift.io/v1 --helm-api-versions security.openshift.io/v1/RangeAllocation --helm-api-versions security.openshift.io/v1/SecurityContextConstraints --helm-api-versions sinks.knative.dev/v1alpha1 --helm-api-versions sinks.knative.dev/v1alpha1/JobSink --helm-api-versions snapshot.storage.k8s.io/v1 --helm-api-versions snapshot.storage.k8s.io/v1/VolumeSnapshot --helm-api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotClass --helm-api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotContent --helm-api-versions sources.knative.dev/v1 --helm-api-versions sources.knative.dev/v1/ApiServerSource --helm-api-versions sources.knative.dev/v1/ContainerSource --helm-api-versions sources.knative.dev/v1/PingSource --helm-api-versions sources.knative.dev/v1/SinkBinding --helm-api-versions sources.knative.dev/v1beta2 --helm-api-versions sources.knative.dev/v1beta2/PingSource --helm-api-versions storage.k8s.io/v1 --helm-api-versions storage.k8s.io/v1/CSIDriver --helm-api-versions storage.k8s.io/v1/CSINode --helm-api-versions storage.k8s.io/v1/CSIStorageCapacity --helm-api-versions storage.k8s.io/v1/StorageClass --helm-api-versions storage.k8s.io/v1/VolumeAttachment --helm-api-versions template.openshift.io/v1 --helm-api-versions template.openshift.io/v1/BrokerTemplateInstance --helm-api-versions template.openshift.io/v1/Template --helm-api-versions template.openshift.io/v1/TemplateInstance --helm-api-versions tempo.grafana.com/v1alpha1 --helm-api-versions tempo.grafana.com/v1alpha1/TempoMonolithic --helm-api-versions tempo.grafana.com/v1alpha1/TempoStack --helm-api-versions tuned.openshift.io/v1 --helm-api-versions tuned.openshift.io/v1/Profile --helm-api-versions tuned.openshift.io/v1/Tuned --helm-api-versions user.openshift.io/v1 --helm-api-versions user.openshift.io/v1/Group --helm-api-versions user.openshift.io/v1/Identity --helm-api-versions user.openshift.io/v1/User --helm-api-versions v1 --helm-api-versions v1/ConfigMap --helm-api-versions v1/Endpoints --helm-api-versions v1/Event --helm-api-versions v1/LimitRange --helm-api-versions v1/Namespace --helm-api-versions v1/Node --helm-api-versions v1/PersistentVolume --helm-api-versions v1/PersistentVolumeClaim --helm-api-versions v1/Pod --helm-api-versions v1/PodTemplate --helm-api-versions v1/ReplicationController --helm-api-versions v1/ResourceQuota --helm-api-versions v1/Secret --helm-api-versions v1/Service --helm-api-versions v1/ServiceAccount --helm-api-versions whereabouts.cni.cncf.io/v1alpha1 --helm-api-versions whereabouts.cni.cncf.io/v1alpha1/IPPool --helm-api-versions whereabouts.cni.cncf.io/v1alpha1/NodeSlicePool --helm-api-versions 
whereabouts.cni.cncf.io/v1alpha1/OverlappingRangeIPReservation' failed exit status 1: Error: Error: read tcp 10.129.2.16:49738-\u003e185.199.110.133:443: read: connection reset by peer\n: unable to run: 'helm pull --untar --untardir \u003cpath to cached source\u003e/components/vector-kubearchive-log-collector/development/charts/grafana-10.4.0 --repo https://grafana.github.io/helm-charts grafana --version 10.4.0' with env=[HELM_CONFIG_HOME=/tmp/kustomize-helm-1190548092/helm HELM_CACHE_HOME=/tmp/kustomize-helm-1190548092/helm/.cache HELM_DATA_HOME=/tmp/kustomize-helm-1190548092/helm/.data] (is 'helm' installed?): exit status 1","type":"ComparisonError"}]
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log:289
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-02-05T19:30:05Z"}

Analysis powered by prow-failure-analysis | Build: 2019462554253791232

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The AppStudio E2E test pipeline failed due to persistent network and DNS resolution errors preventing communication with the Kubernetes API server.

📋 Technical Details

Immediate Cause

Several steps within the appstudio-e2e-tests job failed because the must-gather tool and other utilities could not resolve the DNS entry for the Kubernetes API server (api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com). This resulted in "no such host" errors and I/O timeouts when attempting to connect to the API server.

Contributing Factors

The redhat-appstudio-e2e step also failed due to an interrupt signal and a subsequent failed graceful termination. While this could be a separate issue, it occurred in an environment already experiencing network instability, suggesting it might be a secondary effect or exacerbated by the underlying infrastructure problems. The gather-must-gather step specifically logged I/O timeouts in addition to DNS errors, further indicating network connectivity problems.

Impact

These DNS resolution and network connectivity issues prevented essential diagnostic data collection steps (e.g., gather-audit-logs, gather-must-gather) from completing successfully. Crucially, it also blocked the primary redhat-appstudio-e2e test execution step from interacting with the cluster, leading to its termination and the overall failure of the Prow job.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The must-gather tool failed because it could not resolve the DNS entry for the Kubernetes API server hostname. This indicates a networking or DNS configuration problem within the cluster or the environment where the tool is executing.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 2
[must-gather      ] OUT 2026-02-09T14:40:56.268903868Z Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 15
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 21
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 28
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 39
E0209 14:40:56.336841      49 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 43
error running backup collection: Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 57
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 60
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The gather-extra step failed because it could not resolve the DNS for the Kubernetes API server. This is likely due to a network or DNS configuration problem within the cluster or the CI environment.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/log.txt line 3
E0209 14:40:49.155259      28 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/log.txt line 10
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: Network connectivity issues (timeouts and DNS resolution failures) prevented the must-gather tool from communicating with the OpenShift API server.

Logs:

artifacts/appstudio-e2e-tests~gather-must-gather/build-log.txt line 17
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests~gather-must-gather/build-log.txt line 24
E0209 14:40:11.677052      54 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests~gather-must-gather/build-log.txt line 39
error running backup collection: Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The mage process was terminated due to an interrupt signal. The process did not exit gracefully within the grace period, leading to a forced termination. This might be due to an external signal, a resource issue within the cluster, or a problem with the test execution environment itself.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 1000
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:173","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2026-02-09T14:34:58Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 1002
make: *** [Makefile:25: ci/test/e2e] Terminated
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 1004
build-service-in-cluster-local                      Synced   Degraded
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 1005
Waiting 10 seconds for application sync
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 1006
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-09T14:35:13Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 1007
{"component":"entrypoint","error":"os: process already finished","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:269","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2026-02-09T14:35:13Z"}

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The DNS lookup for the Kubernetes API server failed, preventing the oc commands from connecting to the cluster. This indicates a potential network or DNS configuration issue within the environment where the job is running.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/steps/redhat-appstudio-gather.log
E0209 14:41:03.208610      34 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/steps/redhat-appstudio-gather.log
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/steps/redhat-appstudio-gather.log
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/steps/redhat-appstudio-gather.log
error running backup collection: Get "https://api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
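
The "no such host" errors all point at the in-cluster resolver 172.30.0.10 failing to resolve the API hostname, which usually means the DNS record (or the cluster itself) was already gone rather than anything introduced by this PR. A quick check from the CI pod, as a sketch (hostname and resolver copied from the logs above):

  # Ask the same in-cluster resolver the gather steps used.
  nslookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com 172.30.0.10

  # Compare against a public resolver to separate "record deleted"
  # from "cluster DNS broken".
  nslookup api.konflux-4-17-us-west-2-zfpq9.konflux-qe.devcluster.openshift.com 8.8.8.8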

Analysis powered by prow-failure-analysis | Build: 2020846720656609280

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

Pipeline failed due to an infrastructure error on the Git server preventing repository cloning.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/konflux-ci-unregister-sprayproxy step failed because it encountered a 500 Internal Server Error from the GitHub Git server while attempting to clone the konflux-ci/e2e-tests.git repository. This indicates an issue with the Git hosting infrastructure itself.

Contributing Factors

The appstudio-e2e-tests/redhat-appstudio-e2e step subsequently timed out. While this timeout is a consequence, the additional context reveals widespread OutOfSync and Degraded states in ArgoCD applications and ApplicationSets, such as pipeline-service and tekton-results-api. These underlying cluster synchronization issues may have contributed to the overall instability and delays, although the Git clone failure was the primary blocker for this specific job execution.

Impact

The failure of the initial Git clone step prevented the execution of subsequent E2E tests. This means the core validation of the AppStudio infrastructure for this PR could not be performed, thus blocking the progress of the pull request through the CI pipeline.

🔍 Evidence

appstudio-e2e-tests/konflux-ci-unregister-sprayproxy

Category: infrastructure
Root Cause: The step failed because the Git server returned an internal server error (500) when attempting to clone the repository. This indicates a temporary issue with the GitHub infrastructure.

Logs:

appstudio-e2e-tests/konflux-ci-unregister-sprayproxy/clone.log line 2
remote: Internal Server Error
appstudio-e2e-tests/konflux-ci-unregister-sprayproxy/clone.log line 3
fatal: unable to access 'https://github.com/konflux-ci/e2e-tests.git/': The requested URL returned error: 500

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The end-to-end tests timed out because the ArgoCD application synchronization process took too long, preventing the tests from completing within the allocated time.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/run.log line 1151
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-02-09T16:38:19Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/run.log line 1156
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-09T16:38:34Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/run.log line 834
pipeline-service-in-cluster-local                   OutOfSync   Degraded
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/run.log line 834
Waiting 10 seconds for application sync
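
Transient 500s from github.com during clone are usually recoverable with a short retry. A sketch of the kind of wrapper the clone step could use; purely illustrative, not the step's actual implementation:

  # Retry the clone a few times before giving up on an HTTP 500 from GitHub.
  for i in 1 2 3; do
    git clone https://github.com/konflux-ci/e2e-tests.git && break
    echo "clone attempt $i failed, retrying in 30s..." >&2
    rm -rf e2e-tests
    sleep 30
  done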

Analysis powered by prow-failure-analysis | Build: 2020869058924122112

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Unknown

The pipeline failed because the AI model used for analyzing test results exceeded its context window limit, preventing successful test analysis.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step failed with a litellm.ContextWindowExceededError. This error occurred because the input provided to the language model exceeded the maximum token limit it can process.

Contributing Factors

The same ContextWindowExceededError was reported for the specific E2E test Red Hat App Studio E2E tests.[It] [konflux-demo-suite] Maven project - Default build when Build PipelineRun is created should eventually complete successfully [konflux, upstream-konflux]. While the additional context reveals a complex cluster state with numerous ApplicationSets in an 'OutOfSync' status and some failed PipelineRuns, these are not directly causing the LLM's context window to be exceeded. The failure appears to be an issue with the analysis tool itself, potentially due to the volume or complexity of the output it was asked to process.

Impact

This failure prevented the completion and reporting of E2E test results. The inability to analyze the test outcomes means that the overall health and correctness of the application under test could not be validated, blocking the pipeline's progress.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: unknown
Root Cause: Analysis failed: litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: GeminiException - {
  "error": {
    "code": 400,
    "message": "The input token count exceeds the maximum number of tokens allowed 1048576.",
    "status": "INVALID_ARGUMENT"
  }
}


Analysis powered by prow-failure-analysis | Build: 2020913007411859456

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The E2E test pipeline failed due to an infrastructure issue where Docker Hub's unauthenticated pull rate limit prevented the necessary PostgreSQL Helm chart from being downloaded, blocking the deployment of required services.

📋 Technical Details

Immediate Cause

The immediate cause of the failure was the inability to pull the PostgreSQL Helm chart from Docker Hub. The job's attempt to helm pull oci://registry-1.docker.io/bitnamichartssecure/postgresql --version 17.0.2 resulted in an HTTP 429 "Too Many Requests" error, indicating that the unauthenticated pull rate limit for Docker Hub had been reached.

Contributing Factors

While the direct cause was the rate limit, the reliance on pulling external dependencies during critical pipeline stages can be a vulnerability. Repeated attempts to pull the chart, as seen in the log evidence from different timestamps within the appstudio-e2e-tests/redhat-appstudio-e2e step, suggest that the failure was persistent. The additional_context highlights that the 'postgres' ArgoCD Application was in an 'OutOfSync' state, likely as a consequence of this Helm chart pull failure.

Impact

This infrastructure-level failure directly blocked the execution of the E2E tests. Since the PostgreSQL dependency could not be deployed, the necessary environment for the tests could not be set up, preventing any further progress in the pipeline and rendering the test execution step unsuccessful.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The failure was caused by hitting the unauthenticated pull rate limit for Docker Hub, preventing the necessary Helm chart for the PostgreSQL dependency from being pulled. This blocked the deployment of required services for the E2E tests.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build.log line 705
postgres failed with:
[{"lastTransitionTime":"2026-02-10T12:25:50Z","message":"Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = error pulling OCI chart: failed to pull OCI chart: failed to get command args to log: 'helm pull oci://registry-1.docker.io/bitnamichartssecure/postgresql --version 17.0.2 --destination /tmp/8d0f8a3f-f144-40a9-aeb5-d5f8e6e7a421' failed exit status 1: Error: failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/bitnamichartssecure/postgresql/manifests/sha256:6761aad2d5e01b5462284aa31f1c58e676deee8b8266491f95b22ecfd0f1113d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit","type":"ComparisonError"}]
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build.log line 779
postgres failed with:
[{"lastTransitionTime":"2026-02-10T12:43:52Z","message":"Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = error pulling OCI chart: failed to pull OCI chart: failed to get command args to log: 'helm pull oci://registry-1.docker.io/bitnamichartssecure/postgresql --version 17.0.2 --destination /tmp/b434fea9-dcb8-4d37-ae03-af5e69394dd7' failed exit status 1: Error: failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/bitnamichartssecure/postgresql/manifests/sha256:6761aad2d5e01b5462284aa31f1c58e676deee8b8266491f95b22ecfd0f1113d: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit","type":"ComparisonError"}]

Analysis powered by prow-failure-analysis | Build: 2021194865378856960

Contributor

@StanislavJochman left a comment


LGTM

@openshift-ci bot added the lgtm label Feb 10, 2026
Deploy correct secret database for kite instance in each cluster.

Jira KFLUXUI-901
@openshift-ci bot added the lgtm label Feb 11, 2026
@openshift-ci

openshift-ci bot commented Feb 11, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hugares, JoaoPedroPP, Katka92, rrosatti, StanislavJochman

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Katka92
Contributor

Katka92 commented Feb 11, 2026

/lgtm

@openshift-merge-bot bot merged commit 92fd0ad into redhat-appstudio:main Feb 11, 2026
9 checks passed
@JoaoPedroPP deleted the KFLUXUI-901-2 branch February 11, 2026 21:08