
@olegbet
Contributor

@olegbet olegbet commented Jan 29, 2026

Signed-off-by: obetsun [email protected]

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED

@openshift-ci openshift-ci bot requested review from ggallen and mafh314 January 29, 2026 12:44
@openshift-ci

openshift-ci bot commented Jan 29, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: olegbet

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@github-actions
Contributor

🤖 Gemini AI Assistant Available

Hi @olegbet! I'm here to help with your pull request. You can interact with me using the following commands:

Available Commands

  • @gemini-cli /review - Request a comprehensive code review

    • Example: @gemini-cli /review Please focus on security and performance
  • @gemini-cli <your question> - Ask me anything about the codebase

    • Example: @gemini-cli How can I improve this function?
    • Example: @gemini-cli What are the best practices for error handling here?

How to Use

  1. Simply type one of the commands above in a comment on this PR
  2. I'll analyze your code and provide detailed feedback
  3. You can track my progress in the workflow logs

Permissions

Only OWNER, MEMBER, or COLLABORATOR users can trigger my responses. This ensures secure and appropriate usage.


This message was automatically added to help you get started with the Gemini AI assistant. Feel free to delete this comment if you don't need assistance.

@github-actions
Contributor

🤖 Hi @olegbet, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.

@olegbet olegbet force-pushed the stone-prod-p02_ingester_querier_oomkilled branch from af9039e to 1e8d373 Compare January 29, 2026 13:07
@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

Pipeline failed due to a DNS resolution issue preventing the Prow job from connecting to the OpenShift cluster API server.

📋 Technical Details

Immediate Cause

The must-gather, gather-extra, and redhat-appstudio-gather steps failed because they could not resolve the DNS for the Kubernetes API server hostname api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com. This resulted in dial tcp: lookup ...: no such host errors.

Contributing Factors

The appstudio-e2e-tests/redhat-appstudio-e2e step was terminated prematurely by a termination signal. This is likely a secondary effect of the underlying infrastructure instability, possibly due to timeouts caused by the inability to communicate with the cluster. The supplemental context from the must-gather artifact confirms the pervasive DNS resolution failures.

Impact

The inability to resolve the cluster's API server hostname prevented the Prow job from collecting essential audit logs and diagnostic data. This fundamental infrastructure failure also led to the premature termination of the e2e test execution, blocking the successful completion of the job.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The must-gather tool failed to resolve the DNS for the Kubernetes API server hostname api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com, preventing it from connecting and collecting audit logs.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 3
[must-gather      ] OUT 2026-01-29T13:20:57.689731705Z Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 18
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 24
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 36
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 70
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 80
error running backup collection: Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 88
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The failure is due to a DNS resolution issue preventing the system from reaching the Kubernetes API server. This could be caused by network misconfiguration, DNS server problems, or issues with the cluster's internal networking.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 3
E0129 13:20:44.368907      31 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 9
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: The oc adm must-gather command failed due to network connectivity issues, specifically i/o timeout errors when attempting to reach the OpenShift API server at api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443. This prevented the collection of necessary diagnostic data.

Logs:

artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 20
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 24
Falling back to 'oc adm inspect clusteroperators.v1.config.openshift.io' to collect basic cluster information.
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 25
E0129 13:12:17.834318      54 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 31
E0129 13:12:47.849914      54 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 38
E0129 13:13:17.867377      54 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 41
error running backup collection: Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 42
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The mage process was terminated prematurely by an external signal, likely due to a timeout or an issue with the underlying infrastructure executing the job.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 450
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:173","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2026-01-29T13:07:13Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 452
make: *** [Makefile:25: ci/test/e2e] Terminated
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 454
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-01-29T13:07:28Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 456
{"component":"entrypoint","error":"os: process already finished","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:269","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2026-01-29T13:07:28Z"}

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The oc commands are failing because they cannot resolve the hostname of the Kubernetes API server. This indicates a network or DNS configuration problem preventing access to the cluster.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-gather/log.txt line 20
E0129 13:21:59.308972      30 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/log.txt line 3786
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/log.txt line 7712
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-bvm48.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

Analysis powered by prow-failure-analysis | Build: 2016855047408717824
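Worth noting: the two failure signatures in the evidence above point at different layers. "no such host" is a DNS lookup failure at the in-cluster resolver (172.30.0.10), while "i/o timeout" is a connection that resolved but never completed. A minimal triage sketch for the DNS side, using `getent` (the hostname below is a placeholder, not the real cluster API):

```shell
# Classify a lookup the same way the errors above read: getent exits
# nonzero when the name cannot be resolved ("no such host").
api_host="api.example.invalid"   # placeholder; the .invalid TLD never resolves
if getent hosts "$api_host" >/dev/null 2>&1; then
  dns_status="resolves"
else
  dns_status="no such host"
fi
echo "$api_host: $dns_status"
```

If the name resolves but connections still time out, the problem is at the network path or firewall layer rather than DNS.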

@rh-hemartin
Contributor

rh-hemartin commented Jan 29, 2026

Change it in all clusters. You can use something like:

parallel cp components/vector-kubearchive-log-collector/production/stone-prod-p02/loki-helm-prod-values.yaml  ::: components/vector-kubearchive-log-collector/production/*

to copy the config to all clusters; then remove the file from the base and empty-base folders.

@rh-hemartin
Contributor

/lgtm

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Timeout

The Red Hat AppStudio end-to-end tests timed out, preventing the successful completion of the Prow job.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step failed due to a timeout. The process exceeded the allocated runtime of 2 hours and did not exit gracefully within the subsequent 15-second grace period.

Contributing Factors

Analysis of the cluster state reveals several Argo CD Applications and ApplicationSets in an "OutOfSync" or "Missing" state. This indicates potential configuration drift or instability within the deployed applications, which could lead to increased resource consumption or delays during the execution of end-to-end tests that rely on these services. Specific examples include tekton-kueue-webhook-rolebinding and squid applications being out of sync, and some ApplicationSets having applications in a "Missing" state.

Impact

The timeout of the end-to-end tests directly blocked the progression of the Prow job, preventing any subsequent steps from executing and leading to an overall job failure. This hinders the verification of the Red Hat AppStudio infrastructure deployment.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The end-to-end tests for Red Hat AppStudio exceeded the allocated runtime, leading to a timeout failure. The specific reason for the extended execution time is not evident from the provided logs.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 671
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-01-29T17:19:45Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 672
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-01-29T17:20:00Z"}

Analysis powered by prow-failure-analysis | Build: 2016893289873018880

@olegbet olegbet changed the title ingester and querier pods are getting OOMKilled on stone-prod-p02 WIP: ingester and querier pods are getting OOMKilled on stone-prod-p02 Feb 2, 2026
@olegbet olegbet force-pushed the stone-prod-p02_ingester_querier_oomkilled branch from 1e8d373 to 8f5b5fd Compare February 2, 2026 15:30
@openshift-ci openshift-ci bot removed the lgtm label Feb 2, 2026
@openshift-ci

openshift-ci bot commented Feb 2, 2026

New changes are detected. LGTM label has been removed.

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Timeout

The Prow job appstudio-e2e-tests timed out during test execution, preventing the completion of the end-to-end test suite.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step in the Prow job exceeded its allocated 2-hour timeout. The process did not complete within the expected timeframe, leading to its termination.

Contributing Factors

While the direct cause is a timeout, the additional_context reveals potential contributing factors. Several Argo CD applications are in a degraded state (build-service-in-cluster-local), and numerous ApplicationSet resources are reporting OutOfSync and Missing health statuses. These conditions suggest that the cluster's state might not be optimal, potentially leading to extended execution times for the end-to-end tests as they interact with these resources. The tektonaddons.json also indicates that the Tekton Addon is not fully ready, with tkn-cli-serve deployment not ready, which might indirectly affect test execution.

Impact

The timeout prevented the successful execution and completion of the redhat-appstudio-e2e test suite. This means that the end-to-end validation of the AppStudio infrastructure could not be performed, blocking the successful completion of the Prow job and potentially delaying the integration of changes.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The test execution timed out. This could be due to the tests taking longer than expected to complete, or an issue with the test environment preventing them from finishing within the allocated time.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log:1233
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-02-02T17:34:19Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log:1235
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-02T17:34:34Z"}

Analysis powered by prow-failure-analysis | Build: 2018346197860749312

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Timeout

The appstudio-e2e-tests job timed out due to underlying issues with Argo CD application synchronization and Tekton component readiness, preventing the E2E tests from completing within the allocated time.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step timed out after exceeding the 2-hour execution limit. This timeout indicates that the processes being run by the step, which are expected to complete within this timeframe, did not finish.

Contributing Factors

Several factors within the cluster likely contributed to the extended execution time:

  • Argo CD Synchronization Issues: Multiple Argo CD applications and ApplicationSets were found in an OutOfSync or Degraded state. Specifically, the build-service-in-cluster-local application is Degraded because its deployment exceeded its progress deadline, and other key ApplicationSets like application-api and build-service are OutOfSync. This suggests that the desired state defined in Git is not being correctly applied or synchronized within the cluster.
  • Tekton Component Readiness: The tektonconfigs.json artifact shows Error conditions for ComponentsReady and Ready types, indicating that TektonAddon components are not in a ready state and require reconciliation. Similarly, tektonaddons.json notes a false InstallerSetReady condition due to the tkn-cli-serve deployment not being ready. These issues with Tekton's core components could impede the execution of pipelines and related processes.

Impact

The prolonged synchronization and deployment times, caused by the Argo CD and Tekton issues, prevented the E2E tests from executing and completing within the Prow job's timeout limit. This blocked the successful validation of the infrastructure deployment.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The end-to-end tests timed out because the underlying processes, likely related to Argo CD application synchronization and deployment, took too long to complete. This indicates potential instability or slowness in the test environment's resource provisioning or configuration.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/latest/build.log:1686
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-02-02T20:17:57Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/latest/build.log:1699
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-02T20:18:12Z"}

Analysis powered by prow-failure-analysis | Build: 2018384093615493120

@olegbet olegbet force-pushed the stone-prod-p02_ingester_querier_oomkilled branch from 4af92a4 to f597d37 Compare February 3, 2026 09:06
@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The Prow job failed due to infrastructure issues causing DNS resolution and network connectivity failures to the OpenShift API server, preventing the execution of e2e tests.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/gather-audit-logs, appstudio-e2e-tests/gather-extra, appstudio-e2e-tests/gather-must-gather, and appstudio-e2e-tests/redhat-appstudio-gather steps failed because the must-gather tool and oc client could not resolve the DNS name of the OpenShift API server (api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com). This resulted in "no such host" errors and network timeouts, indicating a breakdown in network communication with the cluster.

Contributing Factors

The redhat-appstudio-e2e test execution step was terminated with a "terminated" signal. While the exact cause of termination is not definitively stated, it is highly probable that this was a secondary effect of the underlying network and DNS resolution failures. The inability to communicate with the cluster would prevent tests from running and could lead to the test runner or entrypoint being interrupted.

Impact

The DNS resolution and network connectivity failures prevented the successful collection of diagnostic logs and cluster information by the must-gather and oc commands in multiple steps. This ultimately blocked the execution of the main e2e tests, rendering the job unable to complete its intended validation and testing procedures.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The must-gather tool and subsequent oc adm inspect command failed due to a DNS resolution error when trying to connect to the OpenShift API server. This indicates a network or infrastructure problem preventing proper communication with the cluster.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 4
[must-gather      ] OUT 2026-02-03T10:42:14.979853172Z Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 16
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 21
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 28
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 39
E0203 10:42:45.020120      51 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 42
E0203 10:42:45.044372      51 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 55
error running backup collection: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 68
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 73
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 77
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The failure is caused by a DNS resolution error preventing the e2e test environment from connecting to the Kubernetes API server. This indicates a problem with the network configuration or the DNS service within the test environment.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 4
E0203 10:42:07.515460      29 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 16
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: The failure is caused by network connectivity issues, specifically timeouts and DNS resolution failures, when the must-gather tool attempts to communicate with the OpenShift API server.

Logs:

artifacts/appstudio-e2e-tests/gather-must-gather/appstudio-e2e-tests-gather-must-gather.log line 13
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/appstudio-e2e-tests-gather-must-gather.log line 20
E0203 10:41:53.436613      55 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/appstudio-e2e-tests-gather-must-gather.log line 22
E0203 10:41:53.447263      55 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/appstudio-e2e-tests-gather-must-gather.log line 37
error running backup collection: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/appstudio-e2e-tests-gather-must-gather.log line 39
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The CI job was terminated unexpectedly, likely due to an external signal or resource exhaustion in the execution environment, causing the make command to be interrupted.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 611
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:173","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2026-02-03T10:36:39Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 613
make: *** [Makefile:25: ci/test/e2e] Terminated
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 616
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-03T10:36:54Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 617
{"component":"entrypoint","error":"os: process already finished","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:269","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2026-02-03T10:36:54Z"}

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The oc command failed to resolve the Kubernetes API server's hostname, indicating a network or DNS issue preventing communication with the cluster.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-gather/console.log line 148
E0203 10:42:52.079682      34 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/console.log line 2963
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/console.log line 5838
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-txlc2.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

Analysis powered by prow-failure-analysis | Build: 2018612157653979136

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The pipeline failed due to persistent DNS resolution errors preventing essential diagnostic and test steps from connecting to the OpenShift API server, indicating a critical infrastructure network configuration issue.

📋 Technical Details

Immediate Cause

Multiple infrastructure steps (gather-audit-logs, gather-extra, gather-must-gather, redhat-appstudio-gather) failed due to the inability to resolve the hostname api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com using the DNS server 172.30.0.10. This resulted in repeated "no such host" errors, preventing these tools from interacting with the OpenShift API.

Contributing Factors

The redhat-appstudio-e2e test step was terminated due to an external interrupt and a subsequent grace period violation. While this is a distinct failure event, it is highly probable that the underlying network instability, which prevented the diagnostic tools from functioning, also contributed to or directly caused the termination of the test execution process. The consistent DNS resolution failures across multiple steps suggest a systemic environmental problem rather than an isolated incident.

Impact

The inability to resolve the cluster API endpoint prevented the successful execution of critical diagnostic data collection steps, such as must-gather and general cluster state gathering. Consequently, these failures would have blocked any subsequent testing or validation that relies on accurate cluster information. The termination of the main e2e test execution step directly halted the pipeline's progress.
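The "no such host" errors quoted below originate at the resolver layer, before any TCP connection to the API server is attempted. A minimal Python sketch (the hostnames are illustrative, not the real cluster endpoints) shows how that DNS failure mode is distinguishable from an ordinary connection error:

```python
import socket

def classify_connect_failure(host: str, port: int = 6443) -> str:
    """Distinguish a DNS lookup failure ("no such host") from a resolvable name."""
    try:
        socket.getaddrinfo(host, port)
    except socket.gaierror:
        # Same failure mode as Go's "dial tcp: lookup <host> ...: no such host"
        return "dns-failure"
    return "resolved"

# A name under the reserved .invalid TLD (RFC 2606) never resolves,
# mirroring the lookup errors in the CI logs.
print(classify_connect_failure("api.cluster.invalid"))  # dns-failure
print(classify_connect_failure("localhost"))            # resolved
```

A name that resolves but whose endpoint is unreachable would instead surface later, as the `i/o timeout` errors seen in some of the gather-must-gather logs.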

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The failure is caused by a DNS resolution error. The must-gather tool cannot resolve the hostname of the OpenShift API server, indicating a network or DNS configuration problem within the environment.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 4
[must-gather      ] OUT 2026-02-03T12:37:17.967937987Z Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 16
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 22
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 29
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 54
error running backup collection: Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 74
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The gather-extra step failed because it could not resolve the DNS for the Kubernetes API server. This suggests a network configuration issue or a problem with the DNS service within the cluster environment.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/log.txt line 3
E0203 12:35:49.326306      29 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/log.txt line 9
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: The failure is caused by network connectivity issues or DNS resolution problems preventing the must-gather tool from reaching the OpenShift API server, leading to repeated timeouts and host lookup failures.

Logs:

artifacts/appstudio-e2e-tests/gather-must-gather/build-log.txt line 19
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/build-log.txt line 25
E0203 12:34:46.527925      41 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/build-log.txt line 31
E0203 12:35:16.528640      41 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/build-log.txt line 48
error running backup collection: Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/build-log.txt line 50
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The test execution was terminated due to an external interrupt signal received by the entrypoint process, which exceeded the grace period for termination. This indicates a potential issue with the testing environment or the orchestrator managing the test process.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt:1014
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:173","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2026-02-03T12:29:36Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt:1016
make: *** [Makefile:25: ci/test/e2e] Terminated
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt:1020
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-03T12:29:51Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt:1022
{"component":"entrypoint","error":"os: process already finished","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:269","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2026-02-03T12:29:51Z"}

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The oc command failed to connect to the Kubernetes API server due to a DNS resolution error ("no such host"). This indicates a problem with the network configuration or DNS server accessible by the pod running the oc commands.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-gather/gather.log line 17
E0203 12:37:25.000556      46 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/gather.log line 50
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/gather.log line 4621
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/gather.log line 4630
Falling back to 'oc adm inspect clusteroperators.v1.config.openshift.io' to collect basic cluster information.
E0203 12:37:26.378898    1131 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/gather.log line 4683
error running backup collection: Get "https://api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-hxp6w.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

Analysis powered by prow-failure-analysis | Build: 2018634739480530944

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Timeout

Pipeline failed due to a timeout in the e2e tests, likely caused by widespread cluster instability and resource reconciliation issues indicated by numerous Argo CD ApplicationSet failures.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step timed out after exceeding the 2-hour execution limit. The logs indicate that the process did not complete within the allowed timeframe, and a grace period also expired.

Contributing Factors

The additional_context reveals a systemic problem within the cluster. A significant number of Argo CD ApplicationSets are in an "OutOfSync" or "Missing" state across various components (e.g., application-api, crossplane-control-plane, enterprise-contract, integration, internal-services, knative-eventing, kubearchive, kueue, kyverno, monitoring-registry, pipeline-service, release, squid, tracing-workload-otel-collector, trust-manager). This widespread desynchronization suggests fundamental issues with resource deployment, reconciliation, or cluster health, which would naturally lead to prolonged test execution as components fail to become available or stable. Additionally, the tektonconfigs.json shows a Ready status of False with a message indicating components are not in a ready state, specifically referencing TektonAddon: reconcile again and proceed, further pointing to underlying infrastructure and configuration problems.

Impact

The timeout prevented the completion of the end-to-end tests for this Prow job. The underlying cluster instability, indicated by the numerous failing ApplicationSets and Tekton configurations, suggests that even if the tests had not timed out, they might have failed due to the non-operational state of critical components. This failure blocks the validation of infrastructure deployments and prevents merging of changes that rely on these tests.
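The failure pattern in the logs is a two-stage deadline: a hard 2-hour timeout on the step itself, followed by a 15-second grace period before the process is killed. A minimal sketch of that deadline-plus-grace logic (durations scaled down, function names hypothetical):

```python
import time

def run_with_deadline(poll, timeout_s: float, grace_s: float,
                      interval_s: float = 0.01) -> str:
    """Poll until `poll()` reports success or the deadline passes.

    Mirrors the entrypoint behaviour seen in the logs: a hard timeout on
    the work itself, then a short grace period before giving up entirely.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if poll():  # e.g. "is the ArgoCD application Synced and Healthy?"
            return "finished"
        time.sleep(interval_s)
    # Deadline passed: allow a brief grace period for cleanup, then report.
    time.sleep(grace_s)
    return "timed-out"

# An application stuck in "Degraded" never satisfies the poll,
# so the deadline fires, just as in the step.log excerpts below.
print(run_with_deadline(lambda: False, timeout_s=0.05, grace_s=0.01))  # timed-out
print(run_with_deadline(lambda: True, timeout_s=0.05, grace_s=0.01))   # finished
```

Under this model, a component that never reconciles (such as the Degraded build-service application) guarantees the outer timeout is hit regardless of how generous the limit is.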

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The e2e tests timed out because a process within the test execution did not complete within the allocated time, likely due to the extended duration of the test suite or underlying infrastructure issues.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log:1275
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-02-03T14:42:17Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log:1277
build-service-in-cluster-local                      Synced   Degraded
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log:1279
Waiting 10 seconds for application sync
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log:1281
build-service-in-cluster-local                      Synced   Degraded
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log:1283
Waiting 10 seconds for application sync
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log:1285
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-03T14:42:32Z"}

Analysis powered by prow-failure-analysis | Build: 2018663853700681728

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Timeout

The e2e tests timed out because ArgoCD applications failed to synchronize, likely due to degraded deployment health and unsynced ApplicationSets.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step failed due to a timeout. This timeout occurred because the underlying ArgoCD applications and ApplicationSets did not reach a synchronized state within the allocated execution window.

Contributing Factors

Analysis of the cluster state reveals that several ArgoCD ApplicationSets ('application-api', 'build-service', 'crossplane-control-plane') are in an OutOfSync state. Furthermore, the 'build-service-controller-manager' Deployment is reported as Degraded, indicating a critical failure in a core service responsible for managing deployments. Additionally, Tekton Addons and configurations show that console resources and the tkn-cli-serve deployment are not ready, which might impede deployment processes.

Impact

The failure of ArgoCD to synchronize its applications and ApplicationSets directly prevented the successful deployment and configuration of the necessary resources for the e2e tests to execute. This resulted in the test step exceeding its timeout limit and ultimately causing the job to fail.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The e2e tests failed due to a timeout. This was likely caused by the ArgoCD applications failing to synchronize within the allowed execution time, leading to the overall step exceeding its time limit.


Analysis powered by prow-failure-analysis | Build: 2018703962059837440

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The Appstudio E2E tests failed due to persistent DNS resolution errors preventing the job from connecting to the OpenShift API server, leading to the failure of diagnostic collection steps and eventual job termination.

📋 Technical Details

Immediate Cause

The immediate cause of the failure was the inability of various Prow job steps (gather-audit-logs, gather-extra, gather-must-gather, redhat-appstudio-gather) to resolve the hostname api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com via DNS. This resulted in dial tcp: lookup ...: no such host errors, preventing these tools from interacting with the OpenShift API server.

Contributing Factors

The appstudio-e2e-tests/redhat-appstudio-e2e step was terminated by the infrastructure (the entrypoint logged "Entrypoint received interrupt: terminated"). While the direct cause of that termination is not explicitly detailed, it is highly likely that the underlying network and DNS instability, which prevented other steps from functioning, created an environment that led to the job's premature shutdown. The inability to connect to the API server also caused i/o timeout errors in some gather-must-gather logs, further indicating network issues.

Impact

The DNS resolution failures prevented essential diagnostic information from being collected by the must-gather and other gathering tools. This lack of data hinders proper root cause analysis. More critically, the inability to communicate with the cluster's API server means that the core functionality being tested by the E2E tests could not even begin, rendering the test run ineffective and blocking the overall CI pipeline.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The failure is caused by a DNS resolution error preventing the must-gather tool from reaching the OpenShift API server. The tool is unable to look up the hostname api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on the DNS server 172.30.0.10:53.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 4
[must-gather      ] OUT 2026-02-03T21:06:40.438595105Z Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 13
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 19
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 27
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 50
error running backup collection: Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 67
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 73
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/must-gather.log line 80
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The step failed due to a DNS resolution error when attempting to connect to the Kubernetes API server. The hostname api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com could not be found, indicating a network or DNS configuration problem within the cluster environment.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 3
E0203 21:06:32.892561      28 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 10
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: The failure is caused by network connectivity issues, specifically timeouts when trying to connect to the OpenShift API server, and DNS resolution failures for the API server's hostname.

Logs:

artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 14
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 23
E0203 21:05:56.172400      52 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 26
E0203 21:05:56.197120      52 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 40
error running backup collection: Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 42
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The CI job was terminated prematurely by the infrastructure, likely due to a timeout or an external signal, preventing the entrypoint process from shutting down gracefully.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/artifacts/e2e.log line 70
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:173","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2026-02-03T21:00:45Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/artifacts/e2e.log line 71
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-03T21:01:00Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/artifacts/e2e.log line 72
{"component":"entrypoint","error":"os: process already finished","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:269","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2026-02-03T21:01:00Z"}

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The failure is caused by a network issue, specifically a DNS resolution error preventing the oc client from connecting to the Kubernetes API server. The hostname api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com could not be found.

Logs:

artifacts/redhat-appstudio-gather/build-log.txt line 25
E0203 21:06:47.724292      46 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/redhat-appstudio-gather/build-log.txt line 5000
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/redhat-appstudio-gather/build-log.txt line 5015
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-b7jsd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

Analysis powered by prow-failure-analysis | Build: 2018789967114801152

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Timeout

The end-to-end tests for AppStudio failed due to exceeding the allocated execution time, preventing the successful validation of the deployment.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step timed out after 2 hours, indicating that the end-to-end tests did not complete within the allowed duration. The process also failed to terminate gracefully within the subsequent 15-second grace period.

Contributing Factors

Several environmental issues were observed, including multiple ArgoCD ApplicationSets being in an 'OutOfSync' state and the 'build-service' application showing a 'Degraded' health status. Additionally, there are indications of TektonAddon and TektonConfig not being fully ready, which could impact test execution or the environment the tests run against. The exact reason for the prolonged test execution is not definitively identified but could stem from these underlying cluster configuration or resource issues, leading to slow test progress or stalled waits.

Impact

The timeout failure prevented the completion of the end-to-end test suite. This means the current deployment has not been validated, and any potential issues introduced by the pull request remain undetected, blocking the merge of the change.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The end-to-end tests failed due to exceeding the allocated timeout limit. This could be due to a variety of reasons, including test flakiness, environment issues, or the tests themselves taking longer than expected.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 585
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:169","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2026-02-03T23:12:35Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 589
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-03T23:12:50Z"}

Analysis powered by prow-failure-analysis | Build: 2018791898897977344

@olegbet olegbet force-pushed the stone-prod-p02_ingester_querier_oomkilled branch from b737389 to 1010bec Compare February 4, 2026 08:23
@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The pipeline failed due to a DNS resolution failure preventing essential infrastructure gathering steps from connecting to the OpenShift API server.

📋 Technical Details

Immediate Cause

Multiple infrastructure gathering steps, including gather-audit-logs, gather-extra, gather-must-gather, and redhat-appstudio-gather, failed because they could not resolve the DNS name (api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com) of the OpenShift API server. This resulted in repeated "dial tcp: lookup ... no such host" errors.

Contributing Factors

The redhat-appstudio-e2e step subsequently timed out, indicated by an "Entrypoint received interrupt: terminated" message and a "Process did not exit before 15s grace period" error. This timeout is likely a consequence of the preceding infrastructure gathering failures, as the test execution environment may have been unable to establish necessary connections or collect prerequisite data due to the DNS resolution issue.

Impact

The DNS resolution failure in critical infrastructure gathering steps prevented the pipeline from collecting necessary diagnostic information and likely contributed to the eventual timeout of the main e2e test execution. This blocked the successful completion of the end-to-end test suite for the AppStudio infrastructure.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The must-gather tool failed to resolve the DNS name of the OpenShift API server, indicating a network or DNS configuration issue within the cluster or the environment from which the tool is running.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 4
[must-gather      ] OUT 2026-02-04T08:29:27.545712505Z Get "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 18
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 25
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 35
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 68
error running backup collection: Get "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 81
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 88
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/gather-audit-logs.log line 98
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
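The logs above mix two distinct Go dial errors: "no such host" (the resolver could not map the hostname to an address) and "i/o timeout" (an address was obtained but the endpoint never answered). A minimal, hypothetical triage helper — not part of any Konflux or Prow tooling — can separate the two mechanically when scanning logs like these:

```python
# Classify Go "dial tcp" error strings, like those in the gather logs,
# into DNS failures vs. connectivity failures. Hypothetical log-triage
# helper; the sample messages below are illustrative, not real hosts.

def classify_dial_error(message: str) -> str:
    """Return 'dns', 'timeout', or 'other' for a Go net dial error string."""
    if "no such host" in message:
        return "dns"        # name never resolved (NXDOMAIN / empty answer)
    if "i/o timeout" in message:
        return "timeout"    # name resolved, but TCP connect never completed
    return "other"


if __name__ == "__main__":
    samples = [
        'dial tcp: lookup api.example.test on 172.30.0.10:53: no such host',
        'dial tcp 203.0.113.7:6443: i/o timeout',
    ]
    for s in samples:
        print(classify_dial_error(s))
```

The distinction matters for triage: a cluster whose API hostname stops resolving mid-run (e.g. after deprovisioning) produces "no such host", while a reachable-but-dead endpoint produces timeouts.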

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The step failed because it could not resolve the DNS name of the cluster API server. This indicates a problem with the network or DNS configuration within the CI environment or the cluster itself.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 3
E0204 08:29:19.286840      29 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 15
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: The failure is caused by network connectivity issues, specifically I/O timeouts and DNS resolution failures when attempting to connect to the OpenShift API server.

Logs:

artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 19
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 25
E0204 08:29:06.020262      53 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 26
E0204 08:29:06.034632      53 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/run.log line 37
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The mage process was terminated due to an external interrupt or timeout, preventing the e2e tests from completing successfully.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/e2e.log line 789
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:173","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2026-02-04T08:23:59Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/e2e.log line 790
make: *** [Makefile:25: ci/test/e2e] Terminated
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/e2e.log line 791
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-04T08:24:14Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/e2e.log line 792
{"component":"entrypoint","error":"os: process already finished","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:269","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2026-02-04T08:24:14Z"}

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The oc command failed to connect to the Kubernetes API server due to a DNS lookup failure for the API server address. This indicates a network or DNS configuration issue within the cluster environment that prevents the CI job from reaching the necessary services.

Logs:

artifacts/appstudio-e2e-tests-redhat-appstudio-gather/redhat-appstudio-gather.log
E0204 08:29:34.782970     149 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests-redhat-appstudio-gather/redhat-appstudio-gather.log
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests-redhat-appstudio-gather/redhat-appstudio-gather.log
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-llnkd.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

Analysis powered by prow-failure-analysis | Build: 2018959116608737280

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The Prow job failed because multiple infrastructure steps could not resolve the Kubernetes API server's DNS name, preventing connectivity to the cluster.

📋 Technical Details

Immediate Cause

Several infrastructure-related steps, including gather-audit-logs, gather-extra, gather-must-gather, and redhat-appstudio-gather, failed due to persistent DNS resolution errors when attempting to connect to the Kubernetes API server (api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com). The logs consistently show a "dial tcp: lookup ... on 172.30.0.10:53: no such host" error.

Contributing Factors

The redhat-appstudio-e2e step also failed, but with a configuration category. Its root cause is a missing patch file (base/ingester-pvc-retention-patch.yaml). However, the error message indicates that this configuration issue occurred during manifest generation for the vector-kubearchive-log-collector component, which likely requires cluster communication. Given the widespread DNS resolution failures preceding this step, it's probable that this configuration error is a downstream effect of the network connectivity issues rather than an independent failure.

Impact

The inability to resolve the Kubernetes API server's DNS name prevented the E2E test job from performing essential data collection and cluster interaction tasks. This fundamental connectivity issue blocked the successful execution of the gather steps and likely contributed to the subsequent failure of the main E2E test execution, rendering the job unable to complete its intended purpose of validating the AppStudio deployment.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The must-gather tool failed to resolve the DNS name of the Kubernetes API server, preventing it from connecting to the cluster to collect logs. This is likely due to a network configuration issue or a problem with the DNS service within the cluster environment.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 4
[must-gather      ] OUT 2026-02-04T09:18:37.911038766Z Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 13
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 17
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 26
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 46
error running backup collection: Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 62
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The failure is caused by a DNS resolution error, preventing the gather-extra step from connecting to the API server to collect artifacts. The specified hostname could not be found by the DNS server.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 4
E0204 09:18:30.058688      27 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 12
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: The must-gather command failed because of network connectivity issues, manifesting as I/O timeouts and DNS resolution failures when attempting to connect to the OpenShift API server.

Logs:

artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 20
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 25
E0204 09:17:52.002799      53 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 27
E0204 09:17:52.014570      53 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 36
E0204 09:18:22.027298      53 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 41
error running backup collection: Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 42
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout

appstudio-e2e-tests/redhat-appstudio-e2e

Category: configuration
Root Cause: The failure is caused by kustomize failing to build the manifest for the vector-kubearchive-log-collector component due to a missing patch file (base/ingester-pvc-retention-patch.yaml) in its source path.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/build-log.txt line 857
vector-kubearchive-log-collector   Unknown   Healthy
vector-kubearchive-log-collector failed with:
[{"lastTransitionTime":"2026-02-04T08:50:31Z","message":"Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = 'kustomize build \u003cpath to cached source\u003e/components/vector-kubearchive-log-collector/development --enable-helm ... --helm-api-versions v1/ServiceAccount --helm-api-versions whereabouts.cni.cncf.io/v1alpha1 --helm-api-versions whereabouts.cni.cncf.io/v1alpha1/IPPool --helm-api-versions whereabouts.cni.cncf.io/v1alpha1/NodeSlicePool --helm-api-versions whereabouts.cni.cncf.io/v1alpha1/OverlappingRangeIPReservation' failed exit status 1: Error: trouble configuring builtin PatchTransformer with config: '\npath: base/ingester-pvc-retention-patch.yaml\ntarget:\n  kind: StatefulSet\n  name: loki-ingester\n': failed to get the patch file from path(base/ingester-pvc-retention-patch.yaml): evalsymlink failure on '\u003cpath to cached source\u003e/components/vector-kubearchive-log-collector/development/base/ingester-pvc-retention-patch.yaml' : lstat \u003cpath to cached source\u003e/components/vector-kubearchive-log-collector/development/base: no such file or directory","type":"ComparisonError"}]
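The kustomize error above aborts the build because a PatchTransformer's `path:` points at a file (base/ingester-pvc-retention-patch.yaml) that is absent from the fetched source tree. A cheap pre-flight check can catch this before `kustomize build` runs; the sketch below is a hypothetical helper that naively scans a kustomization.yaml for `path:` entries and reports any that do not exist relative to the kustomization directory (it does not parse full YAML semantics):

```python
# Pre-flight check for missing patch files referenced by a
# kustomization.yaml. Hypothetical helper: a naive line scan for
# `path:` entries, not a full YAML parse.

from pathlib import Path
import re

PATH_RE = re.compile(r"^\s*-?\s*path:\s*(\S+)\s*$")

def missing_patch_files(kustomization_dir: str) -> list[str]:
    """Return referenced `path:` values that do not exist on disk."""
    base = Path(kustomization_dir)
    text = (base / "kustomization.yaml").read_text()
    missing = []
    for line in text.splitlines():
        m = PATH_RE.match(line)
        if m and not (base / m.group(1)).exists():
            missing.append(m.group(1))
    return missing
```

Run against the component directory, such a check would have flagged base/ingester-pvc-retention-patch.yaml as missing before Argo CD ever attempted manifest generation.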

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The CI job is unable to resolve the DNS name of the Kubernetes API server. This suggests a network configuration issue or an issue with the cluster's DNS services, preventing the 'oc' client from connecting to the cluster.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/ci/build-log.txt line 77
E0204 09:18:45.008196      28 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/ci/build-log.txt line 241
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/ci/build-log.txt line 1129
Error running must-gather collection: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/ci/build-log.txt line 1129
error running backup collection: Get "https://api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-dnm8s.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

Analysis powered by prow-failure-analysis | Build: 2018963738836602880

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The pipeline failed due to a persistent DNS resolution failure preventing access to the OpenShift API server, which blocked the execution of end-to-end tests.

📋 Technical Details

Immediate Cause

Multiple steps within the appstudio-e2e-tests job, including gather-audit-logs, gather-extra, gather-must-gather, and redhat-appstudio-gather, failed because the underlying CI environment could not resolve the DNS hostname (api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com) for the OpenShift API server. This resulted in repeated "dial tcp: lookup ... on 172.30.0.10:53: no such host" errors.

Contributing Factors

The redhat-appstudio-e2e step was subsequently terminated with a timeout. This is likely a consequence of the network and DNS issues preventing the successful completion of prerequisite setup and data collection steps. The inability to reach the API server would stall or indefinitely delay the test execution, leading to the termination signal.

Impact

The DNS resolution failure prevented the collection of critical diagnostic information via must-gather and related tools. This fundamentally blocked the execution of the end-to-end tests, rendering the pipeline unable to validate the deployed infrastructure and components.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The must-gather tool failed to resolve the hostname of the OpenShift API server due to a DNS lookup failure. This indicates a network or DNS configuration problem within the test environment.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 4
[must-gather      ] OUT 2026-02-04T09:51:10.326902773Z Get "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 18
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 23
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 36
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 61
error running backup collection: Get "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/log.txt line 93
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The CI environment failed to resolve the DNS name for the Kubernetes API server, indicating a potential networking or DNS configuration issue within the cluster or the CI runner's environment.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 4
E0204 09:51:04.976285      27 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 15
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: The oc adm must-gather command failed due to network connectivity issues, specifically "i/o timeout" and "no such host" errors when attempting to reach the cluster API. This prevented the collection of essential diagnostic data.

Logs:

artifacts/appstudio-e2e-tests/gather-must-gather/build-log.txt line 15
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/build-log.txt line 25
E0204 09:50:51.131279      55 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/build-log.txt line 39
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout

appstudio-e2e-tests/redhat-appstudio-e2e

Category: timeout
Root Cause: The make ci/test/e2e command was terminated by an external interrupt, most likely a job timeout, after prolonged setup work that included numerous Go module downloads and environment configuration.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 469
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:173","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2026-02-04T09:46:20Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 472
make: *** [Makefile:25: ci/test/e2e] Terminated
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 475
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-04T09:46:35Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/step.log line 478
{"component":"entrypoint","error":"os: process already finished","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:269","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2026-02-04T09:46:35Z"}

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The oc command is unable to resolve the DNS for the Kubernetes API server endpoint, indicating a network or DNS configuration problem. This prevents the step from gathering necessary cluster information.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-gather/log.txt line 20
E0204 09:51:18.250633     177 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/log.txt line 2781
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/log.txt line 4282
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/log.txt line 5904
error running backup collection: Get "https://api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-s9v7m.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

Analysis powered by prow-failure-analysis | Build: 2018976146661576704

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The Prow job failed due to an infrastructure issue where the test environment could not resolve the DNS hostname for the cluster API endpoint, preventing essential diagnostic data collection and subsequent test execution.

📋 Technical Details

Immediate Cause

Multiple infrastructure-related steps (gather-audit-logs, gather-extra, gather-must-gather, redhat-appstudio-gather) failed because they were unable to resolve the DNS hostname api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com. This is indicated by repeated dial tcp: lookup ... on 172.30.0.10:53: no such host errors.

Contributing Factors

The inability to resolve the cluster API endpoint's DNS name suggests a network configuration problem within the test environment, specifically related to DNS resolution services or network accessibility to the DNS server. Some logs also show i/o timeout errors when attempting to connect to IP addresses associated with the API, which could be a secondary effect of the DNS issue or a separate network connectivity problem.

Impact

The failure to resolve the cluster API endpoint prevented the must-gather and other data collection tools from obtaining critical cluster information, such as cluster version and operators. This diagnostic data is essential for debugging the end-to-end tests. The redhat-appstudio-e2e test step itself was terminated, likely due to the overall failure of preceding infrastructure steps or a timeout resulting from the inability to establish a proper connection to the cluster.

🔍 Evidence

appstudio-e2e-tests/gather-audit-logs

Category: infrastructure
Root Cause: The must-gather tool is unable to resolve the DNS hostname for the cluster API endpoint, likely due to a DNS or network configuration issue within the environment.

Logs:

artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 4
[must-gather      ] OUT 2026-02-04T10:21:16.917924419Z Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift/imagestreams/must-gather": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 23
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 27
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 31
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 52
error getting cluster version: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusterversions/version": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 56
error getting cluster operators: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/apis/config.openshift.io/v1/clusteroperators": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-audit-logs/build-log.txt line 64
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-extra

Category: infrastructure
Root Cause: The step failed because it could not resolve the DNS name of the Kubernetes API server. This indicates a problem with the network or DNS configuration in the environment where the gather-extra step is executing.

Logs:

artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 4
E0204 10:21:08.368064      29 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-extra/gather-extra.log line 10
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

appstudio-e2e-tests/gather-must-gather

Category: infrastructure
Root Cause: The failure is due to network connectivity issues or an unresponsive Kubernetes API server, preventing the must-gather tool from collecting cluster information. This is evidenced by repeated "i/o timeout" and "no such host" errors when attempting to reach the cluster API.

Logs:

artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 13
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 19
E0204 10:20:20.797398      53 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 21
E0204 10:20:50.798681      53 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 23
E0204 10:20:50.813312      53 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 35
error running backup collection: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/gather-must-gather/log.txt line 36
error: creating temp namespace: Post "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp [REDACTED: Public IP (ipv4)]: i/o timeout

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The e2e tests were terminated by an external signal, most likely a timeout or abort from the CI system managing the job, which stopped the make command before the tests completed.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/log.txt line 301
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:173","func":"sigs.k8s.io/prow/pkg/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2026-02-04T10:15:17Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/log.txt line 304
make: *** [Makefile:25: ci/test/e2e] Terminated
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/log.txt line 306
{"component":"entrypoint","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:267","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","severity":"error","time":"2026-02-04T10:15:32Z"}
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/log.txt line 309
{"component":"entrypoint","error":"os: process already finished","file":"sigs.k8s.io/prow/pkg/entrypoint/run.go:269","func":"sigs.k8s.io/prow/pkg/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2026-02-04T10:15:32Z"}
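The entrypoint lines above show Prow's shutdown pattern: SIGTERM first, a grace period, then a hard kill if the process survives. A minimal sketch of that pattern (assuming a POSIX system with a `sleep` binary; not Prow's actual implementation):

```python
import signal
import subprocess

def terminate_with_grace(proc: subprocess.Popen, grace_s: float) -> str:
    """Send SIGTERM, wait up to grace_s seconds, then SIGKILL any survivor."""
    proc.send_signal(signal.SIGTERM)   # request graceful shutdown
    try:
        proc.wait(timeout=grace_s)     # give the process time to clean up
        return "exited within grace period"
    except subprocess.TimeoutExpired:
        proc.kill()                    # grace period elapsed: hard kill
        proc.wait()
        return "killed after grace period"

child = subprocess.Popen(["sleep", "300"])       # stand-in for the test process
print(terminate_with_grace(child, grace_s=5.0))  # → exited within grace period
```

A process that ignores SIGTERM (as the e2e process above apparently did within its 15s window) ends up on the `proc.kill()` branch instead.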

appstudio-e2e-tests/redhat-appstudio-gather

Category: infrastructure
Root Cause: The job failed because it could not resolve the DNS name of the Kubernetes API server, preventing it from connecting to the cluster. This suggests a network or DNS configuration issue within the test environment.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/step.log line 53
E0204 10:21:53.984429      53 memcache.go:265] couldn't get current server API group list: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api?timeout=5s": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/step.log line 1162
Unable to connect to the server: dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/step.log line 1735
Error running must-gather collection:
    creating temp namespace: Post "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api/v1/namespaces": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host
artifacts/appstudio-e2e-tests/redhat-appstudio-gather/logs/step.log line 2590
error running backup collection: Get "https://api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com:6443/api?timeout=32s": dial tcp: lookup api.konflux-4-17-us-west-2-5qf9l.konflux-qe.devcluster.openshift.com on 172.30.0.10:53: no such host

Analysis powered by prow-failure-analysis | Build: 2018984464213872640

@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The E2E test pipeline failed to bootstrap the cluster due to a configuration error during the kustomize build of the vector-kubearchive-log-collector Argo CD application: a replace patch targets a persistentVolumeReclaimPolicy path that does not exist in the target manifest.

📋 Technical Details

Immediate Cause

The appstudio-e2e-tests/redhat-appstudio-e2e step failed because the kustomize build for the vector-kubearchive-log-collector Argo CD application encountered an error. The resulting ComparisonError during manifest generation shows that a replace patch operation could not be applied because the target document is missing the path /spec/persistentVolumeReclaimPolicy/whenScaled, which prevents the configuration from being applied.

Contributing Factors

The additional_context reveals that the vector-kubearchive-log-collector-in-cluster-local ApplicationSet in the openshift-gitops namespace has a ComparisonError during manifest generation, directly correlating with the step failure. Additionally, other ApplicationSets like "pipeline-service" and "trust-manager" are in an OutOfSync state, and "pipeline-service" is Missing its associated Application, suggesting a broader Argo CD synchronization issue that might be contributing to the unstable environment.

Impact

This infrastructure failure directly blocked the E2E test execution. The inability to successfully build and apply the necessary Kubernetes manifests for the vector-kubearchive-log-collector prevented the cluster from being properly bootstrapped. Consequently, the entire appstudio-e2e-tests job could not proceed, leading to its failure and preventing further validation of the Red Hat App Studio infrastructure.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The failure is caused by an error during the kustomize build process for the vector-kubearchive-log-collector Argo CD application: a replace patch targets the missing /spec/persistentVolumeReclaimPolicy/whenScaled path, which prevents the configuration from being applied and thus blocks cluster bootstrapping.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/run.log line 579
vector-kubearchive-log-collector-in-cluster-local failed with:
[{"lastTransitionTime":"2026-02-04T10:33:33Z","message":"Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = 'kustomize build \u003cpath to cached source\u003e/components/vector-kubearchive-log-collector/development --enable-helm --helm-kube-version 1.30 --helm-api-versions admissionregistration.k8s.io/v1 --helm-api-versions admissionregistration.k8s.io/v1/MutatingWebhookConfiguration --helm-api-versions admissionregistration.k8s.io/v1/ValidatingAdmissionPolicy --helm-api-versions admissionregistration.k8s.io/v1/ValidatingAdmissionPolicyBinding --helm-api-versions admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration --helm-api-versions admissionregistration.k8s.io/v1beta1 --helm-api-versions admissionregistration.k8s.io/v1beta1/ValidatingAdmissionPolicy --helm-api-versions admissionregistration.k8s.io/v1beta1/ValidatingAdmissionPolicyBinding --helm-api-versions apiextensions.crossplane.io/v1 --helm-api-versions apiextensions.crossplane.io/v1/CompositeResourceDefinition --helm-api-versions apiextensions.crossplane.io/v1/CompositionRevision --helm-api-versions apiextensions.crossplane.io/v2 --helm-api-versions apiextensions.crossplane.io/v2/CompositeResourceDefinition --helm-api-versions apiextensions.k8s.io/v1 --helm-api-versions apiextensions.k8s.io/v1/CustomResourceDefinition --helm-api-versions apiregistration.k8s.io/v1 --helm-api-versions apiregistration.k8s.io/v1/APIService --helm-api-versions apiserver.openshift.io/v1 --helm-api-versions apiserver.openshift.io/v1/APIRequestCount --helm-api-versions apps.openshift.io/v1 --helm-api-versions apps.openshift.io/v1/DeploymentConfig --helm-api-versions apps/v1 --helm-api-versions apps/v1/ControllerRevision --helm-api-versions apps/v1/DaemonSet --helm-api-versions apps/v1/Deployment --helm-api-versions apps/v1/ReplicaSet --helm-api-versions apps/v1/StatefulSet --helm-api-versions appstudio.redhat.com/v1alpha1 --helm-api-versions 
appstudio.redhat.com/v1alpha1/Application --helm-api-versions appstudio.redhat.com/v1alpha1/Component --helm-api-versions appstudio.redhat.com/v1alpha1/ComponentDetectionQuery --helm-api-versions appstudio.redhat.com/v1alpha1/DependencyUpdateCheck --helm-api-versions appstudio.redhat.com/v1alpha1/DeploymentTarget --helm-api-versions appstudio.redhat.com/v1alpha1/DeploymentTargetClaim --helm-api-versions appstudio.redhat.com/v1alpha1/DeploymentTargetClass --helm-api-versions appstudio.redhat.com/v1alpha1/EnterpriseContractPolicy --helm-api-versions appstudio.redhat.com/v1alpha1/Environment --helm-api-versions appstudio.redhat.com/v1alpha1/ImageRepository --helm-api-versions appstudio.redhat.com/v1alpha1/InternalRequest --helm-api-versions appstudio.redhat.com/v1alpha1/InternalServicesConfig --helm-api-versions appstudio.redhat.com/v1alpha1/PromotionRun --helm-api-versions appstudio.redhat.com/v1alpha1/Release --helm-api-versions appstudio.redhat.com/v1alpha1/ReleasePlan --helm-api-versions appstudio.redhat.com/v1alpha1/ReleasePlanAdmission --helm-api-versions appstudio.redhat.com/v1alpha1/ReleaseServiceConfig --helm-api-versions appstudio.redhat.com/v1alpha1/Snapshot --helm-api-versions appstudio.redhat.com/v1alpha1/SnapshotEnvironmentBinding --helm-api-versions argoproj.io/v1alpha1 --helm-api-versions argoproj.io/v1alpha1/AnalysisRun --helm-api-versions argoproj.io/v1alpha1/AnalysisTemplate --helm-api-versions argoproj.io/v1alpha1/AppProject --helm-api-versions argoproj.io/v1alpha1/Application --helm-api-versions argoproj.io/v1alpha1/ApplicationSet --helm-api-versions argoproj.io/v1alpha1/ArgoCD --helm-api-versions argoproj.io/v1alpha1/ClusterAnalysisTemplate --helm-api-versions argoproj.io/v1alpha1/Experiment --helm-api-versions argoproj.io/v1alpha1/NotificationsConfiguration --helm-api-versions argoproj.io/v1alpha1/Rollout --helm-api-versions argoproj.io/v1alpha1/RolloutManager --helm-api-versions argoproj.io/v1beta1 --helm-api-versions argoproj.io/v1beta1/ArgoCD 
--helm-api-versions authorization.openshift.io/v1 --helm-api-versions authorization.openshift.io/v1/RoleBindingRestriction --helm-api-versions autoscaling.openshift.io/v1 --helm-api-versions autoscaling.openshift.io/v1/ClusterAutoscaler --helm-api-versions autoscaling.openshift.io/v1beta1 --helm-api-versions autoscaling.openshift.io/v1beta1/MachineAutoscaler --helm-api-versions autoscaling/v1 --helm-api-versions autoscaling/v1/HorizontalPodAutoscaler --helm-api-versions autoscaling/v2 --helm-api-versions autoscaling/v2/HorizontalPodAutoscaler --helm-api-versions batch/v1 --helm-api-versions batch/v1/CronJob --helm-api-versions batch/v1/Job --helm-api-versions build.openshift.io/v1 --helm-api-versions build.openshift.io/v1/Build --helm-api-versions build.openshift.io/v1/BuildConfig --helm-api-versions certificates.k8s.io/v1 --helm-api-versions certificates.k8s.io/v1/CertificateSigningRequest --helm-api-versions cloud.network.openshift.io/v1 --helm-api-versions cloud.network.openshift.io/v1/CloudPrivateIPConfig --helm-api-versions cloudcredential.openshift.io/v1 --helm-api-versions cloudcredential.openshift.io/v1/CredentialsRequest --helm-api-versions config.openshift.io/v1 --helm-api-versions config.openshift.io/v1/APIServer --helm-api-versions config.openshift.io/v1/Authentication --helm-api-versions config.openshift.io/v1/Build --helm-api-versions config.openshift.io/v1/ClusterOperator --helm-api-versions config.openshift.io/v1/ClusterVersion --helm-api-versions config.openshift.io/v1/Console --helm-api-versions config.openshift.io/v1/DNS --helm-api-versions config.openshift.io/v1/FeatureGate --helm-api-versions config.openshift.io/v1/Image --helm-api-versions config.openshift.io/v1/ImageContentPolicy --helm-api-versions config.openshift.io/v1/ImageDigestMirrorSet --helm-api-versions config.openshift.io/v1/ImageTagMirrorSet --helm-api-versions config.openshift.io/v1/Infrastructure --helm-api-versions config.openshift.io/v1/Ingress --helm-api-versions 
config.openshift.io/v1/Network --helm-api-versions config.openshift.io/v1/Node --helm-api-versions config.openshift.io/v1/OAuth --helm-api-versions config.openshift.io/v1/OperatorHub --helm-api-versions config.openshift.io/v1/Project --helm-api-versions config.openshift.io/v1/Proxy --helm-api-versions config.openshift.io/v1/Scheduler --helm-api-versions console.openshift.io/v1 --helm-api-versions console.openshift.io/v1/ConsoleCLIDownload --helm-api-versions console.openshift.io/v1/ConsoleExternalLogLink --helm-api-versions console.openshift.io/v1/ConsoleLink --helm-api-versions console.openshift.io/v1/ConsoleNotification --helm-api-versions console.openshift.io/v1/ConsolePlugin --helm-api-versions console.openshift.io/v1/ConsoleQuickStart --helm-api-versions console.openshift.io/v1/ConsoleSample --helm-api-versions console.openshift.io/v1/ConsoleYAMLSample --helm-api-versions console.openshift.io/v1alpha1 --helm-api-versions console.openshift.io/v1alpha1/ConsolePlugin --helm-api-versions controlplane.operator.openshift.io/v1alpha1 --helm-api-versions controlplane.operator.openshift.io/v1alpha1/PodNetworkConnectivityCheck --helm-api-versions coordination.k8s.io/v1 --helm-api-versions coordination.k8s.io/v1/Lease --helm-api-versions discovery.k8s.io/v1 --helm-api-versions discovery.k8s.io/v1/EndpointSlice --helm-api-versions eventing.knative.dev/v1 --helm-api-versions eventing.knative.dev/v1/Broker --helm-api-versions eventing.knative.dev/v1/Trigger --helm-api-versions eventing.knative.dev/v1alpha1 --helm-api-versions eventing.knative.dev/v1alpha1/EventPolicy --helm-api-versions eventing.knative.dev/v1beta1 --helm-api-versions eventing.knative.dev/v1beta1/EventType --helm-api-versions eventing.knative.dev/v1beta2 --helm-api-versions eventing.knative.dev/v1beta2/EventType --helm-api-versions eventing.knative.dev/v1beta3 --helm-api-versions eventing.knative.dev/v1beta3/EventType --helm-api-versions events.k8s.io/v1 --helm-api-versions events.k8s.io/v1/Event 
--helm-api-versions flowcontrol.apiserver.k8s.io/v1 --helm-api-versions flowcontrol.apiserver.k8s.io/v1/FlowSchema --helm-api-versions flowcontrol.apiserver.k8s.io/v1/PriorityLevelConfiguration --helm-api-versions flowcontrol.apiserver.k8s.io/v1beta3 --helm-api-versions flowcontrol.apiserver.k8s.io/v1beta3/FlowSchema --helm-api-versions flowcontrol.apiserver.k8s.io/v1beta3/PriorityLevelConfiguration --helm-api-versions flows.knative.dev/v1 --helm-api-versions flows.knative.dev/v1/Parallel --helm-api-versions flows.knative.dev/v1/Sequence --helm-api-versions helm.openshift.io/v1beta1 --helm-api-versions helm.openshift.io/v1beta1/HelmChartRepository --helm-api-versions helm.openshift.io/v1beta1/ProjectHelmChartRepository --helm-api-versions image.openshift.io/v1 --helm-api-versions image.openshift.io/v1/Image --helm-api-versions image.openshift.io/v1/ImageStream --helm-api-versions imageregistry.operator.openshift.io/v1 --helm-api-versions imageregistry.operator.openshift.io/v1/Config --helm-api-versions imageregistry.operator.openshift.io/v1/ImagePruner --helm-api-versions infrastructure.cluster.x-k8s.io/v1alpha5 --helm-api-versions infrastructure.cluster.x-k8s.io/v1alpha5/Metal3Remediation --helm-api-versions infrastructure.cluster.x-k8s.io/v1alpha5/Metal3RemediationTemplate --helm-api-versions infrastructure.cluster.x-k8s.io/v1beta1 --helm-api-versions infrastructure.cluster.x-k8s.io/v1beta1/Metal3Remediation --helm-api-versions infrastructure.cluster.x-k8s.io/v1beta1/Metal3RemediationTemplate --helm-api-versions ingress.operator.openshift.io/v1 --helm-api-versions ingress.operator.openshift.io/v1/DNSRecord --helm-api-versions ipam.cluster.x-k8s.io/v1alpha1 --helm-api-versions ipam.cluster.x-k8s.io/v1alpha1/IPAddress --helm-api-versions ipam.cluster.x-k8s.io/v1alpha1/IPAddressClaim --helm-api-versions ipam.cluster.x-k8s.io/v1beta1 --helm-api-versions ipam.cluster.x-k8s.io/v1beta1/IPAddress --helm-api-versions ipam.cluster.x-k8s.io/v1beta1/IPAddressClaim 
--helm-api-versions k8s.cni.cncf.io/v1 --helm-api-versions k8s.cni.cncf.io/v1/NetworkAttachmentDefinition --helm-api-versions k8s.ovn.org/v1 --helm-api-versions k8s.ovn.org/v1/AdminPolicyBasedExternalRoute --helm-api-versions k8s.ovn.org/v1/EgressFirewall --helm-api-versions k8s.ovn.org/v1/EgressIP --helm-api-versions k8s.ovn.org/v1/EgressQoS --helm-api-versions k8s.ovn.org/v1/EgressService --helm-api-versions kubearchive.org/v1 --helm-api-versions kubearchive.org/v1/ClusterKubeArchiveConfig --helm-api-versions kubearchive.org/v1/ClusterVacuumConfig --helm-api-versions kubearchive.org/v1/KubeArchiveConfig --helm-api-versions kubearchive.org/v1/NamespaceVacuumConfig --helm-api-versions kubearchive.org/v1/SinkFilter --helm-api-versions machine.openshift.io/v1 --helm-api-versions machine.openshift.io/v1/ControlPlaneMachineSet --helm-api-versions machine.openshift.io/v1beta1 --helm-api-versions machine.openshift.io/v1beta1/Machine --helm-api-versions machine.openshift.io/v1beta1/MachineHealthCheck --helm-api-versions machine.openshift.io/v1beta1/MachineSet --helm-api-versions machineconfiguration.openshift.io/v1 --helm-api-versions machineconfiguration.openshift.io/v1/ContainerRuntimeConfig --helm-api-versions machineconfiguration.openshift.io/v1/ControllerConfig --helm-api-versions machineconfiguration.openshift.io/v1/KubeletConfig --helm-api-versions machineconfiguration.openshift.io/v1/MachineConfig --helm-api-versions machineconfiguration.openshift.io/v1/MachineConfigPool --helm-api-versions messaging.knative.dev/v1 --helm-api-versions messaging.knative.dev/v1/Channel --helm-api-versions messaging.knative.dev/v1/InMemoryChannel --helm-api-versions messaging.knative.dev/v1/Subscription --helm-api-versions metal3.io/v1alpha1 --helm-api-versions metal3.io/v1alpha1/BMCEventSubscription --helm-api-versions metal3.io/v1alpha1/BareMetalHost --helm-api-versions metal3.io/v1alpha1/DataImage --helm-api-versions metal3.io/v1alpha1/FirmwareSchema --helm-api-versions 
metal3.io/v1alpha1/HardwareData --helm-api-versions metal3.io/v1alpha1/HostFirmwareComponents --helm-api-versions metal3.io/v1alpha1/HostFirmwareSettings --helm-api-versions metal3.io/v1alpha1/PreprovisioningImage --helm-api-versions metal3.io/v1alpha1/Provisioning --helm-api-versions migration.k8s.io/v1alpha1 --helm-api-versions migration.k8s.io/v1alpha1/StorageState --helm-api-versions migration.k8s.io/v1alpha1/StorageVersionMigration --helm-api-versions monitoring.coreos.com/v1 --helm-api-versions monitoring.coreos.com/v1/Alertmanager --helm-api-versions monitoring.coreos.com/v1/PodMonitor --helm-api-versions monitoring.coreos.com/v1/Probe --helm-api-versions monitoring.coreos.com/v1/Prometheus --helm-api-versions monitoring.coreos.com/v1/PrometheusRule --helm-api-versions monitoring.coreos.com/v1/ServiceMonitor --helm-api-versions monitoring.coreos.com/v1/ThanosRuler --helm-api-versions monitoring.coreos.com/v1alpha1 --helm-api-versions monitoring.coreos.com/v1alpha1/AlertmanagerConfig --helm-api-versions monitoring.coreos.com/v1beta1 --helm-api-versions monitoring.coreos.com/v1beta1/AlertmanagerConfig --helm-api-versions monitoring.openshift.io/v1 --helm-api-versions monitoring.openshift.io/v1/AlertRelabelConfig --helm-api-versions monitoring.openshift.io/v1/AlertingRule --helm-api-versions network.operator.openshift.io/v1 --helm-api-versions network.operator.openshift.io/v1/EgressRouter --helm-api-versions network.operator.openshift.io/v1/OperatorPKI --helm-api-versions networking.k8s.io/v1 --helm-api-versions networking.k8s.io/v1/Ingress --helm-api-versions networking.k8s.io/v1/IngressClass --helm-api-versions networking.k8s.io/v1/NetworkPolicy --helm-api-versions node.k8s.io/v1 --helm-api-versions node.k8s.io/v1/RuntimeClass --helm-api-versions oauth.openshift.io/v1 --helm-api-versions oauth.openshift.io/v1/OAuthAccessToken --helm-api-versions oauth.openshift.io/v1/OAuthAuthorizeToken --helm-api-versions oauth.openshift.io/v1/OAuthClient --helm-api-versions 
oauth.openshift.io/v1/OAuthClientAuthorization --helm-api-versions oauth.openshift.io/v1/UserOAuthAccessToken --helm-api-versions operator.openshift.io/v1 --helm-api-versions operator.openshift.io/v1/Authentication --helm-api-versions operator.openshift.io/v1/CSISnapshotController --helm-api-versions operator.openshift.io/v1/CloudCredential --helm-api-versions operator.openshift.io/v1/ClusterCSIDriver --helm-api-versions operator.openshift.io/v1/Config --helm-api-versions operator.openshift.io/v1/Console --helm-api-versions operator.openshift.io/v1/DNS --helm-api-versions operator.openshift.io/v1/Etcd --helm-api-versions operator.openshift.io/v1/IngressController --helm-api-versions operator.openshift.io/v1/InsightsOperator --helm-api-versions operator.openshift.io/v1/KubeAPIServer --helm-api-versions operator.openshift.io/v1/KubeControllerManager --helm-api-versions operator.openshift.io/v1/KubeScheduler --helm-api-versions operator.openshift.io/v1/KubeStorageVersionMigrator --helm-api-versions operator.openshift.io/v1/MachineConfiguration --helm-api-versions operator.openshift.io/v1/Network --helm-api-versions operator.openshift.io/v1/OpenShiftAPIServer --helm-api-versions operator.openshift.io/v1/OpenShiftControllerManager --helm-api-versions operator.openshift.io/v1/ServiceCA --helm-api-versions operator.openshift.io/v1/Storage --helm-api-versions operator.openshift.io/v1alpha1 --helm-api-versions operator.openshift.io/v1alpha1/ImageContentSourcePolicy --helm-api-versions operators.coreos.com/v1 --helm-api-versions operators.coreos.com/v1/OLMConfig --helm-api-versions operators.coreos.com/v1/Operator --helm-api-versions operators.coreos.com/v1/OperatorCondition --helm-api-versions operators.coreos.com/v1/OperatorGroup --helm-api-versions operators.coreos.com/v1alpha1 --helm-api-versions operators.coreos.com/v1alpha1/CatalogSource --helm-api-versions operators.coreos.com/v1alpha1/ClusterServiceVersion --helm-api-versions operators.coreos.com/v1alpha1/InstallPlan 
--helm-api-versions operators.coreos.com/v1alpha1/Subscription --helm-api-versions operators.coreos.com/v1alpha2 --helm-api-versions operators.coreos.com/v1alpha2/OperatorGroup --helm-api-versions operators.coreos.com/v2 --helm-api-versions operators.coreos.com/v2/OperatorCondition --helm-api-versions performance.openshift.io/v1 --helm-api-versions performance.openshift.io/v1/PerformanceProfile --helm-api-versions performance.openshift.io/v1alpha1 --helm-api-versions performance.openshift.io/v1alpha1/PerformanceProfile --helm-api-versions performance.openshift.io/v2 --helm-api-versions performance.openshift.io/v2/PerformanceProfile --helm-api-versions pipelines.openshift.io/v1alpha1 --helm-api-versions pipelines.openshift.io/v1alpha1/GitopsService --helm-api-versions policy.networking.k8s.io/v1alpha1 --helm-api-versions policy.networking.k8s.io/v1alpha1/AdminNetworkPolicy --helm-api-versions policy.networking.k8s.io/v1alpha1/BaselineAdminNetworkPolicy --helm-api-versions policy/v1 --helm-api-versions policy/v1/PodDisruptionBudget --helm-api-versions projctl.konflux.dev/v1beta1 --helm-api-versions projctl.konflux.dev/v1beta1/Project --helm-api-versions projctl.konflux.dev/v1beta1/ProjectDevelopmentStream --helm-api-versions projctl.konflux.dev/v1beta1/ProjectDevelopmentStreamTemplate --helm-api-versions project.openshift.io/v1 --helm-api-versions project.openshift.io/v1/Project --helm-api-versions quota.openshift.io/v1 --helm-api-versions quota.openshift.io/v1/ClusterResourceQuota --helm-api-versions rbac.authorization.k8s.io/v1 --helm-api-versions rbac.authorization.k8s.io/v1/ClusterRole --helm-api-versions rbac.authorization.k8s.io/v1/ClusterRoleBinding --helm-api-versions rbac.authorization.k8s.io/v1/Role --helm-api-versions rbac.authorization.k8s.io/v1/RoleBinding --helm-api-versions route.openshift.io/v1 --helm-api-versions route.openshift.io/v1/Route --helm-api-versions samples.operator.openshift.io/v1 --helm-api-versions 
samples.operator.openshift.io/v1/Config --helm-api-versions scheduling.k8s.io/v1 --helm-api-versions scheduling.k8s.io/v1/PriorityClass --helm-api-versions security.internal.openshift.io/v1 --helm-api-versions security.internal.openshift.io/v1/RangeAllocation --helm-api-versions security.openshift.io/v1 --helm-api-versions security.openshift.io/v1/RangeAllocation --helm-api-versions security.openshift.io/v1/SecurityContextConstraints --helm-api-versions sinks.knative.dev/v1alpha1 --helm-api-versions sinks.knative.dev/v1alpha1/JobSink --helm-api-versions snapshot.storage.k8s.io/v1 --helm-api-versions snapshot.storage.k8s.io/v1/VolumeSnapshot --helm-api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotClass --helm-api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotContent --helm-api-versions sources.knative.dev/v1beta2 --helm-api-versions sources.knative.dev/v1beta2/PingSource --helm-api-versions storage.k8s.io/v1 --helm-api-versions storage.k8s.io/v1/CSIDriver --helm-api-versions storage.k8s.io/v1/CSINode --helm-api-versions storage.k8s.io/v1/CSIStorageCapacity --helm-api-versions storage.k8s.io/v1/StorageClass --helm-api-versions storage.k8s.io/v1/VolumeAttachment --helm-api-versions template.openshift.io/v1 --helm-api-versions template.openshift.io/v1/BrokerTemplateInstance --helm-api-versions template.openshift.io/v1/Template --helm-api-versions template.openshift.io/v1/TemplateInstance --helm-api-versions tuned.openshift.io/v1 --helm-api-versions tuned.openshift.io/v1/Profile --helm-api-versions tuned.openshift.io/v1/Tuned --helm-api-versions user.openshift.io/v1 --helm-api-versions user.openshift.io/v1/Group --helm-api-versions user.openshift.io/v1/Identity --helm-api-versions user.openshift.io/v1/User --helm-api-versions v1 --helm-api-versions v1/ConfigMap --helm-api-versions v1/Endpoints --helm-api-versions v1/Event --helm-api-versions v1/LimitRange --helm-api-versions v1/Namespace --helm-api-versions v1/Node --helm-api-versions v1/PersistentVolume 
--helm-api-versions v1/PersistentVolumeClaim --helm-api-versions v1/Pod --helm-api-versions v1/PodTemplate --helm-api-versions v1/ReplicationController --helm-api-versions v1/ResourceQuota --helm-api-versions v1/Secret --helm-api-versions v1/Service --helm-api-versions v1/ServiceAccount --helm-api-versions whereabouts.cni.cncf.io/v1alpha1 --helm-api-versions whereabouts.cni.cncf.io/v1alpha1/IPPool --helm-api-versions whereabouts.cni.cncf.io/v1alpha1/NodeSlicePool --helm-api-versions whereabouts.cni.cncf.io/v1alpha1/OverlappingRangeIPReservation' failed exit status 1: Error: replace operation does not apply: doc is missing path: /spec/persistentVolumeReclaimPolicy/whenScaled: missing value","type":"ComparisonError"}]
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/run.log line 663
Error: error when bootstrapping cluster: reached maximum number of attempts (2). error: exit status 1
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/run.log line 665
make: *** [Makefile:25: ci/test/e2e] Error 1
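The quoted kustomize failure ends in `Error: replace operation does not apply: doc is missing path: /spec/persistentVolumeReclaimPolicy/whenScaled: missing value` — a JSON6902 `replace` op is only valid when its target path already exists. A hedged sketch of a patch that avoids this (target and field names are illustrative, not taken from the repo; note that on a StatefulSet the field carrying `whenScaled` is spelled `persistentVolumeClaimRetentionPolicy`, and an `add` op creates the path when it is absent):

```yaml
# kustomization.yaml excerpt — illustrative sketch only
patches:
  - target:
      group: apps
      version: v1
      kind: StatefulSet
      name: example-ingester   # hypothetical name, not from the logs
    patch: |-
      # "add" creates the path if it does not exist; "replace" fails instead
      - op: add
        path: /spec/persistentVolumeClaimRetentionPolicy
        value:
          whenDeleted: Retain
          whenScaled: Retain
```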

Analysis powered by prow-failure-analysis | Build: 2018992922233409536

@olegbet olegbet changed the title WIP: ingester and querier pods are getting OOMKilled on stone-prod-p02 ingester and querier pods are getting OOMKilled on stone-prod-p02 Feb 6, 2026
Signed-off-by: obetsun <[email protected]>
Assisted-by: Claude

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
Signed-off-by: obetsun <[email protected]>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
Signed-off-by: obetsun <[email protected]>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
…tefulSet

Signed-off-by: obetsun <[email protected]>

rh-pre-commit.version: 2.3.2
rh-pre-commit.check-secrets: ENABLED
@olegbet olegbet force-pushed the stone-prod-p02_ingester_querier_oomkilled branch from f19f83b to 57bafde Compare February 11, 2026 19:42
@konflux-ci-qe-bot

🤖 Pipeline Failure Analysis

Category: Infrastructure

The end-to-end tests failed because Argo CD applications could not synchronize due to a persistent volume reclaim policy configuration error, preventing cluster bootstrapping.

📋 Technical Details

Immediate Cause

The Argo CD applications policies-in-cluster-local and vector-kubearchive-log-collector-in-cluster-local failed to synchronize. The former hit a ComparisonError while calculating a server-side diff; the latter failed during manifest generation, where a replace patch targets a persistentVolumeReclaimPolicy path that is missing from the desired state.

Contributing Factors

The synchronization failures fed directly into the bootstrapping process for the end-to-end tests. The error message "error calculating server side diff: serverSideDiff error: error running server side apply in dryrun mode for resource Namespace/konflux-policies: Internal error occurred: failed calling webhook \"namespace.operator.tekton.dev\": failed to call webhook: Post \"https://tekton-operator-proxy-webhook.openshift-pipelines.svc:443/namespace-validation?timeout=10s\\\": no endpoints available for service \"tekton-operator-proxy-webhook\"" points to a webhook availability problem: the tekton-operator-proxy-webhook service in the openshift-pipelines namespace had no ready endpoints, so the server-side dry-run apply for the konflux-policies namespace could not complete, which in turn blocked the sync of the policies application.

Impact

The inability of Argo CD to synchronize critical applications prevented the cluster from being properly provisioned and configured for the end-to-end tests. As a result, the cluster bootstrapping process failed after reaching the maximum number of attempts, leading to the overall failure of the appstudio-e2e-tests/redhat-appstudio-e2e step and consequently the Prow job.

🔍 Evidence

appstudio-e2e-tests/redhat-appstudio-e2e

Category: infrastructure
Root Cause: The Argo CD applications failed to synchronize due to a configuration error related to persistent volume reclaim policy, preventing the cluster from being properly bootstrapped for the end-to-end tests. This indicates an underlying infrastructure or configuration issue within the cluster setup or the Argo CD deployment itself.

Logs:

artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/log.txt line 399
policies-in-cluster-local failed with:
[{"lastTransitionTime":"2026-02-11T19:55:37Z","message":"Failed to compare desired state to live state: failed to calculate diff: error calculating server side diff: serverSideDiff error: error running server side apply in dryrun mode for resource Namespace/konflux-policies: Internal error occurred: failed calling webhook \"namespace.operator.tekton.dev\": failed to call webhook: Post \"https://tekton-operator-proxy-webhook.openshift-pipelines.svc:443/namespace-validation?timeout=10s\": no endpoints available for service \"tekton-operator-proxy-webhook\"","type":"ComparisonError"}]
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/log.txt line 407
vector-kubearchive-log-collector-in-cluster-local failed with:
[{"lastTransitionTime":"2026-02-11T19:55:29Z","message":"Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = 'kustomize build \u003cpath to cached source\u003e/components/vector-kubearchive-log-collector/development --enable-helm --helm-kube-version 1.30 [several hundred --helm-api-versions flags elided for brevity] ' failed exit status 1: Error: replace operation does not apply: doc is missing path: /spec/persistentVolumeReclaimPolicy/whenScaled: missing value","type":"ComparisonError"}]
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/log.txt line 474
Error: error when bootstrapping cluster: reached maximum number of attempts (2). error: exit status 1
artifacts/appstudio-e2e-tests/redhat-appstudio-e2e/log.txt line 476
make: *** [Makefile:25: ci/test/e2e] Error 1
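The "replace operation does not apply: doc is missing path" failure in the evidence above is characteristic of RFC 6902 JSON-patch semantics, which kustomize JSON6902 patches follow: a replace operation requires the target path to already exist in the document, whereas an add operation creates it. A minimal pure-Python sketch (the document structure and values here are hypothetical; only the path comes from the log):

```python
def apply_replace(doc, path, value):
    """Apply an RFC 6902-style 'replace' op on nested dicts.

    Mirrors the rule kustomize enforces: 'replace' fails when the
    target path does not already exist in the document.
    """
    keys = [k for k in path.split("/") if k]
    node = doc
    for k in keys[:-1]:
        if k not in node:
            raise KeyError(f"doc is missing path: {path}")
        node = node[k]
    if keys[-1] not in node:
        raise KeyError(f"doc is missing path: {path}")
    node[keys[-1]] = value
    return doc


# A spec without the policy block, as in the failing manifest:
spec = {"spec": {}}
try:
    apply_replace(spec, "/spec/persistentVolumeReclaimPolicy/whenScaled", "Retain")
except KeyError as exc:
    print(exc)  # 'replace' cannot create missing paths; 'add' would be needed
```

This is why the fix for such errors is usually either switching the patch op from replace to add, or ensuring the base manifest already defines the field being replaced.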

Analysis powered by prow-failure-analysis | Build: 2021671327533895680

@openshift-ci

openshift-ci bot commented Feb 11, 2026

@olegbet: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/appstudio-e2e-tests
Commit: 57bafde
Required: true
Rerun command: /test appstudio-e2e-tests

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
