
Conversation

breskeby
Contributor

We see plenty of errors caused by a cluster not starting up. This is most likely a combination
of parallelism, memory pressure, and too short a timeout.

This aggressively increases the timeout from 40s to 120s; we will
monitor whether overall flakiness in CI is reduced.

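For readers unfamiliar with how such a startup timeout is typically enforced, here is a minimal sketch of a Duration constant guarding a polling wait loop. The class, field, and method names are hypothetical and are not the actual Elasticsearch test-fixture code changed by this PR:

    import java.time.Duration;
    import java.util.function.BooleanSupplier;

    // Hypothetical sketch only: names below are assumptions, not the
    // actual code touched by elastic#135672.
    public final class ClusterStartupTimeout {

        // Previously 40 seconds; raised to 120 seconds because CI parallelism
        // and memory pressure can make cluster startup legitimately slow.
        static final Duration CLUSTER_UP_TIMEOUT = Duration.ofSeconds(120);

        // Polls an "is the cluster up?" probe until it succeeds or the
        // deadline passes, failing with a descriptive error on timeout.
        static void waitForClusterUp(BooleanSupplier isUp) throws InterruptedException {
            long deadline = System.nanoTime() + CLUSTER_UP_TIMEOUT.toNanos();
            while (isUp.getAsBoolean() == false) {
                if (System.nanoTime() > deadline) {
                    throw new IllegalStateException("Cluster did not start within " + CLUSTER_UP_TIMEOUT);
                }
                Thread.sleep(500);
            }
        }

        private ClusterStartupTimeout() {}
    }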
@breskeby breskeby requested a review from a team as a code owner September 30, 2025 08:11
@breskeby breskeby added >non-issue :Delivery/Build Build or test infrastructure Team:Delivery Meta label for Delivery team auto-backport Automatically create backport pull requests when merged v9.2.0 v8.19.5 v9.1.5 v8.18.8 v9.0.8 labels Sep 30, 2025
@breskeby breskeby self-assigned this Sep 30, 2025
@elasticsearchmachine
Collaborator

Pinging @elastic/es-delivery (Team:Delivery)

@breskeby breskeby merged commit d5fc11a into elastic:main Sep 30, 2025
35 checks passed
breskeby added a commit to breskeby/elasticsearch that referenced this pull request Sep 30, 2025
…ut (elastic#135672)

breskeby added a commit to breskeby/elasticsearch that referenced this pull request Sep 30, 2025
…ut (elastic#135672)

breskeby added a commit to breskeby/elasticsearch that referenced this pull request Sep 30, 2025
…ut (elastic#135672)

breskeby added a commit to breskeby/elasticsearch that referenced this pull request Sep 30, 2025
…ut (elastic#135672)

@elasticsearchmachine
Collaborator

💚 Backport successful

Status   Branch
💚       8.19
💚       9.1
💚       8.18
💚       9.0

elasticsearchmachine pushed a commit that referenced this pull request Sep 30, 2025
…ut (#135672) (#135682)

elasticsearchmachine pushed a commit that referenced this pull request Sep 30, 2025
…ut (#135672) (#135684)
