Merged
3 changes: 3 additions & 0 deletions .github/workflows/beam_PreCommit_Python.yml
@@ -117,6 +117,8 @@ jobs:
TC_MAX_TRIES: "15"
TC_SLEEP_TIME: "5"
# Additional gRPC stability for flaky environment
GRPC_ARG_KEEPALIVE_TIME_MS: "60000"
Contributor
Wondering if there is some race condition for tests run in parallel cc: @tvalentyn. Nevertheless we can add these environment variables for the moment.

Contributor

@tvalentyn tvalentyn Mar 31, 2026
if we can repro this stuckness locally, or ssh into the GHA worker that is stuck, then we could spy on the Python process with tools like pystack and examine stack traces.

Contributor
that could give some clues.

Contributor Author
so should I run with SSH access on a stuck self-hosted runner and collect pystack traces there, or add temporary CI instrumentation (faulthandler + periodic pystack dumps as artifacts) so the next stuck run captures stack traces automatically? @tvalentyn
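The faulthandler half of that instrumentation could be sketched like this (a hypothetical snippet, not existing Beam CI code; the 300 s interval and `thread-dumps.log` path are placeholders — the pystack part would run as a separate CLI against the pytest PID):

```python
import faulthandler

# Write dumps to a log the CI job can upload as an artifact.
dump_log = open("thread-dumps.log", "w")

# One-off dump of every thread's current stack.
faulthandler.dump_traceback(file=dump_log, all_threads=True)

# Re-dump every 300 s until cancelled; if the job hangs, the last
# dump in the artifact shows where each thread was blocked.
faulthandler.dump_traceback_later(300, repeat=True, file=dump_log)

# ... run the test session ...
faulthandler.cancel_dump_traceback_later()
dump_log.close()
```

`pytest-timeout` already triggers a one-off faulthandler dump on expiry; the periodic variant additionally captures the lead-up to the hang.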

Contributor
> so should I run with SSH access on a stuck self-hosted runner

is it an option? is it easy?
how often does this issue reproduce?

overall, i'd pick whatever option is more straightforward.

Contributor

@tvalentyn tvalentyn Apr 1, 2026
Good findings!
re: 1 -- is there something specific to the YAML suite? do you have some suggestions (e.g. skip copying tmp?)
re: 2 -- Could you try setting this environment variable and seeing if it helps:

GRPC_ENABLE_FORK_SUPPORT: '0'
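For reference, the variable would sit alongside the other gRPC settings in the workflow's `env` block (a sketch against `beam_PreCommit_Python.yml`; the neighboring key is taken from the diff above):

```yaml
env:
  GRPC_ARG_KEEPALIVE_TIME_MS: "60000"
  # '0' disables gRPC's fork handlers, which have been implicated
  # in post-fork hangs when a forked worker inherits a live channel.
  GRPC_ENABLE_FORK_SUPPORT: "0"
```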

Contributor Author
re: 1 -- The issue ties to YAML external providers: they clone a venv in yaml_provider via clonevirtualenv, so the source is either a fresh mini venv or, in .dev, the whole tox venv (path from base_python). In CI that tree is live, so paths like tmp/ can disappear mid-copy, which fits what we saw. So I suggest ignoring tmp/, __pycache__/, .pytest_cache/, and pip build temps (or replacing clonevirtualenv with a copy that supports ignore / copies only the needed dirs), and for .dev/CI, preferring _create_venv_from_scratch (or a template venv) instead of cloning the full tox env -- slower, but stabler. Optionally, a unit test with a fake venv + volatile tmp/ to guard against regressions.
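The ignore-based copy could look like this (a sketch only: `shutil.copytree` stands in for clonevirtualenv, and the pattern list mirrors the dirs named above):

```python
import shutil

# Entries that can mutate or vanish while the source venv is in use.
VOLATILE = shutil.ignore_patterns(
    "tmp", "__pycache__", ".pytest_cache", "pip-*", "build"
)

def copy_venv(src: str, dst: str) -> None:
    # copytree skips anything matched by VOLATILE, so a tmp/ dir
    # deleted mid-copy can no longer break the clone.
    shutil.copytree(src, dst, ignore=VOLATILE, symlinks=True)
```

Note that clonevirtualenv also rewrites interpreter paths inside the clone, so a real replacement would still need that fix-up step.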

re: 2 -- subprocess_server waits on gRPC channel readiness for expansion_service_main until pytest-timeout fires. faulthandler sometimes showed xdist/execnet, but turning xdist off for one Python version didn't fix the other legs, so parallelism isn't the whole story. I suggest that on child exit or a long wait we surface the exit code + stdout/stderr (and maybe fail faster with a clear error).
For ML YAML, preinstall/pin heavy deps in tox so slow pip/import/model startup doesn't look like a hang.
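The exit-code surfacing could be sketched as follows (a hypothetical helper, not the actual subprocess_server code; `ready` stands in for whatever channel-ready check the caller uses):

```python
import subprocess
import time

def wait_with_diagnostics(cmd, ready, timeout=60.0, poll=0.5):
    """Start cmd and poll `ready()` (e.g. a gRPC channel-ready probe).

    Instead of blocking on readiness alone, fail fast with the child's
    exit code and captured output if it dies before becoming ready.
    """
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
    )
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if proc.poll() is not None:  # child died before readiness
            out, err = proc.communicate()
            raise RuntimeError(
                f"service exited with {proc.returncode}; "
                f"stdout={out!r} stderr={err!r}"
            )
        if ready():
            return proc
        time.sleep(poll)
    proc.kill()
    raise TimeoutError(f"service not ready after {timeout}s")
```

The payoff is in the error message: a crashed expansion service then reports its real cause instead of surfacing later as a generic pytest-timeout hang.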

Contributor Author
I will add the same env var to the Python PreCommit and see if it changes the expansion / xdist behavior; will update you on whether it helps.

Contributor Author
@tvalentyn I added GRPC_ENABLE_FORK_SUPPORT: "0" in Python PreCommit and kept the YAML-side stabilization in yaml_provider, then reran the workflow and it passed: https://github.com/apache/beam/actions/runs/23892113364

Contributor

@tvalentyn tvalentyn Apr 2, 2026
cc: @sergiitk FYI - we might be seeing stuckness issues in other, pure python test suites, where there are processes and subprocesses communicating over GRPC.

GRPC_ARG_KEEPALIVE_TIMEOUT_MS: "60000"
GRPC_ARG_MAX_CONNECTION_IDLE_MS: "60000"
GRPC_ARG_HTTP2_BDP_PROBE: "1"
GRPC_ARG_SO_REUSEPORT: "1"
@@ -125,6 +127,7 @@ jobs:
# Additional gRPC settings
GRPC_ARG_MAX_RECONNECT_BACKOFF_MS: "120000"
GRPC_ARG_INITIAL_RECONNECT_BACKOFF_MS: "2000"
BEAM_RUNNER_BUNDLE_TIMEOUT_MS: "600000"
uses: ./.github/actions/gradle-command-self-hosted-action
with:
gradle-command: :sdks:python:test-suites:tox:py${{steps.set_py_ver_clean.outputs.py_ver_clean}}:preCommitPy${{steps.set_py_ver_clean.outputs.py_ver_clean}}