
Conversation

@deependujha (Collaborator) commented Nov 5, 2025

What does this PR do?

Reverts #21309
closes #21313 #21311

Thanks! Related PR: #21341

  • This revert removes the port management logic and switches to a simpler approach for addressing port-related issues, as this is specific to standalone test environments.
  • The initial step for this change has been introduced in Lightning-AI/utilities — reference: feat: specify standalone port utilities#447
Before submitting
  • Was this discussed/agreed via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)

PR review

Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

📚 Documentation preview 📚: https://pytorch-lightning--21335.org.readthedocs.build/en/21335/

@github-actions github-actions bot added the fabric lightning.fabric.Fabric label Nov 5, 2025
@codecov codecov bot commented Nov 5, 2025

Codecov Report

❌ Patch coverage is 77.77778% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 87%. Comparing base (19912d0) to head (4927768).
⚠️ Report is 1 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #21335   +/-   ##
=======================================
- Coverage      87%      87%   -0%     
=======================================
  Files         270      269    -1     
  Lines       23799    23708   -91     
=======================================
- Hits        20629    20536   -93     
- Misses       3170     3172    +2     

@bhimrazy bhimrazy mentioned this pull request Nov 5, 2025
7 tasks
@github-actions github-actions bot added the pl Generic label for PyTorch Lightning package label Nov 6, 2025
@deependujha deependujha changed the title from "[wip]: port issue" to "fix: port already in use & nccl errors in gpu tests" Nov 6, 2025
@deependujha deependujha merged commit c913649 into Lightning-AI:master Nov 6, 2025
118 of 119 checks passed
@deependujha deependujha deleted the fix/port-issue branch November 6, 2025 08:15
@littlebullGit (Contributor)

If I understand the fix correctly, we now rely on the test creator to explicitly set the STANDALONE_PORT env variable to avoid port conflicts? How does that prevent a conflict if multiple tests with the same port range get launched at the same time? Maybe I missed something? @deependujha

@deependujha (Collaborator, Author) commented Nov 7, 2025

Hi @littlebullGit,

We aren't using pytest-xdist to launch tests in parallel; we use a standalone shell script.

The standalone shell script doesn't launch all tests at once. It launches them one by one, stores each PID, keeps checking whether they passed, and then launches the next batch of tests:

pytest tests/.../test_file.py::test_name

Since the shell script itself is a single process, it can reliably keep a list of used ports and assign a new one for each test launch.

Hence, we can reliably say that multiple tests won't be launched on the same port.
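The single-process allocation scheme described above can be sketched roughly as follows. This is a minimal illustration, not the actual Lightning launcher script; the names `allocate_port` and `used_ports` are hypothetical:

```python
import socket

# Ports already handed out by this (single) launcher process.
# Because only one process allocates ports, a plain list suffices:
# no file locks or cross-process coordination are needed.
used_ports: list[int] = []


def allocate_port(start: int = 29500, end: int = 32000) -> int:
    """Return a free TCP port that this launcher has not handed out yet."""
    for port in range(start, end):
        if port in used_ports:
            continue
        # Double-check the OS actually lets us bind to it right now.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
            except OSError:
                continue  # port taken by some other process
        used_ports.append(port)
        return port
    raise RuntimeError("no free port in the configured range")


# Each standalone test would then be launched along the lines of:
#   STANDALONE_PORT=<port> pytest tests/.../test_file.py::test_name
```

Because the same launcher process runs every test sequentially, two tests can never be handed the same port from `used_ports`.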

@littlebullGit (Contributor)

> Hi @littlebullGit,
>
> We aren't using pytest-xdist to launch tests in parallel; we use a standalone shell script.
>
> The standalone shell script doesn't launch all tests at once. It launches them one by one, stores each PID, keeps checking whether they passed, and then launches the next batch of tests:
>
> pytest tests/.../test_file.py::test_name
>
> Since the shell script itself is a single process, it can reliably keep a list of used ports and assign a new one for each test launch.
>
> Hence, we can reliably say that multiple tests won't be launched on the same port.

Now it makes sense. Since the shell script is guaranteed to be the only process on the server that issues ports, it effectively does the same thing as the file-lock approach I used for the port manager.
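For comparison, a cross-process file-lock scheme along the lines mentioned here could look like the sketch below. This is an assumption-laden illustration, not the reverted port-manager code; the paths and the name `claim_port` are hypothetical, and `fcntl.flock` makes it Unix-only:

```python
import fcntl
import socket

LOCK_FILE = "/tmp/port_manager.lock"   # hypothetical lock-file path
STATE_FILE = "/tmp/ports_in_use.txt"   # hypothetical state-file path


def claim_port(start: int = 29500, end: int = 32000) -> int:
    """Atomically claim a free port, safe across multiple launcher processes."""
    with open(LOCK_FILE, "a+") as lock:
        # An exclusive advisory lock serializes allocation across processes,
        # which is what a single launcher process gets for free.
        fcntl.flock(lock, fcntl.LOCK_EX)
        try:
            try:
                with open(STATE_FILE) as f:
                    used = {int(line) for line in f if line.strip()}
            except FileNotFoundError:
                used = set()
            for port in range(start, end):
                if port in used:
                    continue
                # Verify the OS will let us bind before recording the claim.
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    try:
                        s.bind(("127.0.0.1", port))
                    except OSError:
                        continue
                with open(STATE_FILE, "a") as f:
                    f.write(f"{port}\n")
                return port
            raise RuntimeError("no free port in the configured range")
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```

The extra lock and state file are exactly the machinery the single-launcher design makes unnecessary, which is the trade-off discussed in this thread.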

@deependujha (Collaborator, Author) commented Nov 7, 2025

Yeah, and it was really cool.

The only issue was that this wasn't a user-facing bug but a simple test-launching issue, and having the shell script specify the port is a decent fix, imo.

Thanks again for your time and great effort.
