
Conversation

@ritoban23 (Contributor) commented Nov 13, 2025

What does this PR do?

Fixes a documentation rendering bug in trainer.fit where :rtype: None was incorrectly displayed.

Fixes #21356
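For context, here is a minimal sketch of the docstring shape involved. This is hypothetical and simplified, not the actual Lightning source: the function name matches `trainer.fit`, but the parameters and text are illustrative. The underlying mechanism (as I understand it) is that a `-> None` annotation plus Sphinx autodoc produces an `:rtype: None` field, and prose trailing the last docstring section can end up rendered next to it.

```python
# Hypothetical, simplified sketch of the docstring shape behind the bug.
# Not the actual Lightning source; parameter text is illustrative only.

def fit(model, train_dataloaders=None) -> None:
    """Runs the full optimization routine.

    Args:
        model: Model to fit.
        train_dataloaders: An iterable or collection of iterables of
            training samples.

    For more information about multiple dataloaders, see this
    :ref:`section <multiple-dataloaders>`.
    """
    # With sphinx.ext.napoleon and typed signatures, autodoc derives an
    # ``:rtype: None`` field from the ``-> None`` annotation; prose that
    # trails the last docstring section can then be rendered awkwardly
    # around that auto-inserted field.


first_line = fit.__doc__.splitlines()[0]
```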

Before submitting
  • Was this discussed/agreed via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)

PR review

Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

📚 Documentation preview 📚: https://pytorch-lightning--21362.org.readthedocs.build/en/21362/

github-actions bot added the pl label (Generic label for PyTorch Lightning package) on Nov 13, 2025
codecov bot commented Nov 14, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 79%. Comparing base (f7692a6) to head (9664830).
⚠️ Report is 1 commit behind head on master.
✅ All tests successful. No failed tests found.

❗ The number of reports uploaded differs between BASE (f7692a6) and HEAD (9664830).

HEAD has 125 fewer uploads than BASE.

Flag              BASE (f7692a6)  HEAD (9664830)
cpu               60              29
lightning         30              14
pytest            30              0
python3.11        12              6
python3.10        6               2
lightning_fabric  15              0
python3.12        18              9
python3.12.7      18              9
python            6               3
pytorch2.1        6               5
pytest-full       30              29
Additional details and impacted files
@@            Coverage Diff            @@
##           master   #21362     +/-   ##
=========================================
- Coverage      87%      79%     -8%     
=========================================
  Files         269      266      -3     
  Lines       23765    23710     -55     
=========================================
- Hits        20591    18693   -1898     
- Misses       3174     5017   +1843     

@deependujha (Collaborator)

Hi @ritoban23, thanks for choosing to make PyTorch Lightning more awesome. Can you please look at the failing doc tests?

@turbotimon commented Nov 14, 2025

I'm wondering why the note is necessary, and why the same issue doesn't occur here, for example:

See :ref:`Lightning inference section<deploy/production_basic:Predict step with your LightningModule>` for more.

However, it might fix the problem, and be more concise, to just move the "For more..." sentence up, just below the parameters, as is done in validate, test and predict:

weights_only: Defaults to ``None``. If ``True``, restricts loading to ``state_dicts`` of plain
``torch.Tensor`` and other primitive types. If loading a checkpoint from a trusted source that contains
an ``nn.Module``, use ``weights_only=False``. If loading checkpoint from an untrusted source, we
recommend using ``weights_only=True``. For more information, please refer to the
`PyTorch Developer Notes on Serialization Semantics <https://docs.pytorch.org/docs/main/notes/serialization.html#id3>`_.
For more information about multiple dataloaders, see this :ref:`section <multiple-dataloaders>`.
Returns:
List of dictionaries with metrics logged during the validation phase, e.g., in model- or callback hooks
like :meth:`~lightning.pytorch.LightningModule.validation_step` etc.
The length of the list corresponds to the number of validation dataloaders used.
Raises:
TypeError:

@bhimrazy (Collaborator)

However, it might fix the problem, and be more concise, to just move the "For more..." sentence up, just below the parameters, as is done in validate, test and predict:

Agreed, @turbotimon. Ideally this text should appear right after the parameters. The tricky part is that the Returns section seems to get auto-inserted in between, which prevents placing the text exactly where we want.
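To make the suggested ordering concrete, here is a hedged sketch, with hypothetical names and abridged text rather than the merged code, of the pattern validate/test/predict use: the cross-reference sentence sits inside the last parameter description, so it renders with the parameter list instead of colliding with the auto-inserted Returns/:rtype: fields.

```python
# Hypothetical, abridged sketch of the suggested ordering (mirroring
# validate/test/predict): keep the "For more information..." sentence
# inside the parameter descriptions, ahead of the Returns section.

def validate(model=None, dataloaders=None, weights_only=None) -> list:
    """Run one evaluation epoch over the validation set.

    Args:
        dataloaders: An iterable or collection of iterables of validation
            samples. For more information about multiple dataloaders, see
            this :ref:`section <multiple-dataloaders>`.
        weights_only: Defaults to ``None``. If ``True``, restricts loading
            to ``state_dicts`` of plain ``torch.Tensor`` and other
            primitive types.

    Returns:
        List of dictionaries with metrics logged during the validation
        phase.
    """
    return []


doc = validate.__doc__
# The cross-reference precedes the Returns section in the docstring text,
# so it stays attached to the parameter list in the rendered output.
ordering_ok = doc.index("For more information") < doc.index("Returns:")
```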

@bhimrazy bhimrazy merged commit 5b52e8f into Lightning-AI:master Nov 19, 2025
84 checks passed
Iruos8805 pushed a commit to Iruos8805/pytorch-lightning that referenced this pull request Dec 4, 2025
* Fix trainer.fit docstring render bug

* retrigger: to check the real issue

* add a workaround to fix

---------

Co-authored-by: Bhimraj Yadav <[email protected]>

Labels

pl Generic label for PyTorch Lightning package

Projects

None yet

Development

Successfully merging this pull request may close these issues.

trainer.fit render bug

5 participants