Add the Latest Features For Basics Autograd Tutorial #3395


Closed
Commits (19)
d37a7fb
update.
ParagEkbote Jun 12, 2025
ad5eb25
update the autograd tutorial.
ParagEkbote Jun 13, 2025
1532c0d
update the tutorial.
ParagEkbote Jun 13, 2025
4feed23
Merge branch 'main' into Add-Latest-Features-For-Autograd-Tutorial
ParagEkbote Jun 13, 2025
f9351d4
Merge branch 'main' into Add-Latest-Features-For-Autograd-Tutorial
ParagEkbote Jun 16, 2025
7609948
Merge branch 'main' into Add-Latest-Features-For-Autograd-Tutorial
ParagEkbote Jun 18, 2025
141aa18
Merge branch 'main' into Add-Latest-Features-For-Autograd-Tutorial
ParagEkbote Jun 19, 2025
4fca41c
Merge branch 'main' into Add-Latest-Features-For-Autograd-Tutorial
ParagEkbote Jun 25, 2025
c11b361
update the tutorial.
ParagEkbote Jun 26, 2025
86cf702
update.
ParagEkbote Jun 26, 2025
00b4978
Merge branch 'main' into Add-Latest-Features-For-Autograd-Tutorial
ParagEkbote Jun 26, 2025
a5c25e5
Merge branch 'main' into Add-Latest-Features-For-Autograd-Tutorial
ParagEkbote Jun 27, 2025
a77d137
update link syntax.
ParagEkbote Jun 30, 2025
87949b0
Merge branch 'Add-Latest-Features-For-Autograd-Tutorial' of https://g…
ParagEkbote Jun 30, 2025
4b736a5
use the rst syntax.
ParagEkbote Jun 30, 2025
d729cd6
Merge branch 'main' into Add-Latest-Features-For-Autograd-Tutorial
ParagEkbote Jul 1, 2025
b9576f7
fix:link syntax.
ParagEkbote Jul 1, 2025
ccc2727
Merge branch 'main' into Add-Latest-Features-For-Autograd-Tutorial
ParagEkbote Jul 3, 2025
8cda00b
Merge branch 'main' into Add-Latest-Features-For-Autograd-Tutorial
AlannaBurke Jul 9, 2025
38 changes: 36 additions & 2 deletions beginner_source/basics/autogradqs_tutorial.py
@@ -32,7 +32,7 @@
y = torch.zeros(3) # expected output
w = torch.randn(5, 3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
-z = torch.matmul(x, w)+b
+z = torch.matmul(x, w) + b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)


@@ -133,7 +133,8 @@
# - To mark some parameters in your neural network as **frozen parameters**.
# - To **speed up computations** when you are only doing the forward pass, because computations on tensors that do
# not track gradients would be more efficient.

# For additional reference, you can view the autograd mechanics
# documentation: https://docs.pytorch.org/docs/stable/notes/autograd.html#locally-disabling-gradient-computation
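#
# For illustration, here is a minimal sketch (not part of this diff) of locally
# disabling gradient tracking with ``torch.no_grad()``:
#
# import torch
#
# w = torch.randn(5, 3, requires_grad=True)
# x = torch.ones(5)
# with torch.no_grad():
#     z = torch.matmul(x, w)  # no graph is recorded for this computation
# print(z.requires_grad)  # False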

######################################################################

@@ -160,6 +161,39 @@
# - accumulates them in the respective tensor’s ``.grad`` attribute
# - using the chain rule, propagates all the way to the leaf tensors.
#
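# For example (a short sketch using the tensors defined at the top of this
# tutorial), the gradients land in the ``.grad`` attribute of the leaf tensors:
#
# loss.backward()
# print(w.grad)  # gradient of loss with respect to w, accumulated in-place
# print(b.grad)  # gradient of loss with respect to b
#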
# We can also visualize the computational graph using the following two methods:
@soulitzer (Contributor) commented on Jun 25, 2025:
I think for this section, we can just keep it short for now, and link to the relevant resources:

To get a sense of what this computational graph looks like we can use the following tools:

1. torchviz is a package to visualize computational graphs:
   https://github.com/szagoruyko/pytorchviz

2. TORCH_LOGS="+autograd" enables logging for the backward pass:
   https://dev-discuss.pytorch.org/t/highlighting-a-few-recent-autograd-features-h2-2023/1787

(for the links, use the proper hyperlink syntax)

@ParagEkbote (Author) replied:

Done.

#
# 1. TORCH_LOGS="+autograd"
# By setting the TORCH_LOGS="+autograd" environment variable, we can enable runtime autograd logs for debugging.
#
# We can enable the logging in the following manner:
# TORCH_LOGS="+autograd" python test.py
#
# 2. Torchviz
# Torchviz is a package that renders the computational graph visually.
#
# We can generate an image of the computational graph in the example given below:
#
# import torch
# from torch import nn
# from torchviz import make_dot
#
# model = nn.Sequential(
#     nn.Linear(8, 16),
#     nn.ReLU(),
#     nn.Linear(16, 1)
# )
#
# x = torch.randn(1, 8, requires_grad=True)
# y = model(x).mean()
#
# # render the computational graph and save it as an image
# dot = make_dot(y, params=dict(model.named_parameters()), show_attrs=True, show_saved=True)
# dot.render('simple_graph', format='png')
#
# Note that setting os.environ['TORCH_LOGS'] = "+autograd" inside the script is
# not reliable: the variable is read when PyTorch initializes its logging, so it
# should be set in the environment before Python starts, as shown in method 1 above.
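#
# As a companion to method 1, here is a minimal sketch of a hypothetical ``test.py``
# that could be run under ``TORCH_LOGS="+autograd"`` to surface the backward-pass logs:
#
# import torch
#
# x = torch.randn(3, requires_grad=True)
# y = (x * x).sum()
# y.backward()  # the autograd logs are emitted during this backward pass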
#
# .. note::
# **DAGs are dynamic in PyTorch**
# An important thing to note is that the graph is recreated from scratch; after each