
Commit 7ae350a

Add clarifying notes to polynomial tutorials about fit limitations

Addresses feedback that the 3rd-degree polynomial fit for sin(x) over [-π, π] may be confusing for beginners. Added educational notes explaining that:

- The example demonstrates gradient descent mechanics, not perfect fitting
- 3rd-degree polynomials have mathematical limitations for this task
- Higher-order terms (5th, 7th degree) are needed for better accuracy
- The imperfect fit is expected and teaches model architecture selection

1 parent 4fa1fa8

6 files changed: +60 −0 lines changed

beginner_source/examples_autograd/polynomial_custom_function.py

Lines changed: 10 additions & 0 deletions

@@ -16,6 +16,16 @@
 In this implementation we implement our own custom autograd function to perform
 :math:`P_3'(x)`. By mathematics, :math:`P_3'(x)=\\frac{3}{2}\\left(5x^2-1\\right)`
+
+.. note::
+    This example is designed to demonstrate the mechanics of gradient descent and
+    backpropagation, not to achieve a perfect fit. A third-degree polynomial has
+    fundamental limitations in approximating :math:`\sin(x)` over the range
+    :math:`[-\pi, \pi]`. The Taylor series for sine requires higher-order terms
+    (5th, 7th degree, etc.) for better accuracy. The resulting polynomial will
+    fit reasonably well near zero but will diverge from :math:`\sin(x)` as you
+    approach :math:`\pm\pi`. This is expected and illustrates the importance of
+    choosing an appropriate model architecture for your problem.
 """
 import torch
 import math
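The added note's claim about polynomial degree can be checked numerically. The sketch below (hypothetical, not part of this commit) compares least-squares fits of increasing degree to sin(x) over [-π, π] and locates where the cubic fit is worst:

```python
# Compare least-squares polynomial fits of sin(x) over [-pi, pi].
# Illustrative only; the commit's tutorials use gradient descent instead.
import numpy as np

x = np.linspace(-np.pi, np.pi, 2000)
y = np.sin(x)

errors = {}
for degree in (3, 5, 7):
    coeffs = np.polyfit(x, y, degree)               # least-squares fit
    errors[degree] = np.abs(np.polyval(coeffs, x) - y)
    print(f"degree {degree}: max |error| = {errors[degree].max():.4f}")

# The cubic's worst error sits at the ends of the interval, matching the
# note's statement that the fit diverges as x approaches +/-pi.
worst_x = x[np.argmax(errors[3])]
print(f"cubic fit is worst near x = {worst_x:.3f}")
```

Higher degrees shrink the maximum error, which is the point the note makes about needing 5th- and 7th-order terms.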

beginner_source/examples_nn/polynomial_module.py

Lines changed: 10 additions & 0 deletions

@@ -9,6 +9,16 @@
 This implementation defines the model as a custom Module subclass. Whenever you
 want a model more complex than a simple sequence of existing Modules you will
 need to define your model this way.
+
+.. note::
+    This example is designed to demonstrate the mechanics of gradient descent and
+    backpropagation, not to achieve a perfect fit. A third-degree polynomial has
+    fundamental limitations in approximating :math:`\sin(x)` over the range
+    :math:`[-\pi, \pi]`. The Taylor series for sine requires higher-order terms
+    (5th, 7th degree, etc.) for better accuracy. The resulting polynomial will
+    fit reasonably well near zero but will diverge from :math:`\sin(x)` as you
+    approach :math:`\pm\pi`. This is expected and illustrates the importance of
+    choosing an appropriate model architecture for your problem.
 """
 import torch
 import math

beginner_source/examples_nn/polynomial_nn.py

Lines changed: 10 additions & 0 deletions

@@ -12,6 +12,16 @@
 this is where the nn package can help. The nn package defines a set of Modules,
 which you can think of as a neural network layer that produces output from
 input and may have some trainable weights.
+
+.. note::
+    This example is designed to demonstrate the mechanics of gradient descent and
+    backpropagation, not to achieve a perfect fit. A third-degree polynomial has
+    fundamental limitations in approximating :math:`\sin(x)` over the range
+    :math:`[-\pi, \pi]`. The Taylor series for sine requires higher-order terms
+    (5th, 7th degree, etc.) for better accuracy. The resulting polynomial will
+    fit reasonably well near zero but will diverge from :math:`\sin(x)` as you
+    approach :math:`\pm\pi`. This is expected and illustrates the importance of
+    choosing an appropriate model architecture for your problem.
 """
 import torch
 import math

beginner_source/examples_nn/polynomial_optim.py

Lines changed: 10 additions & 0 deletions

@@ -12,6 +12,16 @@
 we use the optim package to define an Optimizer that will update the weights
 for us. The optim package defines many optimization algorithms that are commonly
 used for deep learning, including SGD+momentum, RMSProp, Adam, etc.
+
+.. note::
+    This example is designed to demonstrate the mechanics of gradient descent and
+    backpropagation, not to achieve a perfect fit. A third-degree polynomial has
+    fundamental limitations in approximating :math:`\sin(x)` over the range
+    :math:`[-\pi, \pi]`. The Taylor series for sine requires higher-order terms
+    (5th, 7th degree, etc.) for better accuracy. The resulting polynomial will
+    fit reasonably well near zero but will diverge from :math:`\sin(x)` as you
+    approach :math:`\pm\pi`. This is expected and illustrates the importance of
+    choosing an appropriate model architecture for your problem.
 """
 import torch
 import math

beginner_source/examples_tensor/polynomial_numpy.py

Lines changed: 10 additions & 0 deletions

@@ -12,6 +12,16 @@
 A numpy array is a generic n-dimensional array; it does not know anything about
 deep learning or gradients or computational graphs, and is just a way to perform
 generic numeric computations.
+
+.. note::
+    This example is designed to demonstrate the mechanics of gradient descent and
+    backpropagation, not to achieve a perfect fit. A third-degree polynomial has
+    fundamental limitations in approximating :math:`\sin(x)` over the range
+    :math:`[-\pi, \pi]`. The Taylor series for sine requires higher-order terms
+    (5th, 7th degree, etc.) for better accuracy. The resulting polynomial will
+    fit reasonably well near zero but will diverge from :math:`\sin(x)` as you
+    approach :math:`\pm\pi`. This is expected and illustrates the importance of
+    choosing an appropriate model architecture for your problem.
 """
 import numpy as np
 import math
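The "mechanics of gradient descent" that the note refers to can be sketched in a few lines. The following is a minimal illustration in the spirit of polynomial_numpy.py, assuming the tutorial's usual setup (2000 sample points, learning rate 1e-6, 2000 iterations); the zero initialization and coefficient names a–d are illustrative choices, not the tutorial's exact code:

```python
# Sketch (assumptions: 2000 points, lr=1e-6, 2000 steps, zero init) of
# fitting y = a + b*x + c*x**2 + d*x**3 to sin(x) by gradient descent
# on the sum-of-squares loss.
import numpy as np

x = np.linspace(-np.pi, np.pi, 2000)
y = np.sin(x)

a = b = c = d = 0.0
learning_rate = 1e-6
for _ in range(2000):
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    grad_y_pred = 2.0 * (y_pred - y)   # derivative of sum((y_pred - y)**2)
    a -= learning_rate * grad_y_pred.sum()
    b -= learning_rate * (grad_y_pred * x).sum()
    c -= learning_rate * (grad_y_pred * x ** 2).sum()
    d -= learning_rate * (grad_y_pred * x ** 3).sum()

print(f"y ~ {a:.3f} + {b:.3f} x + {c:.3f} x^2 + {d:.3f} x^3")
# The learned cubic tracks sin(x) near 0 but drifts toward +/-pi,
# which is exactly the behavior the added notes describe.
```

Because sin(x) is odd and the grid is symmetric, the even coefficients a and c stay near zero while b and d do the fitting work.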

beginner_source/examples_tensor/polynomial_tensor.py

Lines changed: 10 additions & 0 deletions

@@ -16,6 +16,16 @@
 The biggest difference between a numpy array and a PyTorch Tensor is that
 a PyTorch Tensor can run on either CPU or GPU. To run operations on the GPU,
 just cast the Tensor to a cuda datatype.
+
+.. note::
+    This example is designed to demonstrate the mechanics of gradient descent and
+    backpropagation, not to achieve a perfect fit. A third-degree polynomial has
+    fundamental limitations in approximating :math:`\sin(x)` over the range
+    :math:`[-\pi, \pi]`. The Taylor series for sine requires higher-order terms
+    (5th, 7th degree, etc.) for better accuracy. The resulting polynomial will
+    fit reasonably well near zero but will diverge from :math:`\sin(x)` as you
+    approach :math:`\pm\pi`. This is expected and illustrates the importance of
+    choosing an appropriate model architecture for your problem.
 """

 import torch
