This repository was archived by the owner on Apr 28, 2023. It is now read-only.

Commit dda1be0

add small notes to autograd tc tutorial
1 parent 8d3d5fa commit dda1be0

File tree

2 files changed: +5 −5 lines changed


docs/source/framework/pytorch_integration/autograd_with_tc.rst

Lines changed: 2 additions & 2 deletions
@@ -3,8 +3,8 @@ Autograd with TC
 
 We provide the TC intergation with PyTorch `autograd` so that it is easy to write
 a training layer with TC and be able to run backwards as well if the layer is part
-of a network. In order to write a training layer with TC, you need to follow the
-steps below:
+of a network. We do not support double backwards right now. In order to write a
+training layer with TC, you need to follow the steps below:
 
 1. Define your TC language that has two definitions: one for the forward layer and the other for the backward layer and pass it to :code:`tc.define` call. In addition, also pass :code:`training=True` and the name of the backward TC :code:`backward`.
 
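The forward/backward pairing this diff documents can be illustrated outside of TC. Below is a minimal NumPy sketch (the function names are illustrative, not TC's API) of what a forward definition and its matching backward definition compute, using a plain matmul as the layer:

```python
import numpy as np

# Hedged sketch: the kind of forward/backward pair a training TC encodes,
# written here by hand for a plain matmul, O = X @ W.
def matmul_forward(X, W):
    # forward TC: output(m, k) +=! X(m, nn) * W(nn, k)
    return X @ W

def matmul_backward(X, W, d_O):
    # backward TC would reduce the complementary indices:
    #   d_X(m, nn) +=! d_O(m, k) * W(nn, k)
    #   d_W(nn, k) +=! X(m, nn) * d_O(m, k)
    return d_O @ W.T, X.T @ d_O
```

A single backward definition like this covers one application of the chain rule, which matches the note above: chaining a second differentiation (double backwards) is not supported.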
docs/source/tutorials/tutorial_tensordot_with_tc.rst

Lines changed: 3 additions & 3 deletions
@@ -36,7 +36,7 @@ to TensorDot operation.
 
 A simple 2D matrix multiply operation in TC is expressed as:
 
-.. code-block:: python
+.. code::
 
   def matmul(float(M, N) X, float(N, K) W) -> (output) {
     output(m, k) +=! X(m, nn) * W(nn, k)
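For readers following along outside of TC: the reduction over the repeated index nn in this expression is exactly an einsum contraction. A small NumPy sketch of the same computation (shapes are illustrative):

```python
import numpy as np

# The TC expression output(m, k) +=! X(m, nn) * W(nn, k) sums over the
# repeated index nn -- an ordinary matrix multiply, einsum 'mn,nk->mk'.
X = np.random.rand(3, 4)   # float(M, N)
W = np.random.rand(4, 5)   # float(N, K)
output = np.einsum('mn,nk->mk', X, W)
```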
@@ -47,7 +47,7 @@ The variable :code:`nn` is being reduced in above expression. Now, let's write a
 **batched matrix-multiply** operation using above expression. For that, we need to
 add a batch dimension to it and the expression becomes:
 
-.. code-block:: python
+.. code::
 
   def batch_matmul(float(B, M, N) X, float(B, N, K) W) -> (output) {
     output(b, m, k) +=! X(b, m, nn) * W(b, nn, k)
@@ -56,7 +56,7 @@ add a batch dimension to it and the expression becomes:
 Now, for the tensordot operation, we need to add spatial dimensions :code:`H` and :code:`W`
 to the batched matrix multiply, and the expression for TensorDot becomes:
 
-.. code-block:: python
+.. code::
 
   def tensordot(float(B, C1, C2, H, W) I0, float(B, C2, C3, H, W) I1) -> (O) {
     O(b, c1, c3, h, w) +=! I0(b, c1, c2, h, w) * I1(b, c2, c3, h, w)
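To check the intent of this expression: the tensordot is a batched matrix multiply over the c2 index, carried out independently at every spatial location (h, w). A hedged NumPy sketch of the equivalent einsum (sizes are arbitrary):

```python
import numpy as np

# TC: O(b, c1, c3, h, w) +=! I0(b, c1, c2, h, w) * I1(b, c2, c3, h, w)
# i.e. contract over c2, independently for each batch b and location (h, w).
B, C1, C2, C3, H, W = 2, 3, 4, 5, 6, 7
I0 = np.random.rand(B, C1, C2, H, W)
I1 = np.random.rand(B, C2, C3, H, W)
O = np.einsum('bijhw,bjkhw->bikhw', I0, I1)
```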

0 commit comments
