
Commit f908939

Fix images path
1 parent: 92bdfca · commit: f908939

File tree

1 file changed (+2 −2 lines)


_gsocblogs/2024/blog_CUDA_kernels_autodiff.md

Lines changed: 2 additions & 2 deletions
@@ -61,7 +61,7 @@ Coming back to the CUDA kernel case, unfortunately we cannot launch a kernel ins
 Though the first option is more desirable, it would introduce the need to know the configuration of the grid for each kernel execution at compile time, and consequently, have a separate call to `clad::gradient`
 for each configuration which, each time, creates the same function anew, diverging only on the kernel launch configuration. As a result, the second approach is the one followed.
 
-![kernel-to-device-grad](/images/others/kernel-to-device-grad.png)
+![kernel-to-device-grad]({{ site.baseurl }}/images/others/kernel-to-device-grad.png){:.center-image}
 
 
 #### 2. Execution
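The context lines above come from the post's explanation of why the kernel is lowered to a device-side pullback wrapped in a kernel: one `clad::gradient` call can then serve any launch configuration, with the grid chosen only at execution time. As a rough usage sketch, assuming Clad's CUDA support exposes an `execute_kernel`-style entry point that accepts the grid and block dimensions at run time (the kernel body, buffer names, and sizes below are illustrative, not taken from the commit):

```cpp
#include "clad/Differentiator/Differentiator.h"
#include <cuda_runtime.h>

// Hypothetical kernel, used only to illustrate the call pattern.
__global__ void square(double *in, double *out) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  out[i] = in[i] * in[i];
}

int main() {
  constexpr int n = 1024;
  double *in, *out, *d_in, *d_out;
  cudaMalloc(&in, n * sizeof(double));
  cudaMalloc(&out, n * sizeof(double));
  cudaMalloc(&d_in, n * sizeof(double));   // adjoint of `in`, accumulated by the gradient
  cudaMalloc(&d_out, n * sizeof(double));  // adjoint of `out`, seeded by the caller
  // ... fill `in` and seed `d_out` here, e.g. via cudaMemcpy/cudaMemset ...

  // A single clad::gradient call, with no launch configuration baked in.
  auto grad = clad::gradient(square);

  // The grid and block dimensions are supplied only at execution time, so the
  // same generated gradient can be reused for every configuration.
  grad.execute_kernel(dim3(n / 256), dim3(256), in, out, d_in, d_out);
  cudaDeviceSynchronize();
  return 0;
}
```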
@@ -119,7 +119,7 @@ In the reverse-mode autodiff, as previously explained, the left and right hand-s
 
 An easy way around this was the use of atomic operations every time the memory addresses of the output derivatives are to be updated.
 
-![atomic-add](/images/others/atomic-add.png)
+![atomic-add]({{ site.baseurl }}/images/others/atomic-add.png){:.center-image}
 
 
 One thing to bear in mind that will come in handy is that atomic operations can only be applied on global memory addresses, which also makes sense because all threads have access to that memory space. All kernel arguments are inherently global, so no need to second-guess this for now.
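This second hunk sits in the post's discussion of why derivative updates in the reverse pass are made atomic: several threads may write to the same output-derivative address, so a plain read-modify-write would race. A minimal sketch of that idea, using a hypothetical hand-written pullback rather than Clad's actual generated code:

```cpp
#include <cuda_runtime.h>

// Hypothetical forward kernel: many threads read the same element of `in`.
__global__ void broadcast_scale(const double *in, double *out) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  out[i] = 2.0 * in[i % 32];
}

// Hand-written pullback for illustration. The reverse of
//   out[i] = 2.0 * in[i % 32];
// is
//   d_in[i % 32] += 2.0 * d_out[i];
// and since many threads target the same d_in slot, the update must be atomic.
// (atomicAdd on double requires compute capability 6.0 or newer.)
__global__ void broadcast_scale_pullback(const double *in, double *out,
                                         double *d_in, const double *d_out) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  atomicAdd(&d_in[i % 32], 2.0 * d_out[i]);
}
```

Here `d_in` is a kernel argument and therefore lives in global memory, which is exactly the case the quoted paragraph says atomics cover.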

0 commit comments
