---
title: "Activity analysis for reverse-mode differentiation of (CUDA) GPU kernels"
layout: post
excerpt: "A GSoC 2025 contributor project aiming to implement Activity Analysis for (CUDA) GPU kernels"
sitemap: false
author: Maksym Andriichuk
permalink: blogs/2025_maksym_andriichuk_introduction_blog/
banner_image: /images/blog/gsoc-banner.png
date: 2025-07-14
tags: gsoc c++ clang root auto-differentiation
---

### Introduction
Hi! I'm Maksym Andriichuk, a third-year Mathematics student at JMU Wuerzburg. I am excited to be a part of the Clad team for this year's Google Summer of Code.

### Project description
My project focuses on removing atomic operations when differentiating CUDA kernels. Because of how reverse-mode differentiation works in Clad, accessing GPU global memory inside the gradient of a kernel can cause data races, so atomic operations are used instead. However, in some cases we can guarantee that no data race occurs, which lets us drop the atomic operations and drastically speed up the execution of the gradient.
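
To make this concrete, here is a hand-written sketch (my own illustration, not Clad's actual generated code) of a squaring kernel and a simplified reverse-mode gradient for it. The conservative gradient guards its adjoint accumulation with `atomicAdd`, even though each thread here writes a distinct slot, so the atomic could safely be dropped:

```cpp
#include <cuda_runtime.h>

// Primal kernel: each thread squares one element.
__global__ void square(const double* x, double* y, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = x[i] * x[i];
}

// Sketch of the reverse-mode gradient. Reverse mode turns each
// read of x[i] into an accumulating write to its adjoint d_x[i],
// so the conservative code uses atomicAdd (needs sm_60+ for
// doubles). Here every thread touches a distinct index i, which
// is exactly the situation the analysis would detect.
__global__ void square_grad(const double* x, const double* d_y,
                            double* d_x, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    atomicAdd(&d_x[i], 2.0 * x[i] * d_y[i]); // conservative
    // d_x[i] += 2.0 * x[i] * d_y[i];        // race-free form
  }
}
```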
### Project goals
The main goals of this project are:

- Implement a mechanism to check whether data races occur in various scenarios (see the contrasting kernels below).
- Compare Clad with other tools on benchmarks including RSBench and LULESH.
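
As a rough illustration of two such scenarios (the kernels below are hypothetical examples, not project benchmarks): in the first kernel each thread reads a distinct element, so the adjoint writes in its gradient are race-free; in the second, all threads read `x[0]`, so their gradient contributions all accumulate into `d_x[0]` and the atomic is genuinely needed.

```cpp
// Each thread reads x[i] with a unique i: the gradient's write
// to d_x[i] is private to the thread, so no atomic is needed.
__global__ void scale(const double* x, double* y, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = 2.0 * x[i];
}

// Every thread reads the shared value x[0]: in the gradient, all
// threads accumulate into d_x[0], so dropping the atomic there
// would introduce a data race.
__global__ void broadcast(const double* x, double* y, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) y[i] = x[0] + (double)i;
}
```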
### Implementation strategy
- Solve minor CUDA-related issues to get familiar with the codebase.
- Implement a series of visitors to distinguish between the different scenarios in which atomic operations can be dropped (a sketch follows this list).
- Use the existing benchmarks to measure the speedup from the implemented analysis.
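
To give a flavour of what such a visitor might look like, here is a minimal hypothetical sketch using Clang's `RecursiveASTVisitor` (not Clad's actual implementation). It merely collects the array subscripts in a kernel body; a later step would then check whether each index is derived from `threadIdx` and is therefore unique per thread:

```cpp
#include "clang/AST/Expr.h"
#include "clang/AST/RecursiveASTVisitor.h"
#include "llvm/ADT/SmallVector.h"

// Collects every array-subscript expression in a kernel body so a
// later pass can classify each index as thread-unique or shared.
class SubscriptCollector
    : public clang::RecursiveASTVisitor<SubscriptCollector> {
public:
  bool VisitArraySubscriptExpr(clang::ArraySubscriptExpr* ASE) {
    Subscripts.push_back(ASE);
    return true; // keep traversing the rest of the body
  }
  llvm::SmallVector<clang::ArraySubscriptExpr*, 8> Subscripts;
};

// Usage sketch: SubscriptCollector C; C.TraverseStmt(KernelBody);
```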
### Conclusion

By integrating an activity analysis for (CUDA) GPU kernels, we aim to speed up the execution of the gradient by removing atomic operations where possible. To declare success, we will compare Clad to other AD tools on different benchmarks. I am excited to be a part of the Clad team this summer and cannot wait to share my progress.
### Related Links
- [My GitHub profile](https://github.com/ovdiiuv)
