Commit 68771f1

post: Efficient Attention
Online self-attention latex
1 parent 17893ed commit 68771f1

File tree

1 file changed: +1 -0 lines changed

_posts/DeepLearning/Kernel Fusion/2025-03-07-fused.md

Lines changed: 1 addition & 0 deletions
@@ -165,6 +165,7 @@ Because of softmax, attention must see the entire input.
This is applied to self-attention as well.[[link]](https://arxiv.org/pdf/2112.05682)
+ $$ v^* \in \mathbb{R}^d, \quad s^* \in \mathbb{R}; \qquad s_i = \mathrm{dot}(q, k_i), \quad v^* \leftarrow v^* + v_i e^{s_i}, \quad s^* \leftarrow s^* + e^{s_i} $$
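The added update rule can be sketched in NumPy. This is a minimal illustration, not the post's actual code: the function name `online_attention` is hypothetical, and the running-maximum rescaling for numerical stability follows the linked paper (arXiv:2112.05682) rather than the bare rule above.

```python
import numpy as np

def online_attention(q, K, V):
    """Single-query attention computed one key/value at a time.

    Accumulates v* (weighted value sum) and s* (normalizer) as in
    the rule above, with a running max m so exponentials stay bounded.
    """
    d = V.shape[1]
    v_star = np.zeros(d)   # v* in R^d
    s_star = 0.0           # s* in R
    m = -np.inf            # running max of scores, for stability
    for k_i, v_i in zip(K, V):
        s_i = np.dot(q, k_i)
        m_new = max(m, s_i)
        # rescale previous accumulators to the new reference max
        scale = np.exp(m - m_new) if m != -np.inf else 0.0
        v_star = v_star * scale + v_i * np.exp(s_i - m_new)
        s_star = s_star * scale + np.exp(s_i - m_new)
        m = m_new
    return v_star / s_star  # softmax(q K^T) V for this query
```

Because only `v_star`, `s_star`, and `m` are kept between steps, memory is O(d) in the sequence length, which is the point of the online formulation.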