-
I also encountered a similar issue when running this version of the tutorial. My setup is the following:
I believe the issue is that the latest tutorial no longer supports Ampere and older GPUs, since it relies on TMA hardware support, which was only introduced with Hopper (H100). Running an older version of the tutorial worked for me: https://github.com/triton-lang/triton/blob/8bd8035559d4d2fa387149eecdf10d545d8a3d03/python/tutorials/06-fused-attention.py
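A quick way to check whether your GPU can run the TMA-based version is to look at its CUDA compute capability: TMA requires Hopper (compute capability 9.0, e.g. H100) or newer. A minimal sketch, assuming PyTorch is available; the `supports_tma` helper is hypothetical, not part of Triton's API:

```python
def supports_tma(major: int, minor: int) -> bool:
    # TMA (Tensor Memory Accelerator) hardware was introduced with
    # Hopper, i.e. compute capability 9.0 (sm_90) and newer.
    return (major, minor) >= (9, 0)

# Example usage with PyTorch (requires a CUDA device):
# import torch
# major, minor = torch.cuda.get_device_capability()
# if not supports_tma(major, minor):
#     print("This GPU predates Hopper; use the older tutorial revision.")
```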
-
```
import triton.tools.experimental_descriptor
ModuleNotFoundError: No module named 'triton.tools.experimental_descriptor'
```
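One way to fail gracefully when a tutorial module has been removed or renamed across Triton versions is to probe for it before importing. A minimal sketch; `module_available` is a hypothetical helper, not Triton API:

```python
import importlib.util


def module_available(name: str) -> bool:
    """Return True if `name` can be imported in this environment."""
    try:
        # find_spec raises ModuleNotFoundError if a parent package
        # (e.g. triton itself) is not installed.
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False


# Example: skip the TMA code path if the helper module is gone.
# if not module_available("triton.tools.experimental_descriptor"):
#     print("Module missing; fall back to the older tutorial revision.")
```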
Triton is good, but the tutorial examples seem obsolete. They are important for newcomers; can they be updated?