Int8 Matmul not supported on gfx1030? #1
Attempting to use this library on a gfx1030 (6800 XT) with Hugging Face transformers fails. The built-in debug check itself passes:
python -m bitsandbytes
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++ DEBUG INFORMATION +++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Running a quick check that:
+ library is importable
+ CUDA function is callable
SUCCESS!
Installation was successful!
Trying to load a simple Hugging Face transformer then results in:
=============================================
ERROR: Your GPU does not support Int8 Matmul!
=============================================
python3: /dockerx/temp/bitsandbytes-rocm/csrc/ops.cu:347: int igemmlt(cublasLtHandle_t, int, int, int, const int8_t *, const int8_t *, void *, float *, int, int, int) [FORMATB = 3, DTYPE_OUT = 32, SCALE_ROWS = 0]: Assertion `false' failed.
Aborted (core dumped)
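For reference, this is roughly the kind of 8-bit load that hits that code path. It is only a minimal sketch of the standard transformers + bitsandbytes int8 route; the model name and prompt are illustrative, not taken from my actual script:

```python
# Minimal sketch of an 8-bit load via transformers + bitsandbytes.
# The model id is only an example; any load with load_in_8bit=True
# routes the linear layers through the bitsandbytes int8 matmul.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"  # example model, not from the report

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires accelerate
    load_in_8bit=True,   # enables the bitsandbytes int8 path
)

inputs = tokenizer("Hello", return_tensors="pt").to(0)
out = model.generate(**inputs, max_new_tokens=5)  # the int8 matmul runs here
```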
I am using ROCm 5.4.0 (I updated the library paths in the Makefile to point to 5.4).
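For completeness, a quick way to confirm what PyTorch sees on the ROCm side (plain torch calls, nothing specific to this repo):

```python
import torch

# Confirm the ROCm/HIP runtime PyTorch was built against and the visible GPU.
print(torch.version.hip)              # a version string on a ROCm build, None on CUDA
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))  # should report the 6800 XT (gfx1030)
```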