Is there some secret to getting xformers to work on Linux? #3609
-
I've tried maybe 80 times, with different steps gleaned from the internet, to get xformers to work. My environment is `Python 3.10.6 (main, Oct 7 2022, 20:19:58) [GCC 11.2.0]`, and the startup log shows `BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]` before cutting off at `To create a public link, set`. Any help would be appreciated.
Replies: 3 comments 2 replies
-
Yeah, I've also been getting this on Linux. I'm using 1.12.1+cu116, but I've seen quite a few examples opt for 1.12.1+cu113, which I'm going to try shortly. Could be related.
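If it helps anyone narrow this down: the `+cuXXX` suffix in the torch version string is the CUDA toolkit tag the wheel was built against (`python -c "import torch; print(torch.__version__)"` shows it), and it should match the toolkit your xformers build used. A small sketch of reading that tag, using the version strings from this thread:

```shell
# Pull the CUDA tag out of a torch version string; a missing "+cuXXX"
# suffix means a CPU-only build. Version strings are from this thread.
cuda_tag() {
    case "$1" in
        *+cu*) echo "${1##*+}" ;;
        *)     echo "cpu-only" ;;
    esac
}
cuda_tag "1.12.1+cu116"   # -> cu116
cuda_tag "1.12.1+cu113"   # -> cu113
cuda_tag "1.12.1"         # -> cpu-only
```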
-
Same here. I couldn't run it with the --xformers flag in the launch args, so I had to build it manually on Linux. This is the error I'm getting after pressing "Generate":

Could not run 'xformers::efficient_attention_forward_cutlass' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'xformers::efficient_attention_forward_cutlass' is only available for these backends: [UNKNOWN_TENSOR_TYPE_ID, QuantizedXPU, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].
-
OK, I solved the Rubik's Cube of the RTX 2080 Ti on Linux.
It's super important that your nvcc bin directory is on your PATH; in my case it was /usr/local/cuda-11.8/bin.
So:
cd to your stable-diffusion-webui base directory
source venv/bin/activate  # important, so you use the right Python (3.10.6) and pip installs into the venv
export PATH=$PATH:/usr/local/cuda-11.8/bin  # no space after the colon
(Cribbed from #3525, where I found the answer.)
export FORCE_CUDA="1"
export TORCH_CUDA_ARCH_LIST=7.5  # 7.5 is the compute capability of the RTX 2080 Ti
export CUDA_VISIBLE_DEVICES=0
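A quick sanity check after those exports (my install path again; adjust to wherever your CUDA toolkit lives). If the append worked, the CUDA bin directory is the last PATH entry, and nvcc should resolve from there:

```shell
# Verify the CUDA toolkit is reachable after the exports above;
# /usr/local/cuda-11.8 is my install location, adjust to yours.
export PATH="$PATH:/usr/local/cuda-11.8/bin"
echo "$PATH" | tr ':' '\n' | tail -n 1   # -> /usr/local/cuda-11.8/bin
command -v nvcc || echo "nvcc not found - the xformers build will not pick up CUDA"
```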
Then follow the directions for xformers:
add the --xformers command-line option to webui-user.sh
and use ./webui.sh to start.
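For reference, the flag goes into the COMMANDLINE_ARGS line of webui-user.sh (a fragment; keep whatever other arguments you already pass in the same string):

```shell
# webui-user.sh (fragment): launch arguments passed through to the webui
export COMMANDLINE_ARGS="--xformers"
```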
Everything worked beautifully.