Automatic1111 with Dreambooth on Windows 11, WSL2, Ubuntu, NVIDIA #8670
Replies: 2 comments 4 replies
-
When I run Automatic1111 on WSL2, models take around 5 minutes to load, and I haven't been able to generate an image without an error yet. Running outside WSL2 seems to work fine.
-
Re: LD_LIBRARY_PATH - this works, but it isn't the cleanest approach. Re: WSL2 and slow model load - if your models are hosted outside WSL's main disk (e.g. over the network, or anywhere under /mnt/x), then yes, loading is slow: the model loader uses memory-mapped access to checkpoint files, which causes an I/O storm. I've rewritten the model loader on my fork to handle this differently, since streaming loads are much faster than memory-mapped access on anything non-local.
-
I've installed auto1111 on this setup a couple of times now and have had trouble using any of the official instructions or old discussions on this topic, so I wanted to jot down what worked for me in case anyone else is struggling. Note that the sequence matters: some things need to be done as root, some as a local user, and some specifically inside a conda environment. The steps are written to ensure you're doing the right thing at the right time.
Start with a fresh install of Ubuntu from the Microsoft Store. I prefer to use VS Code as my GUI; it integrates really well with WSL. Install the latest NVIDIA drivers in Windows (not in WSL).
Load up WSL
update Ubuntu install
sudo apt-get update
sudo apt-get upgrade
su into a non-root user
su [your user name]
install anaconda, use the most recent version from https://repo.anaconda.com/archive/
wget https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh
chmod +x Anaconda3-2022.10-Linux-x86_64.sh
./Anaconda3-2022.10-Linux-x86_64.sh
delete installer (if you want) then close and reopen shell as instructed by anaconda install script
su back into your user
su [your user name]
pull down auto1111 repo and cd into directory
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
set up the "automatic" conda environment and then activate it
conda env create -f environment-wsl2.yaml
conda activate automatic
use the current instructions at https://ubuntu.com/tutorials/enabling-gpu-acceleration-on-ubuntu-on-wsl2-with-the-nvidia-cuda-platform#3-install-nvidia-cuda-on-ubuntu to install CUDA in WSL; they're pasted below since they haven't changed in a while
sudo apt-key del 7fa2af80
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/3bf863cc.pub
sudo add-apt-repository 'deb https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/ /'
sudo apt-get update
sudo apt-get -y install cuda
locate cuda
sudo find /usr/ -name 'libcuda.so'
This step is optional but recommended. If you skip it, you'll need to run the two export lines below every time you start auto1111; it's less annoying in the long run to set them permanently. Create a new file, /etc/profile.d/my_vars.sh, and add the following lines to it, adjusting the paths as needed based on the output of the find command you ran above to locate CUDA:
export CUDA_HOME=/usr/local/cuda-12.1
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib/wsl/lib/"
sudo chmod o+r /etc/profile.d/my_vars.sh
finally, run webui.sh, which installs everything else you need for the base install. I ran it with the xformers argument, and xformers was installed by the time the script finished. I don't know whether it would have been installed without the argument; I'd include it to be safe, since you will likely want it for Dreambooth
'/home/tdtru/stable-diffusion-webui/webui.sh' --xformers
Going forward, to run auto1111, run the following lines in the terminal. You can skip the two 'export' lines if you did the steps above to set them permanently.
su [your user name]
conda activate automatic
cd stable-diffusion-webui
export CUDA_HOME=/usr/local/cuda-12.1
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib/wsl/lib/"
./webui.sh --xformers
Some additional tips:
After you have a working install, export it as an image so that if you ever hose it up, you can quickly get back to a working copy.
https://learn.microsoft.com/en-us/windows/wsl/basic-commands#import-and-export-a-distribution
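As a sketch of what the backup and restore look like (the distro name and file paths here are placeholders; check yours with wsl -l -v):

```shell
# Run from Windows (PowerShell or cmd), not from inside WSL.
# 'Ubuntu' and the paths are examples -- substitute your actual distro name.
wsl --export Ubuntu C:\backups\ubuntu-sd-working.tar

# Later, restore that snapshot as a distro (here under a new name):
wsl --import Ubuntu-restored C:\WSL\Ubuntu-restored C:\backups\ubuntu-sd-working.tar
```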
You can increase the memory and swap space allocated to WSL by creating a config file. My WSL instance would occasionally crash with a memory error message, and this fixed it right up for me.
https://learn.microsoft.com/en-us/windows/wsl/wsl-config#wslconfig
Here are the contents of my .wslconfig file:
[wsl2]
memory=24GB # How much memory to assign to the WSL 2 VM.
swap=32GB # How much swap space to add to the WSL 2 VM, 0 for no swap file
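A note on where this file goes, since it's easy to miss: .wslconfig lives in your Windows user profile folder (%UserProfile%\.wslconfig), not inside WSL, and the new limits only apply after the WSL 2 VM restarts:

```shell
# Run from Windows (PowerShell or cmd), not from inside WSL.
# Stops the WSL 2 VM so the .wslconfig limits take effect on the next launch.
wsl --shutdown
```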
You can store all of your SD models in Windows and use soft symbolic links so that auto1111 in WSL points to the Windows folder. In WSL, all of your Windows drives are mounted at /mnt/[drive letter]; for example, your C: drive is mounted as /mnt/c/. Using this, you can set up a symlink to your model folder in Windows with a command similar to the one below in the WSL terminal:
ln -s '/mnt/[windows drive]/[path to model folder]/[name of model folder]' '/home/[your user name]/stable-diffusion-webui/models/Stable-diffusion/[name of your model folder]'
This will make your WSL virtual disk MUCH smaller.
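A concrete worked version of that command (the D:\AI\models path and folder names are hypothetical; substitute your own drive, path, and user), with a quick check that the link was created:

```shell
# Hypothetical example: a Windows folder D:\AI\models, linked into the default
# auto1111 model directory under your WSL home. Adjust both paths to your setup.
WIN_MODELS='/mnt/d/AI/models'
WSL_MODELS="$HOME/stable-diffusion-webui/models/Stable-diffusion/windows-models"

mkdir -p "$(dirname "$WSL_MODELS")"   # make sure the parent directory exists
ln -s "$WIN_MODELS" "$WSL_MODELS"     # create the soft link
ls -l "$WSL_MODELS"                   # should show: windows-models -> /mnt/d/AI/models
```

Note that ln -s happily creates the link even if the target doesn't exist yet, so double-check the /mnt path is right before trusting it.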