[DRAFT] AMD/ROCM Support #1984
Conversation
Having built-in ROCM support would be great!
What is unclear exactly?

Regarding Nunchaku/SVDQ: for local testing of both the docker image and the UI installer, it can be helpful to run the local file server instead of downloading from huggingface. Run it from the repo root; to use the local file server with docker, pass the corresponding option.
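As a stand-in, a local file server can be as simple as Python's built-in `http.server`. This is a hypothetical sketch, not the project's actual script; the directory name and port are assumptions:

```python
# Hypothetical sketch of a local file server for model downloads during
# testing; the repo's real script, its port, and its flags may differ.
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer


def make_file_server(directory: str, port: int = 8000) -> ThreadingHTTPServer:
    """Create an HTTP server that serves files from `directory`."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    return ThreadingHTTPServer(("127.0.0.1", port), handler)


# Usage: make_file_server("models", 8000).serve_forever()
```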
That's helpful, thank you. For SVDQ, I have Nunchaku working in my custom nvidia `Dockerfile`.
Likely you're missing a dependency.
@Acly Yup, that was it. I've got everything building now, but I do have an infrastructure question. It seems the images could be published separately. Each of the above would also get a version tag as they were updated on releases. Any objections to this?
You probably saw there's a base version already; I use it e.g. to build the cloud images. But what's the use case for making them public? Just customization? I think it's more likely people will just find one of the many ComfyUI images and add the few things Krita needs. The existing image is mainly used by people who don't know much about docker and need something that just works. I believe nginx is required to expose ComfyUI on typical hosters, at least for the websocket-over-http. The other stuff also doesn't hurt much, all things considered. Building on GH actions would be nice; so far the images are pushed to https://hub.docker.com/r/aclysia/sd-comfyui-krita/tags though.
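For reference, websocket proxying in nginx needs the `Upgrade`/`Connection` headers forwarded explicitly. A hypothetical fragment, assuming ComfyUI's default port 8188 and `/ws` endpoint rather than the image's actual config:

```nginx
# Hypothetical sketch -- not the image's actual nginx config.
location /ws {
    proxy_pass http://127.0.0.1:8188;       # ComfyUI default port
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;  # required for websocket upgrade
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400s;               # keep long-lived generation sockets open
}
```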
I updated models.json to include the non-nunchaku version of Kontext (and fp4 versions for nvidia blackwell). Also, download_models.py should now automatically fetch the matching set of models for detected hardware, see 21e7183. I haven't tested this with the docker images yet though.
In #2228 I have it working / installing ROCm dependencies for the diffusers pipeline.
This DRAFT PR is under development to facilitate conversation about a formalized ROCM implementation in Krita AI Diffusion.

Outside of KAD, I have a functioning `Dockerfile` that supports KAD with all features of SD1.5, SDXL, and FLUX (including Kontext, but not the SVDQ quant, since Nunchaku does not support ROCM). This PR will port my work directly into KAD so that it can formally support ROCM going forward, but it will need some guidance to complete as I'm not familiar with the code base.

Working:

In progress:

- `Dockerfile` based on successful external install
- `models` to throw errors on lack of SVDQ (if needed, not yet in PR)

To do:

I'm going to assume there are a number of things in the to-do list that I'm missing. My biggest question atm is that there is no documentation on how to use the `docker.py` script in a development environment to build the container, as the ComfyUI setup is done outside of the `Dockerfile` (my custom setup does the full build in the `Dockerfile`).
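For context, a "full build in the `Dockerfile`" approach for ROCm might be sketched roughly as below. The base image tag, wheel index, and paths are assumptions for illustration, not the actual `Dockerfile` from this PR:

```dockerfile
# Hypothetical sketch only -- not the Dockerfile from this PR.
FROM rocm/dev-ubuntu-22.04:6.2

RUN apt-get update && apt-get install -y git python3 python3-pip

# ROCm builds of torch come from the dedicated PyTorch wheel index.
RUN pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.2

# Install ComfyUI itself; Krita-specific custom nodes would follow.
RUN git clone https://github.com/comfyanonymous/ComfyUI /opt/ComfyUI \
    && pip3 install -r /opt/ComfyUI/requirements.txt

EXPOSE 8188
CMD ["python3", "/opt/ComfyUI/main.py", "--listen", "0.0.0.0"]
```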