This repository was archived by the owner on Oct 19, 2025. It is now read-only.

Commit c726106 (1 parent: 55be9e7)

Add in dependency to make torch.compile work, update nightly to work on latest upstream torch packages in the repository, and update README.

3 files changed: +17 −7 lines

README.md

Lines changed: 11 additions & 5 deletions
@@ -9,22 +9,28 @@ The Docker/OCI image includes
 - Intel Graphics driver
 - Basic Python virtual environment
 
-Intel Extension for Pytorch (IPEX) and other python packages and dependencies will be installed upon first launch of the container. They will be installed in a Python virtual environment in a separate volume to allow for reuse between containers and to make rebuilding images in between changes a lot faster.
+Intel Extension for PyTorch (IPEX) and other Python packages and dependencies are installed upon first launch of the container. They go into a Python virtual environment on a separate volume, allowing reuse between containers and making image rebuilds between changes much faster.
+
+## Note on updates
+
+IPEX is updated on a roughly three-month cycle, so this repository mostly follows that cadence, with long gaps between commits.
+Currently, this repository is undergoing heavy modification to support IPEX v2.5.10+xpu, which will take a while longer to implement.
 
 ## Prerequisites
 
 * Intel GPU which has support for Intel's oneAPI AI toolkit. According to Intel's support link [here](https://www.intel.com/content/www/us/en/developer/articles/system-requirements/intel-oneapi-ai-analytics-toolkit-system-requirements.html), the following GPUs are supported:
 - Intel® Data Center GPU Flex Series
 - Intel® Data Center GPU Max Series
 - Intel® Arc™ A-Series Graphics
-
-There are reports that Intel® Xe GPUs (iGPU and dGPU) in Tiger Lake (11th generation) and newer Intel processors are also capable of running oneAPI but this has not been tested and it seems to rely on custom compilation of the software yourself. Feel free to file any issues if this is the case as the infrastructure is there for support to be implemented, it seems. Otherwise, any other Intel GPUs are unfortunately not supported and will need to have its support enabled by Intel for oneAPI. If you are in such a position and want to run Stable Diffusion with an older Intel GPU, ComfyUI and this repository won't be able to do that for you at this time but please take a look at Intel's OpenVINO fork of stable-diffusion-webui located [here](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) for a way to possibly do that.
 * Docker (Desktop) or podman
 * Linux or Windows, with the latest drivers installed.
 
 Windows should work, but running this way is not recommended unless you have a specific reason to do so, e.g. needing a Linux host/userspace to run custom nodes. For most purposes, a native install will give better speeds and fewer headaches. Please follow the install instructions in the [ComfyUI README.md](https://github.com/comfyanonymous/ComfyUI/?tab=readme-ov-file#intel-gpus)
 * If using Windows, you must have WSL2 set up via [this link](https://learn.microsoft.com/en-us/windows/wsl/install) in addition to Docker to be able to pass through your GPU.
 
+Intel Xe GPUs (iGPU and dGPU) in Tiger Lake (11th generation) and newer Intel processors, up to the Intel® Arc™ A-Series Graphics, as well as newer GPUs up to the Intel® Arc™ B-Series Graphics, are also capable of running oneAPI, but there are no official releases of IPEX for those GPUs; you will need to compile the software yourself in that case. Feel free to file issues for those GPUs, as the infrastructure is in place and similar to the official release.
+Otherwise, any other Intel GPUs are unfortunately not supported until Intel enables their support in oneAPI. If you are in that position and want to run Stable Diffusion with an older Intel GPU, you will need to use OpenVINO or even your CPU, which ComfyUI and this repository cannot do for you at this time. Please take a look at [this project](https://github.com/rupeshs/fastsdcpu?tab=readme-ov-file#openvino) for a possible way to do that.
+
 ## Build and run the image
 
 Instructions will assume Docker, but podman is command-compatible, so it should be easy to substitute podman for docker in these commands. Run the following command in a terminal to check out the repository and build the image.
@@ -67,14 +73,14 @@ Below is an explanation on what the above commands mean so one will know how to
 
 * docker run creates and runs a new container from an image. No modification needed here.
 * On Linux, `--device /dev/dri` passes your GPU from the host into the container, which is required for the container to access your GPU to run ComfyUI. On Windows, `--device /dev/dxg` and `-v /usr/lib/wsl:/usr/lib/wsl` are the equivalent options to do the same thing through WSL2.
-* `-e ComfyArgs="<ComfyUI command line arguments>"` specifies the ComfyUI arguments that you can pass to ComfyUI to use. You can take a look at the options you can pass [here](https://github.com/comfyanonymous/ComfyUI/blob/21a563d385ff520e1f7fdaada722212b35fb8d95/comfy/cli_args.py#L36). Things like Pytorch Cross Attention and BF16 are already turned on by default. Options that may help speed but impact accuracy and stability as a result include `--fp8_e4m3fn-text-enc`, `--fp8_e4m3fn-unet` and `--gpu-only`. Be aware that with the last option, offloading everything to VRAM may not be that great given that Intel Arc DG2 series cards and similar have a limitation of any one allocation being maximum 4GB in size due to hardware limitations as discussed in [here](https://github.com/oneapi-src/oneDNN/issues/1638) and one may need to use various VRAM reduction methods to actually work around this for higher resolution image generation.
+* `-e ComfyArgs="<ComfyUI command line arguments>"` specifies the arguments to pass to ComfyUI. You can take a look at the available options [here](https://github.com/comfyanonymous/ComfyUI/blob/21a563d385ff520e1f7fdaada722212b35fb8d95/comfy/cli_args.py#L36). Things like PyTorch Cross Attention and BF16 are already turned on by default. Options that may help speed but impact accuracy and stability include `--fp8_e4m3fn-text-enc`, `--fp8_e4m3fn-unet` and `--gpu-only`. Be aware that with the last option, offloading everything to VRAM may not help much: Intel Arc DG2 series cards and similar are limited to a maximum of 4GB per single allocation due to hardware limitations, as discussed [here](https://github.com/oneapi-src/oneDNN/issues/1638), so one may need various VRAM reduction methods to work around this for higher resolution image generation.
 * `-it` launches the container with an interactive command line. This is highly recommended, but not mandatory, since it makes it easy to monitor ComfyUI's output for any status changes or errors.
 * `--name comfy-server` assigns a meaningful name (e.g. comfy-server) to the newly created container. This option is useful, but not mandatory, for referencing your container later.
 * `--network=host` gives the container access to your host computer's network, which is needed to access ComfyUI without specifying the `--listen` argument on Linux hosts only, not Windows.
 * `-p 8188:8188` specifies the host network port to expose into the container. This needs to be used alongside the `--listen` argument on Windows. By default, ComfyUI uses port 8188, so this container port will be forwarded to http://localhost:<host_port> on your host system. This can be changed but is not recommended for most users.
 * On Linux, `--security-opt=label=disable` disables SELinux blocking access to the Docker socket, in case it is configured by the Linux distribution used. It can be left out if you know your distribution doesn't use SELinux.
 * `-v <Directory to mount ComfyUI>:/ComfyUI:Z` bind-mounts a host directory to the /ComfyUI directory inside the container. When you launch the container for the first time, specify an empty or non-existent directory on the host, replacing `<Directory to mount ComfyUI>`, so that the container can pull the ComfyUI source code into it. The `:Z` option at the end indicates that the bind-mount content is private and unshared between containers at any one time. This limits flexibility in the image's usage but is necessary to avoid issues with your GPU and ComfyUI's image output. If you want to launch another container (e.g. overriding the docker or podman entrypoint) that shares the initialized ComfyUI folder, specify the same directory location, but again, the two cannot run at the same time.
-* `-v <volume_name>:/deps` specifies a volume managed by Docker or podman (e.g. a volume named as I don't deps), to be mounted as /deps directory inside the container. /deps is configured as the Python virtual environment root directory (see Dockerfile: ENV venv_dir), to store all dynamic Python dependencies (e.g. Python dependency packages needed by ComfyUI or Intel's oneAPI runtime) that are referenced by ComfyUI when it starts. You can mount the deps volume to multiple containers so that those dynamic dependencies would be downloaded and installed only once. This is useful for users who want to run containers with different ComfyUI arguments (e.g. --gpu-only), and for those who actually build local images for experimenting.
+* `-v <volume_name>:/deps` specifies a volume managed by Docker or podman (e.g. a volume named deps) to be mounted as the /deps directory inside the container. /deps is configured as the Python virtual environment root directory (see Dockerfile: ENV venv_dir) and stores all dynamic Python dependencies (e.g. Python packages needed by ComfyUI or Intel's oneAPI runtime) that ComfyUI references when it starts. You can mount the deps volume to multiple containers so that those dynamic dependencies are downloaded and installed only once. This is useful for users who want to run containers with different ComfyUI arguments (e.g. --gpu-only), and for those who build local images for experimenting.
 * The last argument, `ipex-arc-comfy:latest`, specifies the image to use for creating the container, in the format <image_name>:\<tag>.
 
 Afterwards, one should be able to see that everything runs; a consolidated example command is sketched below. To stop the container, run `docker stop comfy-server`; to resume, run `docker start -ai comfy-server`.
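For reference, here is a minimal sketch assembling the options explained above into one command on a Linux host. The ComfyUI directory, the volume name deps, and the `--gpu-only` argument are illustrative placeholders, not required values:

```bash
# Illustrative only: adapt the paths, names, and ComfyArgs value to your setup.
docker run -it \
  --device /dev/dri \
  --network=host \
  --security-opt=label=disable \
  --name comfy-server \
  -e ComfyArgs="--gpu-only" \
  -v ~/ComfyUI:/ComfyUI:Z \
  -v deps:/deps \
  ipex-arc-comfy:latest
```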

startup.sh

Lines changed: 3 additions & 1 deletion
@@ -28,7 +28,9 @@ fi
 if [ "$FirstLaunch" = "true" ]
 then
 echo "Installing ComfyUI Python dependencies."
-python -m pip install torch==2.3.1+cxx11.abi torchvision==0.18.1+cxx11.abi torchaudio==2.3.1+cxx11.abi intel-extension-for-pytorch==2.3.110+xpu oneccl_bind_pt==2.3.100+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+python -m pip install torch==2.3.1+cxx11.abi torchvision==0.18.1+cxx11.abi torchaudio==2.3.1+cxx11.abi intel-extension-for-pytorch==2.3.110+xpu oneccl_bind_pt==2.3.100+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
+# Install dependency to make torch.compile work.
+pip install --pre pytorch-triton-xpu==3.1.0+91b14bf559 --index-url https://download.pytorch.org/whl/nightly/xpu
 # Comment out the above command and uncomment the following one instead if you are a user from the PRC.
 #python -m pip install torch==2.3.1+cxx11.abi torchvision==0.18.1+cxx11.abi torchaudio==2.3.1+cxx11.abi intel-extension-for-pytorch==2.3.110+xpu oneccl_bind_pt==2.3.100+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/
 pip install -r requirements.txt
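To confirm the newly added triton dependency actually enables torch.compile, a hedged sanity check along these lines could be run inside the container's virtual environment. This is a sketch, not part of the repository; it assumes that importing intel_extension_for_pytorch registers the xpu device, as IPEX builds do:

```bash
# Minimal sketch: verify torch.compile works on the xpu device.
python - <<'PY'
import torch
import intel_extension_for_pytorch  # noqa: F401 -- assumed to register the "xpu" device on IPEX builds
model = torch.compile(torch.nn.Linear(8, 8).to("xpu"))  # compiles via the triton XPU backend installed above
x = torch.randn(2, 8, device="xpu")
print(model(x).shape)  # expect torch.Size([2, 8]) if compilation and execution succeed
PY
```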

startup_nightly.sh

Lines changed: 3 additions & 1 deletion
@@ -29,7 +29,9 @@ fi
 if [ "$FirstLaunch" = "true" ]
 then
 echo "Installing ComfyUI Python dependencies."
-python -m pip install torch==2.5.0.dev20240912+xpu torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
+python -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
+# Install dependency to make torch.compile work.
+python -m pip install --pre pytorch-triton-xpu --index-url https://download.pytorch.org/whl/nightly/xpu
 pip install -r requirements.txt
 fi
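On the nightly path, upstream XPU wheels expose torch.xpu directly, so a quick availability check might look like this (a sketch, assuming the nightly wheels installed above; no IPEX import is needed there):

```bash
# Check that the nightly XPU build of PyTorch sees your GPU.
python -c "import torch; print(torch.__version__, torch.xpu.is_available())"
```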
