> **Note:** This repository was archived by the owner on Oct 19, 2025. It is now read-only.
The Docker/OCI image includes

- Intel Graphics driver
- Basic Python virtual environment
Intel Extension for PyTorch (IPEX) and other Python packages and dependencies are installed on first launch of the container. They are installed into a Python virtual environment on a separate volume, which allows reuse between containers and makes rebuilding images between changes much faster.
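
That separate volume can be created ahead of time with Docker's volume commands; a minimal sketch, assuming a volume named `deps` (a placeholder that must match the `-v` flag used when launching the container):

```shell
# Create a named volume to hold the Python virtual environment.
# "deps" is a placeholder name; any valid volume name works.
docker volume create deps
# Show where Docker stores the volume's data on the host.
docker volume inspect deps
```

Creating a volume that already exists is harmless, so this is safe to run repeatedly.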
## Note on updates
IPEX is updated on a roughly three-month cycle, so this repository mostly follows that cadence, with large gaps between commits.
This repository is currently undergoing heavy modification to support IPEX v2.5.10+xpu, which will take a while longer to complete.
## Prerequisites
* Intel GPU with support for Intel's oneAPI AI toolkit. According to Intel's system requirements [here](https://www.intel.com/content/www/us/en/developer/articles/system-requirements/intel-oneapi-ai-analytics-toolkit-system-requirements.html), the following GPUs are supported:
  - Intel® Data Center GPU Flex Series
  - Intel® Data Center GPU Max Series
  - Intel® Arc™ A-Series Graphics
* Docker (Desktop) or podman
* Linux or Windows, with the latest drivers installed.
Windows should work, but running this there is not recommended unless you have a specific reason to do so, e.g. needing a Linux host/userspace to run certain custom nodes. For most purposes, a native install will give better speeds and fewer headaches. Please follow the install instructions in the [ComfyUI README.md](https://github.com/comfyanonymous/ComfyUI/?tab=readme-ov-file#intel-gpus).
* If using Windows, you must have WSL2 set up via [this link](https://learn.microsoft.com/en-us/windows/wsl/install) in addition to Docker to be able to pass through your GPU.
Intel® Xe GPUs (iGPU and dGPU) in Tiger Lake (11th generation) and newer Intel processors, up to the Intel® Arc™ A-Series, as well as newer GPUs up to the Intel® Arc™ B-Series, are also capable of running oneAPI, but there are no official releases of IPEX for those GPUs. One will need to compile the software from source in that case. Feel free to file issues for those GPUs, as the infrastructure is in place and similar to the official release.
Otherwise, any other Intel GPUs are unfortunately not supported and will need support enabled by Intel in oneAPI. If you are in such a position and want to run Stable Diffusion with an older Intel GPU, you will need to use OpenVINO or even your CPU, which ComfyUI and this repository cannot do for you at this time. Please take a look at [this project](https://github.com/rupeshs/fastsdcpu?tab=readme-ov-file#openvino) for a possible way to do that.
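
As a quick sanity check on Linux, you can confirm that the kernel exposes DRM device nodes for your GPU, assuming the standard `/dev/dri` path that is passed through to the container later in this README:

```shell
# List DRM device nodes; a loaded GPU driver typically exposes
# a cardN node and a renderD12N render node here.
ls -l /dev/dri
```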
## Build and run the image
Instructions assume Docker, but podman is command-compatible, so substituting `podman` for `docker` in these commands should also work. Run the following commands in a terminal to check out the repository and build the image.
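
A minimal sketch of the checkout and build, assuming a hypothetical repository URL (the image tag matches the one used in the run command explained below):

```shell
# <owner>/<repository> is a placeholder; substitute the actual repository.
git clone https://github.com/<owner>/<repository>.git
cd <repository>
# Build the image with the tag referenced by the run command.
docker build -t ipex-arc-comfy:latest .
```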
Below is an explanation of what the above commands mean, so one will know how to adjust them if needed.

* `docker run` creates and runs a new container from an image. No modification is needed here.
* On Linux, `--device /dev/dri` passes your GPU from the host into the container, which is required for the container to access your GPU to run ComfyUI. On Windows, `--device /dev/dxg` and `-v /usr/lib/wsl:/usr/lib/wsl` are the equivalent options to do the same thing through WSL2.
* `-e ComfyArgs="<ComfyUI command line arguments>"` specifies the arguments to pass to ComfyUI. You can take a look at the available options [here](https://github.com/comfyanonymous/ComfyUI/blob/21a563d385ff520e1f7fdaada722212b35fb8d95/comfy/cli_args.py#L36). Things like PyTorch Cross Attention and BF16 are already turned on by default. Options that may improve speed at the cost of accuracy and stability include `--fp8_e4m3fn-text-enc`, `--fp8_e4m3fn-unet` and `--gpu-only`. Be aware that with the last option, offloading everything to VRAM may not help much: Intel Arc DG2 series cards and similar have a hardware limitation where any one allocation can be at most 4GB in size, as discussed [here](https://github.com/oneapi-src/oneDNN/issues/1638), so one may need various VRAM reduction methods to work around this for higher resolution image generation.
* `-it` launches the container with an interactive command line. This is highly recommended, but not mandatory, since you may need to monitor ComfyUI's output for status changes or errors, which this option makes easy.
* `--name comfy-server` assigns a meaningful name (e.g. comfy-server) to the newly created container. This option is useful, but not mandatory, for referencing your container later.
* `--network=host` gives the container access to your host computer's network, which is needed to access ComfyUI without specifying the `--listen` argument. This applies to Linux hosts only, not Windows.
* `-p 8188:8188` specifies the host network port to publish into the container. This needs to be used alongside the `--listen` argument on Windows. By default, ComfyUI uses port 8188 inside the container, and this port will be forwarded to http://localhost:<host_port> on your host system. The port can be changed, but this is not recommended for most users.
* On Linux, `--security-opt=label=disable` disables SELinux blocking access to the Docker socket, in case SELinux is configured by the Linux distribution used. It can be left out if you know your distribution doesn't use SELinux.
* `-v <Directory to mount ComfyUI>:/ComfyUI:Z` specifies a directory on the host to be bind-mounted to the /ComfyUI directory inside the container. When you launch the container for the first time, you should specify an empty or non-existent directory on your host computer, replacing `<Directory to mount ComfyUI>`, so that the container can pull the ComfyUI source code into it. The `:Z` option at the end indicates that the bind mount content is private and unshared between containers at any one time. This limits flexibility in the image's usage, but is necessary to avoid issues with your GPU and ComfyUI's image output. If you want to launch another container (e.g. overriding the Docker or podman entrypoint) that shares the initialized ComfyUI folder, you should specify the same directory location, but again, the two containers cannot run at the same time.
* `-v <volume_name>:/deps` specifies a volume managed by Docker or podman (e.g. a volume named deps) to be mounted as the /deps directory inside the container. /deps is configured as the Python virtual environment root directory (see Dockerfile: ENV venv_dir), storing all dynamic Python dependencies (e.g. Python packages needed by ComfyUI or Intel's oneAPI runtime) that ComfyUI references when it starts. You can mount the deps volume to multiple containers so those dependencies are downloaded and installed only once. This is useful for users who want to run containers with different ComfyUI arguments (e.g. --gpu-only), and for those who build local images for experimenting.
* The last argument, `ipex-arc-comfy:latest`, specifies the image to use for creating the container, in the format `<image_name>:<tag>`.
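
Putting the options above together, a minimal sketch of a full invocation on a Linux host; the directory, volume name, and ComfyUI arguments are placeholders to adjust for your setup:

```shell
# Sketch only; adjust paths, names, and ComfyUI arguments to your setup.
docker run -it \
    --device /dev/dri \
    -e ComfyArgs="<ComfyUI command line arguments>" \
    --name comfy-server \
    --network=host \
    --security-opt=label=disable \
    -v ~/comfyui:/ComfyUI:Z \
    -v deps:/deps \
    ipex-arc-comfy:latest
```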
Afterwards, one should be able to see that everything runs. To stop the container, run `docker stop comfy-server`. To resume it, run `docker start -ai comfy-server`.