This repository was archived by the owner on Oct 19, 2025. It is now read-only.
Fix Intel apt packaging, and update the compute runtime and associated packages to the correct versions. Also fix `startup.sh` not having the correct install command, and fix the documentation.
# Getting the latest versions of Intel's Compute Runtime and associated packages from GitHub and installing them will update everything we installed before.
RUN mkdir neo
WORKDIR /neo
RUN wget --progress=dot:giga https://github.com/intel/intel-graphics-compiler/releases/download/igc-1.0.16695.4/intel-igc-core_1.0.16695.4_amd64.deb && \
README.md: 8 additions, 11 deletions
# Stable Diffusion ComfyUI Docker/OCI Image for Intel Arc GPUs
This Docker/OCI image is designed to run [ComfyUI](https://github.com/comfyanonymous/ComfyUI) inside a Docker/OCI container on Intel Arc GPUs. This work was based in large part on the Docker image made by Nuullll [here](https://github.com/Nuullll/ipex-sd-docker-for-arc-gpu) for a different Stable Diffusion UI, and on the official Docker images from the Intel® Extension for PyTorch* xpu-main branch [here](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main/docker).
The Docker/OCI image includes:
- Intel oneAPI DPC++ runtime libs _(Note: compiler executables are not included)_
There are reports that Intel® Xe GPUs (iGPU and dGPU) in Tiger Lake (11th generation) …
* Docker (Desktop) or podman
* Linux or Windows, with the latest drivers installed.
Windows should work, but it is not recommended to run this unless you have a specific reason to do so, e.g. needing a Linux host/userspace to run custom nodes. For most purposes, doing a native install will give better speeds and fewer headaches. Please follow the install instructions listed in the [ComfyUI README.md](https://github.com/comfyanonymous/ComfyUI/?tab=readme-ov-file#intel-gpus).
* If using Windows, you must have WSL2 set up via [this link](https://learn.microsoft.com/en-us/windows/wsl/install) in addition to Docker to be able to pass through your GPU.
## Build and run the image
These instructions assume Docker, but podman is command-compatible, so it should be easy to replace `docker` with `podman` in these commands. Run the following command in a terminal to check out the repository and build the image.
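As a sketch of that step (the repository URL is left as a placeholder, and the tag matches the image name used in the run commands below):

```sh
# Check out the repository (URL placeholder) and build the image.
git clone <repository URL> ipex-arc-comfy
cd ipex-arc-comfy
docker build -t ipex-arc-comfy:latest .
```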
#### Temporary workaround with Intel Extension for PyTorch
Go to the releases page and download the Python 3.11 package versions of intel_extension_for_pytorch, intel_extension_for_pytorch_deepspeed, torch, torchaudio, and torchvision, and put the package files in your ComfyUI directory. This replaces installing these packages from Intel's repositories until a new version of IPEX is released. The startup script will pick them up and install them the first time you start the container.
Once the image build is complete, run the following in a terminal or Docker Desktop if using Linux.
```sh
docker run -it \
--device /dev/dri \
-e ComfyArgs="<ComfyUI command line arguments>" \
ipex-arc-comfy:latest
```
For Windows, run the following in a terminal or Docker Desktop.
```sh
docker run -it `
--device /dev/dxg `
-e ComfyArgs="<ComfyUI command line arguments>" `
```

Below is an explanation of what the above commands mean so one will know how to …
* `docker run` creates and runs a new container from an image. No modification needed here.
* On Linux, `--device /dev/dri` passes in your GPU from your host computer to the container as is required to enable container access to your GPU to run ComfyUI. On Windows, `--device /dev/dxg` and `-v /usr/lib/wsl:/usr/lib/wsl` are the equivalent commands to do the same thing through WSL2.
* `-e ComfyArgs="<ComfyUI command line arguments>"` specifies the arguments to pass to ComfyUI. You can take a look at the available options [here](https://github.com/comfyanonymous/ComfyUI/blob/21a563d385ff520e1f7fdaada722212b35fb8d95/comfy/cli_args.py#L36). Things like PyTorch cross-attention and BF16 are already turned on by default. Options that may improve speed at the cost of accuracy and stability include `--fp8_e4m3fn-text-enc`, `--fp8_e4m3fn-unet`, and `--gpu-only`. Be aware that with the last option, offloading everything to VRAM may not help much: Intel Arc DG2-series cards and similar have a hardware limitation where any single allocation can be at most 4GB in size, as discussed [here](https://github.com/oneapi-src/oneDNN/issues/1638), so one may need to use various VRAM-reduction methods to work around this for higher-resolution image generation.
* `-it` launches the container with an interactive command line. This is highly recommended, but not mandatory, since you may need to monitor ComfyUI's output for status changes or errors, which this option makes easy.
* `--name comfy-server` assigns a meaningful name (e.g. comfy-server) to the newly created container. This option is useful but not mandatory to reference your container for later uses.
* `--network=host` allows the container access to your host computer's network which is needed to access ComfyUI without specifying the `--listen` argument on Linux hosts only, not Windows.
```
docker:
    base_path: /
    ...
```
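For reference, the snippet above comes from ComfyUI's `extra_model_paths.yaml`; a fuller sketch might look like the following (the subdirectory entries are assumptions based on ComfyUI's default model layout):

```yaml
docker:
    base_path: /
    checkpoints: models/checkpoints/
    loras: models/loras/
    vae: models/vae/
```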
* `ipexrun` is a launcher script for running code with Intel's Extension for PyTorch optimizations enabled, without code changes. GPU is still not supported, and running ComfyUI through the launcher with some of the arguments you can use is unsupported by Intel themselves, so it is not enabled by default. To use the XPU path that uses your GPU, add `-e UseIPEXRUN=true` to the argument string above. If CPU mode is to be used, additionally add `-e UseXPU=false`. You should also set the environment variable for passing arguments to `ipexrun` by adding `-e IPEXRUNArgs="<your arguments here>"`. A reference to all the `ipexrun` arguments can be found [here](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/performance_tuning/launch_script.html).
* You can change between `tcmalloc` (default) and `jemalloc` when using CPU `ipexrun`: add `--build-arg="ALLOCATOR=jemalloc"` when building the image in the first step to switch between the two allocators.
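As a sketch, one can compose the alternate-allocator rebuild command like this (the image tag is an assumption carried over from the run commands above):

```shell
# Pick the allocator for CPU ipexrun: "tcmalloc" (default) or "jemalloc".
ALLOCATOR="jemalloc"

# Compose the docker build command; run it from the repository checkout.
build_cmd="docker build --build-arg=ALLOCATOR=${ALLOCATOR} -t ipex-arc-comfy:latest ."
echo "${build_cmd}"
```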
Please refer to the [Dockerfile](./Dockerfile) for all available build arguments and environment variables not mentioned here and documented.