(Option 1) Intel Arc GPU users can install native PyTorch with torch.xpu support using pip (currently available in PyTorch nightly builds). More information can be found [here](https://pytorch.org/docs/main/notes/get_start_xpu.html).
1. To install PyTorch nightly, use the following command:
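The exact command is generated by the install selector on pytorch.org; a typical invocation for the nightly XPU wheels looks like the following (the index URL is an assumption based on PyTorch's published nightly XPU index, so verify it against the selector):

```shell
# Install PyTorch nightly with torch.xpu support for Intel GPUs.
# The index URL below is PyTorch's nightly XPU wheel index; check the
# selector at pytorch.org for the current command before running.
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
```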
For other supported Intel GPUs with IPEX, visit [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu) for more information.
Additional discussion and help can be found [here](https://github.com/comfyanonymous/ComfyUI/discussions/476).
### NVIDIA
Nvidia users should install stable pytorch using this command:
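A representative stable install command follows; the CUDA build tag in the index URL (`cu124` here) is an assumption that tracks whichever stable CUDA build PyTorch currently ships, so take the exact URL from the pytorch.org selector:

```shell
# Install stable PyTorch with CUDA support for NVIDIA GPUs.
# cu124 is an illustrative CUDA build tag; the pytorch.org install
# selector shows the current one.
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu124
```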
This is the command to install pytorch nightly instead which might have performance improvements:
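A sketch of the nightly variant, with the same caveat that the CUDA tag (`cu126` here) is an assumption and the selector supplies the current one:

```shell
# Install PyTorch nightly with CUDA support; the cu126 tag is
# illustrative -- use the tag the pytorch.org selector currently shows
# for nightly builds.
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126
```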
After this you should have everything installed and can proceed to running ComfyUI.
### Others:
#### Intel GPUs
Intel GPU support is available for all Intel GPUs supported by Intel's Extension for PyTorch (IPEX), with the support requirements listed on the [Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu) page. Choose your platform and installation method and follow the instructions. The steps are as follows:
1. Start by installing the drivers or kernel listed (or newer) on the IPEX Installation page linked above for Windows and Linux, if needed.
2. Follow the instructions to install [Intel's oneAPI Basekit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html) for your platform.
3. Install the packages for IPEX using the instructions provided in the Installation page for your platform.
4. Follow the [ComfyUI manual installation](#manual-install-windows-linux) instructions for Windows and Linux, and run ComfyUI normally as described above once everything is installed.
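The package-install step above boils down to a pip command that the IPEX Installation page generates for your platform. As a rough sketch only (the page supplies the exact pinned versions and, for GPU wheels, Intel's `--extra-index-url`, neither of which is reproduced here):

```shell
# Sketch only: the IPEX Installation page emits the exact versions and
# the --extra-index-url needed for GPU wheels; this shows the packages
# involved, not a copy-paste command.
python -m pip install torch torchvision torchaudio intel-extension-for-pytorch
```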
#### Apple Mac silicon
You can install ComfyUI on Apple silicon (M1 or M2) with any recent macOS version.
#### DirectML (AMD Cards on Windows)

```pip install torch-directml``` Then you can launch ComfyUI with: ```python main.py --directml```
#### Ascend NPUs
Ascend NPUs are supported for models compatible with the Ascend Extension for PyTorch (torch_npu). To get started, ensure your environment meets the prerequisites outlined on the [installation](https://ascend.github.io/docs/sources/ascend/quick_install.html) page. Here's a step-by-step guide tailored to your platform and installation method:
1. Begin by installing the recommended or newer kernel version for Linux as specified on the torch-npu Installation page, if necessary.
2. Proceed with the installation of the Ascend Basekit, which includes the driver, firmware, and CANN, following the instructions provided for your specific platform.
3. Next, install the necessary packages for torch-npu by following the platform-specific instructions on the [Installation](https://ascend.github.io/docs/sources/pytorch/install.html#pytorch) page.
4. Finally, follow the [ComfyUI manual installation](#manual-install-windows-linux) guide for Linux. Once all components are installed, you can run ComfyUI as described earlier.
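The torch-npu package install in step 3 typically amounts to a pip command like the sketch below; the required torch version pairing is an assumption to be checked against the torch_npu Installation page, which lists the supported torch/torch-npu combinations:

```shell
# Sketch: torch-npu must match the installed torch version; consult
# the torch_npu Installation page for the supported pairs before
# pinning versions.
pip install torch-npu
```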
# Running
```python main.py```
For 6700, 6600 and maybe other RDNA2 or older: ```HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py```
For AMD 7600 and maybe other RDNA3 cards: ```HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py```
### AMD ROCm Tips
You can enable experimental memory efficient attention on pytorch 2.5 in ComfyUI on RDNA3 and potentially other AMD GPUs using this command:
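To the best of my knowledge this is done by setting PyTorch's experimental ROCm AOTriton flag and switching ComfyUI to PyTorch cross attention; both the environment variable and the launch option below are assumptions to verify against your PyTorch and ComfyUI versions:

```shell
# Enable PyTorch's experimental ROCm memory-efficient attention
# (AOTriton) and tell ComfyUI to use PyTorch cross attention.
TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention
```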