### AMD GPUs (Experimental: Windows and Linux, RDNA 3, 3.5 and 4 only)
These have less hardware support than the builds above, but they work on Windows. You also need to install the PyTorch version specific to your hardware.
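As an illustration of installing a hardware-specific PyTorch build (this is not the exact command for the experimental RDNA wheels, which may come from a different index), the standard Linux ROCm wheels are installed from a ROCm-specific index. The `rocm6.2` version below is an assumption; check the PyTorch install matrix for the current one:

```
# Illustrative only: standard ROCm wheels on Linux; the ROCm version in the
# URL is an assumption and should match the PyTorch install matrix.
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2
```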
### Intel GPUs

(Option 1) Intel Arc GPU users can install native PyTorch with torch.xpu support using pip. More information can be found [here](https://pytorch.org/docs/main/notes/get_start_xpu.html).
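A minimal sketch of that install, assuming the stable `xpu` wheel index documented on the linked page (check it for the current command):

```
# Sketch: native PyTorch with torch.xpu support from the xpu wheel index.
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu
```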
#### Apple Mac silicon

You can install ComfyUI on Apple Mac silicon (M1 or M2) with any recent macOS version.
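A sketch of one common setup, assuming you want the PyTorch nightly build for macOS (which tends to pick up MPS fixes first); see the PyTorch site for the current command:

```
# Sketch: PyTorch nightly for macOS, then launch ComfyUI normally.
pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
python main.py
```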
> **Note**: Remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in [ComfyUI manual installation](#manual-install-windows-linux).
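For illustration, a typical layout (the full folder list is described in the manual install section):

```
ComfyUI/models/checkpoints/   # main model checkpoints
ComfyUI/models/vae/           # VAE files
ComfyUI/models/loras/         # LoRAs
```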
#### DirectML (AMD Cards on Windows)
This is very poorly supported and is not recommended. There are unofficial PyTorch ROCm builds for Windows that will give you a much better experience than this. This README will be updated once official PyTorch ROCm builds for Windows are released.
```pip install torch-directml``` Then you can launch ComfyUI with: ```python main.py --directml```
#### Ascend NPUs
For models compatible with Ascend Extension for PyTorch (torch_npu). To get started, ensure your environment meets the prerequisites outlined on the [installation](https://ascend.github.io/docs/sources/ascend/quick_install.html) page. Here's a step-by-step guide tailored to your platform and installation method:
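As a rough sketch only, assuming the CANN toolkit and drivers from that page are already installed and that matching torch/torch_npu versions are chosen per its compatibility table:

```
# Sketch: versions are illustrative; torch and torch-npu releases must match
# per the Ascend compatibility table.
pip install torch torchvision torchaudio
pip install torch-npu
# Launch ComfyUI as usual; the NPU backend should be used when torch_npu is available.
python main.py
```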