| Ctrl + Enter | Queue up current graph for generation |
| Ctrl + Shift + Enter | Queue up current graph as first for generation |
| Ctrl + Z/Ctrl + Y | Undo/Redo |
| Ctrl + S | Save workflow |
| Ctrl + O | Load workflow |
| Ctrl + A | Select all nodes |
| Alt + C | Collapse/uncollapse selected nodes |
| Ctrl + M | Mute/unmute selected nodes |
| Ctrl + B | Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through) |
| Delete/Backspace | Delete selected nodes |
Ctrl can also be replaced with Cmd for macOS users.
There is a portable standalone build for Windows that should work for running on Nvidia GPUs or for running on your CPU only on the [releases page](https://github.com/comfyanonymous/ComfyUI/releases).
### [Direct link to download](https://github.com/comfyanonymous/ComfyUI/releases/download/latest/ComfyUI_windows_portable_nvidia_cu121_or_cpu.7z)
Simply download, extract with [7-Zip](https://7-zip.org) and run. Make sure you put your Stable Diffusion checkpoints/models (the huge ckpt/safetensors files) in: ComfyUI\models\checkpoints
Put your SD checkpoints (the huge ckpt/safetensors files) in: models/checkpoints
Put your VAE in: models/vae
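The model directories above can be created up front; a minimal sketch of the expected layout (the `example*` filenames are placeholders, not real model downloads):

```shell
# Create the model folders ComfyUI looks in and drop your files there
mkdir -p models/checkpoints models/vae
touch models/checkpoints/example.safetensors   # your SD checkpoint (ckpt/safetensors) goes here
touch models/vae/example_vae.safetensors       # your VAE goes here
ls models/checkpoints models/vae
```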
Note: pytorch does not support python 3.12 yet, so make sure your python version is 3.11 or earlier.
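The version constraint above can be checked from the command line before installing; a quick sketch (assumes `python3` is on your PATH):

```shell
# Print the interpreter version; per the note above, 3.11 or earlier is required
python3 -c 'import sys; print("python %d.%d" % sys.version_info[:2])'
python3 -c 'import sys; sys.exit(0 if sys.version_info < (3, 12) else 1)' \
  && echo "ok for pytorch" || echo "too new: use python 3.11 or earlier"
```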
### AMD GPUs (Linux only)
AMD users can install rocm and pytorch with pip if you don't have them already installed; this is the command to install the stable version:
To use textual inversion concepts/embeddings in a text prompt, put them in the
Make sure you use the regular loaders/Load Checkpoint node to load checkpoints. It will auto pick the right settings depending on your GPU.
You can set this command line option to disable the upcasting to fp32 in some cross attention operations, which will increase your speed. Note that this will very likely give you black images on SD2.x models. If you use xformers or pytorch attention, this option does not do anything.