
Commit e07a32c

Merge branch 'master' into offloader-maifee

2 parents a19f0a8 + 2abd2b5

116 files changed: +11874 / -9698 lines changed

(Large commits hide some content by default, so some file names below are missing.)
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
+..\python_embeded\python.exe -s ..\ComfyUI\main.py --windows-standalone-build --disable-api-nodes
+echo If you see this and ComfyUI did not start try updating your Nvidia Drivers to the latest.
+pause
Lines changed: 1 addition & 0 deletions
@@ -1,2 +1,3 @@
 .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
+echo If you see this and ComfyUI did not start try updating your Nvidia Drivers to the latest.
 pause
Lines changed: 1 addition & 0 deletions
@@ -1,2 +1,3 @@
 .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --fast fp16_accumulation
+echo If you see this and ComfyUI did not start try updating your Nvidia Drivers to the latest.
 pause

.github/ISSUE_TEMPLATE/bug-report.yml

Lines changed: 5 additions & 3 deletions
@@ -8,13 +8,15 @@ body:
       Before submitting a **Bug Report**, please ensure the following:
 
       - **1:** You are running the latest version of ComfyUI.
-      - **2:** You have looked at the existing bug reports and made sure this isn't already reported.
+      - **2:** You have your ComfyUI logs and relevant workflow on hand and will post them in this bug report.
       - **3:** You confirmed that the bug is not caused by a custom node. You can disable all custom nodes by passing
-      `--disable-all-custom-nodes` command line argument.
+      `--disable-all-custom-nodes` command line argument. If you have custom node try updating them to the latest version.
       - **4:** This is an actual bug in ComfyUI, not just a support question. A bug is when you can specify exact
       steps to replicate what went wrong and others will be able to repeat your steps and see the same issue happen.
 
-      If unsure, ask on the [ComfyUI Matrix Space](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) or the [Comfy Org Discord](https://discord.gg/comfyorg) first.
+      ## Very Important
+
+      Please make sure that you post ALL your ComfyUI logs in the bug report. A bug report without logs will likely be ignored.
 - type: checkboxes
   id: custom-nodes-test
   attributes:

.github/workflows/release-stable-all.yml

Lines changed: 2 additions & 2 deletions
@@ -18,9 +18,9 @@ jobs:
     uses: ./.github/workflows/stable-release.yml
     with:
       git_tag: ${{ inputs.git_tag }}
-      cache_tag: "cu129"
+      cache_tag: "cu130"
       python_minor: "13"
-      python_patch: "6"
+      python_patch: "9"
       rel_name: "nvidia"
       rel_extra_name: ""
       test_release: true

.github/workflows/windows_release_dependencies.yml

Lines changed: 2 additions & 2 deletions
@@ -17,7 +17,7 @@ on:
       description: 'cuda version'
       required: true
       type: string
-      default: "129"
+      default: "130"
 
     python_minor:
       description: 'python minor version'
@@ -29,7 +29,7 @@ on:
       description: 'python patch version'
       required: true
       type: string
-      default: "6"
+      default: "9"
 # push:
 #   branches:
 #     - master

README.md

Lines changed: 11 additions & 4 deletions
@@ -112,10 +112,11 @@ Workflow examples can be found on the [Examples page](https://comfyanonymous.git
 
 ## Release Process
 
-ComfyUI follows a weekly release cycle targeting Friday but this regularly changes because of model releases or large changes to the codebase. There are three interconnected repositories:
+ComfyUI follows a weekly release cycle targeting Monday but this regularly changes because of model releases or large changes to the codebase. There are three interconnected repositories:
 
 1. **[ComfyUI Core](https://github.com/comfyanonymous/ComfyUI)**
-   - Releases a new stable version (e.g., v0.7.0)
+   - Releases a new stable version (e.g., v0.7.0) roughly every week.
+   - Commits outside of the stable release tags may be very unstable and break many custom nodes.
    - Serves as the foundation for the desktop release
 
 2. **[ComfyUI Desktop](https://github.com/Comfy-Org/desktop)**
@@ -176,6 +177,8 @@ Simply download, extract with [7-Zip](https://7-zip.org) and run. Make sure you
 
 If you have trouble extracting it, right click the file -> properties -> unblock
 
+Update your Nvidia drivers if it doesn't start.
+
 #### Alternative Downloads:
 
 [Experimental portable for AMD GPUs](https://github.com/comfyanonymous/ComfyUI/releases/latest/download/ComfyUI_windows_portable_amd.7z)
@@ -197,7 +200,11 @@ comfy install
 
 ## Manual Install (Windows, Linux)
 
-Python 3.13 is very well supported. If you have trouble with some custom node dependencies you can try 3.12
+Python 3.14 will work if you comment out the `kornia` dependency in the requirements.txt file (breaks the canny node) but it is not recommended.
+
+Python 3.13 is very well supported. If you have trouble with some custom node dependencies on 3.13 you can try 3.12
+
+### Instructions:
 
 Git clone this repo.
 
@@ -253,7 +260,7 @@ This is the command to install the Pytorch xpu nightly which might have some per
 
 Nvidia users should install stable pytorch using this command:
 
-```pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu129```
+```pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130```
 
 This is the command to install pytorch nightly instead which might have performance improvements.
app/subgraph_manager.py

Lines changed: 112 additions & 0 deletions
@@ -0,0 +1,112 @@
+from __future__ import annotations
+
+from typing import TypedDict
+import os
+import folder_paths
+import glob
+from aiohttp import web
+import hashlib
+
+
+class Source:
+    custom_node = "custom_node"
+
+class SubgraphEntry(TypedDict):
+    source: str
+    """
+    Source of subgraph - custom_nodes vs templates.
+    """
+    path: str
+    """
+    Relative path of the subgraph file.
+    For custom nodes, will be the relative directory like <custom_node_dir>/subgraphs/<name>.json
+    """
+    name: str
+    """
+    Name of subgraph file.
+    """
+    info: CustomNodeSubgraphEntryInfo
+    """
+    Additional info about subgraph; in the case of custom_nodes, will contain nodepack name
+    """
+    data: str
+
+class CustomNodeSubgraphEntryInfo(TypedDict):
+    node_pack: str
+    """Node pack name."""
+
+class SubgraphManager:
+    def __init__(self):
+        self.cached_custom_node_subgraphs: dict[SubgraphEntry] | None = None
+
+    async def load_entry_data(self, entry: SubgraphEntry):
+        with open(entry['path'], 'r') as f:
+            entry['data'] = f.read()
+        return entry
+
+    async def sanitize_entry(self, entry: SubgraphEntry | None, remove_data=False) -> SubgraphEntry | None:
+        if entry is None:
+            return None
+        entry = entry.copy()
+        entry.pop('path', None)
+        if remove_data:
+            entry.pop('data', None)
+        return entry
+
+    async def sanitize_entries(self, entries: dict[str, SubgraphEntry], remove_data=False) -> dict[str, SubgraphEntry]:
+        entries = entries.copy()
+        for key in list(entries.keys()):
+            entries[key] = await self.sanitize_entry(entries[key], remove_data)
+        return entries
+
+    async def get_custom_node_subgraphs(self, loadedModules, force_reload=False):
+        # if not forced to reload and cached, return cache
+        if not force_reload and self.cached_custom_node_subgraphs is not None:
+            return self.cached_custom_node_subgraphs
+        # Load subgraphs from custom nodes
+        subfolder = "subgraphs"
+        subgraphs_dict: dict[SubgraphEntry] = {}
+
+        for folder in folder_paths.get_folder_paths("custom_nodes"):
+            pattern = os.path.join(folder, f"*/{subfolder}/*.json")
+            matched_files = glob.glob(pattern)
+            for file in matched_files:
+                # replace backslashes with forward slashes
+                file = file.replace('\\', '/')
+                info: CustomNodeSubgraphEntryInfo = {
+                    "node_pack": "custom_nodes." + file.split('/')[-3]
+                }
+                source = Source.custom_node
+                # hash source + path to make sure id will be as unique as possible, but
+                # reproducible across backend reloads
+                id = hashlib.sha256(f"{source}{file}".encode()).hexdigest()
+                entry: SubgraphEntry = {
+                    "source": Source.custom_node,
+                    "name": os.path.splitext(os.path.basename(file))[0],
+                    "path": file,
+                    "info": info,
+                }
+                subgraphs_dict[id] = entry
+        self.cached_custom_node_subgraphs = subgraphs_dict
+        return subgraphs_dict
+
+    async def get_custom_node_subgraph(self, id: str, loadedModules):
+        subgraphs = await self.get_custom_node_subgraphs(loadedModules)
+        entry: SubgraphEntry = subgraphs.get(id, None)
+        if entry is not None and entry.get('data', None) is None:
+            await self.load_entry_data(entry)
+        return entry
+
+    def add_routes(self, routes, loadedModules):
+        @routes.get("/global_subgraphs")
+        async def get_global_subgraphs(request):
+            subgraphs_dict = await self.get_custom_node_subgraphs(loadedModules)
+            # NOTE: we may want to include other sources of global subgraphs such as templates in the future;
+            # that's the reasoning for the current implementation
+            return web.json_response(await self.sanitize_entries(subgraphs_dict, remove_data=True))
+
+        @routes.get("/global_subgraphs/{id}")
+        async def get_global_subgraph(request):
+            id = request.match_info.get("id", None)
+            subgraph = await self.get_custom_node_subgraph(id, loadedModules)
+            return web.json_response(await self.sanitize_entry(subgraph))
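The new file derives subgraph ids by hashing the source label together with the file path, so ids are unique per file but reproducible across backend reloads. A minimal sketch of that scheme in isolation (the helper name `subgraph_id` and the example path are illustrative, not part of the commit):

```python
import hashlib

def subgraph_id(source: str, path: str) -> str:
    """Stable subgraph id: sha256 over source + path, hex-encoded.

    Mirrors the id scheme in the diff above: the id depends only on
    the source label and file path, never on load order or a counter.
    """
    return hashlib.sha256(f"{source}{path}".encode()).hexdigest()

# hypothetical custom-node subgraph path, for illustration only
a = subgraph_id("custom_node", "custom_nodes/my_pack/subgraphs/upscale.json")
b = subgraph_id("custom_node", "custom_nodes/my_pack/subgraphs/upscale.json")
print(a == b)   # True: same inputs always produce the same id
print(len(a))   # 64: sha256 hex digest length
```

Because the id is a pure function of (source, path), a client can keep using a `/global_subgraphs/{id}` URL after the backend reloads and re-scans the custom node folders.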

comfy/cli_args.py

Lines changed: 4 additions & 1 deletion
@@ -105,6 +105,7 @@ class LatentPreviewMethod(enum.Enum):
 cache_group.add_argument("--cache-classic", action="store_true", help="Use the old style (aggressive) caching.")
 cache_group.add_argument("--cache-lru", type=int, default=0, help="Use LRU caching with a maximum of N node results cached. May use more RAM/VRAM.")
 cache_group.add_argument("--cache-none", action="store_true", help="Reduced RAM/VRAM usage at the expense of executing every node for each run.")
+cache_group.add_argument("--cache-ram", nargs='?', const=4.0, type=float, default=0, help="Use RAM pressure caching with the specified headroom threshold. If available RAM drops below the threhold the cache remove large items to free RAM. Default 4GB")
 
 attn_group = parser.add_mutually_exclusive_group()
 attn_group.add_argument("--use-split-cross-attention", action="store_true", help="Use the split cross attention optimization. Ignored when xformers is used.")
@@ -156,7 +157,9 @@ class PerformanceFeature(enum.Enum):
 CublasOps = "cublas_ops"
 AutoTune = "autotune"
 
-parser.add_argument("--fast", nargs="*", type=PerformanceFeature, help="Enable some untested and potentially quality deteriorating optimizations. --fast with no arguments enables everything. You can pass a list specific optimizations if you only want to enable specific ones. Current valid optimizations: {}".format(" ".join(map(lambda c: c.value, PerformanceFeature))))
+parser.add_argument("--fast", nargs="*", type=PerformanceFeature, help="Enable some untested and potentially quality deteriorating optimizations. This is used to test new features so using it might crash your comfyui. --fast with no arguments enables everything. You can pass a list specific optimizations if you only want to enable specific ones. Current valid optimizations: {}".format(" ".join(map(lambda c: c.value, PerformanceFeature))))
+
+parser.add_argument("--disable-pinned-memory", action="store_true", help="Disable pinned memory use.")
 
 parser.add_argument("--mmap-torch-files", action="store_true", help="Use mmap when loading ckpt/pt files.")
 parser.add_argument("--disable-mmap", action="store_true", help="Don't use mmap when loading safetensors.")
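The new `--cache-ram` help text describes a pressure-based policy: when available RAM falls below the headroom threshold (default 4 GB), the cache frees large items first. A hedged sketch of such a policy (function and key names are illustrative; this is not ComfyUI's actual cache implementation):

```python
def evict_for_headroom(cache: dict, free_gb: float, headroom_gb: float = 4.0) -> list:
    """Evict largest-first until free_gb meets the headroom threshold.

    Toy model of a RAM-pressure cache: `cache` maps key -> size in GB,
    `free_gb` is the currently available RAM. Returns the evicted keys.
    """
    evicted = []
    while free_gb < headroom_gb and cache:
        key = max(cache, key=cache.get)  # biggest item frees the most RAM
        free_gb += cache.pop(key)
        evicted.append(key)
    return evicted

# with 1 GB free and a 4 GB headroom target, the 6 GB entry goes first
cache = {"vae": 0.5, "unet": 6.0, "clip": 1.0}
print(evict_for_headroom(cache, free_gb=1.0))  # ['unet']
```

Evicting the largest entries first frees the most memory per eviction, so small, frequently reused results tend to survive pressure spikes.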

comfy/controlnet.py

Lines changed: 10 additions & 7 deletions
@@ -310,11 +310,13 @@ def __init__(self, in_features: int, out_features: int, bias: bool = True,
         self.bias = None
 
     def forward(self, input):
-        weight, bias = comfy.ops.cast_bias_weight(self, input)
+        weight, bias, offload_stream = comfy.ops.cast_bias_weight(self, input, offloadable=True)
         if self.up is not None:
-            return torch.nn.functional.linear(input, weight + (torch.mm(self.up.flatten(start_dim=1), self.down.flatten(start_dim=1))).reshape(self.weight.shape).type(input.dtype), bias)
+            x = torch.nn.functional.linear(input, weight + (torch.mm(self.up.flatten(start_dim=1), self.down.flatten(start_dim=1))).reshape(self.weight.shape).type(input.dtype), bias)
         else:
-            return torch.nn.functional.linear(input, weight, bias)
+            x = torch.nn.functional.linear(input, weight, bias)
+        comfy.ops.uncast_bias_weight(self, weight, bias, offload_stream)
+        return x
 
 class Conv2d(torch.nn.Module, comfy.ops.CastWeightBiasOp):
     def __init__(
@@ -350,12 +352,13 @@ def __init__(
 
 
     def forward(self, input):
-        weight, bias = comfy.ops.cast_bias_weight(self, input)
+        weight, bias, offload_stream = comfy.ops.cast_bias_weight(self, input, offloadable=True)
         if self.up is not None:
-            return torch.nn.functional.conv2d(input, weight + (torch.mm(self.up.flatten(start_dim=1), self.down.flatten(start_dim=1))).reshape(self.weight.shape).type(input.dtype), bias, self.stride, self.padding, self.dilation, self.groups)
+            x = torch.nn.functional.conv2d(input, weight + (torch.mm(self.up.flatten(start_dim=1), self.down.flatten(start_dim=1))).reshape(self.weight.shape).type(input.dtype), bias, self.stride, self.padding, self.dilation, self.groups)
         else:
-            return torch.nn.functional.conv2d(input, weight, bias, self.stride, self.padding, self.dilation, self.groups)
-
+            x = torch.nn.functional.conv2d(input, weight, bias, self.stride, self.padding, self.dilation, self.groups)
+        comfy.ops.uncast_bias_weight(self, weight, bias, offload_stream)
+        return x
 
 class ControlLora(ControlNet):
     def __init__(self, control_weights, global_average_pooling=False, model_options={}): #TODO? model_options
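The controlnet change replaces early `return`s with a store-then-release pattern: the result is computed into `x` so the casted weights can be handed back via `uncast_bias_weight` before `forward` returns. A toy sketch of that control flow (scalar arithmetic stands in for the `torch.nn.functional` calls, and the `cast_bias_weight`/`uncast_bias_weight` bodies below are illustrative stand-ins, not `comfy.ops` internals):

```python
def cast_bias_weight(layer: dict, input, offloadable: bool = False):
    # stand-in: would copy weights to the compute device/dtype and,
    # when offloadable=True, return a stream handle for later release
    stream = object() if offloadable else None
    return layer["weight"], layer.get("bias"), stream

def uncast_bias_weight(layer: dict, weight, bias, stream):
    # stand-in: would release the temporary casted copies on the stream
    pass

def forward(layer: dict, x: float) -> float:
    # same shape as the diff: compute into a local, release, then return
    weight, bias, stream = cast_bias_weight(layer, x, offloadable=True)
    y = x * weight + (bias or 0)                      # stand-in for F.linear
    uncast_bias_weight(layer, weight, bias, stream)   # free before returning
    return y

print(forward({"weight": 2.0, "bias": 1.0}, 3.0))  # 7.0
```

The point of the restructuring is purely lifetime management: with an early `return`, the casted copies would stay referenced until after the caller resumes, whereas storing into `x` lets the weights be released on the offload stream as soon as the compute call has consumed them.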
