
Conversation

@comfyanonymous
Member

No description provided.

@comfyanonymous comfyanonymous merged commit f2b0023 into master Jan 5, 2026
13 checks passed
@comfyanonymous comfyanonymous deleted the ltxv2 branch January 5, 2026 06:59
@GlamoramaAttack

The MagCache custom node fails to load because...

File "D:\ComfyUI\custom_nodes\ComfyUI-MagCache\__init__.py", line 1, in <module>
    from .nodes import NODE_CLASS_MAPPINGS as NODES_CLASS, NODE_DISPLAY_NAME_MAPPINGS as NODES_DISPLAY
File "D:\ComfyUI\custom_nodes\ComfyUI-MagCache\nodes.py", line 13, in <module>
    from comfy.ldm.lightricks.model import precompute_freqs_cis

ImportError: cannot import name 'precompute_freqs_cis' from 'comfy.ldm.lightricks.model' (D:\ComfyUI\comfy\ldm\lightricks\model.py)

Line 13 in nodes.py of ComfyUI-MagCache is...
from comfy.ldm.lightricks.model import precompute_freqs_cis

... I think that whatever change was made to comfy\ldm\lightricks\model.py should not break other custom nodes that previously depended on that model.py file in ComfyUI\comfy\ldm\lightricks, at least not without a warning to users of the MagCache node or any other nodes that may be affected.
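Until affected nodes are updated, a custom node can guard against a removed upstream symbol with a defensive import. This is a hedged sketch, not MagCache's actual code; the fallback stub and its message are hypothetical:

```python
# Defensive import sketch (hypothetical, not MagCache's actual code):
# keep the node loadable whether or not the old symbol still exists upstream.
try:
    from comfy.ldm.lightricks.model import precompute_freqs_cis
except ImportError:  # symbol (or module) gone after the LTXV 2 refactor
    def precompute_freqs_cis(*args, **kwargs):
        raise NotImplementedError(
            "precompute_freqs_cis was removed upstream; "
            "update MagCache or supply a bridge implementation."
        )
```

With this pattern the node registry still loads, and the error only surfaces if the missing function is actually called.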

@mahmoudimus

mahmoudimus commented Jan 6, 2026

I agree @GlamoramaAttack. Here's a bridge in the meantime:

import torch

from comfy.ldm.lightricks.model import (
    generate_freq_grid_np,
    generate_freq_grid_pytorch,
    generate_freqs,
    interleaved_freqs_cis,
)

def precompute_freqs_cis_bridge(
    indices_grid,
    dim,
    out_dtype,
    theta=10000.0,
    max_pos=(20, 2048, 2048),
    *,
    use_middle_indices_grid=False,
    num_attention_heads=None,
):
    """
    Bridge: old `precompute_freqs_cis(...)` behavior, implemented via the new API.

    Returns old-style `pe`:
      [B, 1, N, dim//2, 2, 2]
    """
    device = indices_grid.device
    max_pos_count = len(max_pos)
    n_elem = 2 * max_pos_count
    pad_size = dim % n_elem

    # 1) Generate base freqs grid "indices"
    # CPU can use cached numpy path; GPU uses pytorch path.
    if device.type == "cpu":
        indices = generate_freq_grid_np(theta, max_pos_count, dim)
        indices = indices.to(device=device)
    else:
        indices = generate_freq_grid_pytorch(theta, max_pos_count, dim, device=device)

    # 2) Raw freqs per token (matches the old freqs construction)
    freqs = generate_freqs(
        indices=indices,
        indices_grid=indices_grid,
        max_pos=max_pos,
        use_middle_indices_grid=use_middle_indices_grid,
    )

    # 3) Interleave + pad exactly like the old function did
    cos_i, sin_i = interleaved_freqs_cis(freqs, pad_size=pad_size)

    # Old code collapsed duplicated pairs -> [B, N, dim//2]
    cos_vals = cos_i.reshape(*cos_i.shape[:2], -1, 2)[..., 0].to(out_dtype)
    sin_vals = sin_i.reshape(*sin_i.shape[:2], -1, 2)[..., 0].to(out_dtype)

    # 4) Build [[cos, -sin],[sin, cos]] and add head dim (1) like before
    pe = torch.stack(
        [
            torch.stack([cos_vals, -sin_vals], dim=-1),
            torch.stack([sin_vals, cos_vals], dim=-1),
        ],
        dim=-2,
    ).unsqueeze(
        1
    )  # [B, 1, N, dim//2, 2, 2]

    return pe
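For readers following the tensor bookkeeping in step 4, here is a quick shape check using a NumPy stand-in with toy sizes (not the real frequencies; `B`, `N`, and `half_dim` are arbitrary):

```python
# Shape sketch (NumPy stand-in for the torch.stack calls above): verify that
# stacking [[cos, -sin], [sin, cos]] and adding a head dim yields
# [B, 1, N, dim//2, 2, 2], matching the old-style `pe` layout.
import numpy as np

B, N, half_dim = 2, 7, 32              # toy sizes, chosen for illustration
cos_vals = np.ones((B, N, half_dim))
sin_vals = np.zeros((B, N, half_dim))

row0 = np.stack([cos_vals, -sin_vals], axis=-1)   # [B, N, dim//2, 2]
row1 = np.stack([sin_vals,  cos_vals], axis=-1)   # [B, N, dim//2, 2]
pe = np.stack([row0, row1], axis=-2)              # [B, N, dim//2, 2, 2]
pe = pe[:, None]                                  # [B, 1, N, dim//2, 2, 2]
print(pe.shape)  # (2, 1, 7, 32, 2, 2)
```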

@GlamoramaAttack

Here's a bridge in the meantime:

return pe

And after "return pe"? I don't have a clue how to edit or write a Python script. My guess is that it continues here:

import comfy.ldm.common_dit
import comfy.model_management as mm
import numpy as np

from torch import Tensor
from einops import repeat
from typing import Optional
from unittest.mock import patch

? But line 13 in the original nodes.py:

from comfy.ldm.lightricks.model import precompute_freqs_cis

also needs to be deleted? If it isn't, there is an error; but after erasing the line, another error appears:

File "D:\ComfyUI\custom_nodes\ComfyUI-MagCache\__init__.py", line 2, in <module>
    from .nodes_calibration import NODE_CLASS_MAPPINGS as Cal_NODES_CLASS, NODE_DISPLAY_NAME_MAPPINGS as Cal_NODES_DISPLAY
File "D:\ComfyUI\custom_nodes\ComfyUI-MagCache\nodes_calibration.py", line 13, in <module>
    from comfy.ldm.lightricks.model import precompute_freqs_cis
ImportError: cannot import name 'precompute_freqs_cis' from 'comfy.ldm.lightricks.model' (D:\ComfyUI\comfy\ldm\lightricks\model.py)

Cannot import D:\ComfyUI\custom_nodes\ComfyUI-MagCache module for custom nodes: cannot import name 'precompute_freqs_cis' from 'comfy.ldm.lightricks.model' (D:\ComfyUI\comfy\ldm\lightricks\model.py)

So, just another guess: the nodes_calibration.py file needs the same change as nodes.py? I tried that: no more error message about it, and the MagCache node loads.
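Since the same stale import can hide in several files of one custom node (here nodes.py and nodes_calibration.py), a small script can list them all before patching. A hedged sketch; the folder path and helper name are illustrative, not part of MagCache:

```python
# Hypothetical helper: list every .py file under a custom node folder that
# still mentions a removed symbol, so the same fix can be applied to each.
import pathlib

def find_stale_imports(root, symbol="precompute_freqs_cis"):
    hits = []
    for py in pathlib.Path(root).rglob("*.py"):
        if symbol in py.read_text(encoding="utf-8", errors="ignore"):
            hits.append(py.name)
    return sorted(hits)

# Example (path is illustrative):
# find_stale_imports(r"D:\ComfyUI\custom_nodes\ComfyUI-MagCache")
```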

@zwukong

zwukong commented Jan 6, 2026

Workflow please 😄. It looks like we won't need a custom node, per https://github.com/Lightricks/ComfyUI-LTXVideo

strint added a commit to siliconflow/ComfyUI that referenced this pull request Jan 6, 2026
* Update workflow templates to v0.7.62 (Comfy-Org#11467)

* Make denoised output on custom sampler nodes work with nested tensors. (Comfy-Org#11471)

* api-nodes: use new custom endpoint for Nano Banana (Comfy-Org#11311)

* chore: update workflow templates to v0.7.63 (Comfy-Org#11482)

* ComfyUI v0.6.0

* Bump comfyui-frontend-package to 1.35.9 (Comfy-Org#11470)

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

* chore: update workflow templates to v0.7.64 (Comfy-Org#11496)

* Add a ManualSigmas node. (Comfy-Org#11499)

Can be used to manually set the sigmas for a model.

This node accepts a list of integer and floating-point numbers separated
by any non-numeric character.

* Specify in readme that we only support pytorch 2.4 and up. (Comfy-Org#11512)

* bump comfyui_manager version to the 4.0.4 (Comfy-Org#11521)

* Fix noise with ancestral samplers when inferencing on cpu. (Comfy-Org#11528)

* feat(api-nodes): add Kling Motion Control node (Comfy-Org#11493)

* [V3] converted nodes_images.py to V3 schema (Comfy-Org#11206)

* converted nodes_images.py to V3 schema

* fix test

* fix(api-nodes-gemini): always force enhance_prompt to be True (Comfy-Org#11503)

* chore(api-nodes): switch to credits instead of $ (Comfy-Org#11489)

* Enable async offload by default for AMD. (Comfy-Org#11534)

* Comment out unused norm_final in lumina/z image model. (Comfy-Org#11545)

* mm: discard async errors from pinning failures (Comfy-Org#10738)

Pretty much every error cudaHostRegister can throw also queues the same
error on the async GPU queue. This was fixed for repinning error case,
but there is the bad mmap and just enomem cases that are harder to
detect.

Do some dummy GPU work to clean the error state.

* Add some warnings for pin and unpin errors. (Comfy-Org#11561)

* ResizeByLongerSide: support video (Comfy-Org#11555)

(cherry picked from commit 98c6840aa4e5fd5407ba9ab113d209011e474bf6)

* chore(api-nodes-bytedance): mark "seededit" as deprecated, adjust display name of Seedream (Comfy-Org#11490)

* Add handling for vace_context in context windows (Comfy-Org#11386)

Co-authored-by: ozbayb <[email protected]>

* ComfyUI version v0.7.0

* Add support for sage attention 3 in comfyui, enable via new cli arg (Comfy-Org#11026)

* Add support for sage attention 3 in comfyui, enable via new cli arg
--use-sage-attiention3

* Fix some bugs found in PR review. The N dimension at which Sage
Attention 3 takes effect is reduced to 1024 (although the improvement is
not significant at this scale).

* Remove the Sage Attention3 switch, but retain the attention function
registration.

* Fix a ruff check issue in attention.py

* V3 Improvements + DynamicCombo + Autogrow exposed in public API (Comfy-Org#11345)

* Support Combo outputs in a more sane way

* Remove test validate_inputs function on test node

* Make curr_prefix be a list of strings instead of string for easier parsing as keys get added to dynamic types

* Start to account for id prefixes from frontend, need to fix bug with nested dynamics

* Ensure inputs/outputs/hidden are lists in schema finalize function, remove no longer needed 'is not None' checks

* Add raw_link and extra_dict to all relevant Inputs

* Make nested DynamicCombos work properly with prefixed keys on latest frontend; breaks old Autogrow, but is pretty much ready for upcoming Autogrow keys

* Replace ... usage with a MISSING sentinel for clarity in nodes_logic.py

* Added CustomCombo node in backend to reflect frontend node

* Prepare Autogrow's expand_schema_for_dynamic to work with upcoming frontend changes

* Prepare for look up table for dynamic input stuff

* More progress towards dynamic input lookup function stuff

* Finished converting _expand_schema_for_dynamic to be done via lookup instead of OOP to guarantee working with process isolation, did refactoring to remove old implementation + cleaning INPUT_TYPES definition including v3 hidden definition

* Change order of functions

* Removed some unneeded functions after dynamic refactor

* Make MatchType's output default displayname "MATCHTYPE"

* Fix DynamicSlot get_all

* Removed redundant code - dynamic stuff no longer happens in OOP way

* Natively support AnyType (*) without __ne__ hacks

* Remove stray code that made it in

* Remove expand_schema_for_dynamic left over on DynamicInput class

* get_dynamic() on DynamicInput/Output was not doing anything anymore, so removed it

* Make validate_inputs validate combo input correctly

* Temporarily comment out conversion to 'new' (9 month old) COMBO format in get_input_info

* Remove references to resources feature scrapped from V3

* Expose DynamicCombo in public API

* satisfy ruff after some code got commented out

* Make missing input error prettier for dynamic types

* Created a Switch2 node as a side-by-side test, will likely go with Switch2 as the initial switch node

* Figured out Switch situation

* Pass in v3_data in IsChangedCache.get function's fingerprint_inputs, add a from_v3_data helper method to HiddenHolder

* Switch order of Switch and Soft Switch nodes in file

* Temp test node for MatchType

* Fix missing v3_data for v1 nodes in validation

* For now, remove checking duplicate id's for dynamic types

* Add Resize Image/Mask node that thanks to MatchType+DynamicCombo is 16-nodes-in-1

* Made DynamicCombo references in DCTestNode use public interface

* Add an AnyTypeTestNode

* Make lazy status for specific inputs on DynamicInputs work by having the values of the dictionary for check_lazy_status be a tuple, where the second element is the key of the input that can be returned

* Comment out test logic nodes

* Make primitive float's step make more sense

* Add (and leave commented out) some potential logic nodes

* Change default crop option to "center" on Resize Image/Mask node

* Changed copy.copy(d) to d.copy()

* Autogrow is available in stable frontend, so exposing it in public API

* Use outputs id as display_name if no display_name present, remove v3 outputs id restriction that made them have to have unique IDs from the inputs

* Enable Custom Combo node as stable frontend now supports it

* Make id properly act like display_name on outputs

* Add Batch Images/Masks/Latents node

* Comment out Batch Images/Masks/Latents node for now, as Autogrow has a bug with MatchType where top connection is disconnected upon refresh

* Removed code for a couple test nodes in nodes_logic.py

* Add Batch Images, Batch Masks, and Batch Latents nodes with Autogrow, deprecate old Batch Images + LatentBatch nodes

* fix(api-nodes-vidu): preserve percent-encoding for signed URLs (Comfy-Org#11564)

* chore: update workflow templates to v0.7.65 (Comfy-Org#11579)

* Refactor: move clip_preprocess to comfy.clip_model (Comfy-Org#11586)

* Remove duplicate import of model_management (Comfy-Org#11587)

* New Year ruff cleanup. (Comfy-Org#11595)

* Ignore all frames except the first one for MPO format. (Comfy-Org#11569)

* Give Mahiro CFG a more appropriate display name (Comfy-Org#11580)

* Tripo3D: pass face_limit parameter only when it differs from default (Comfy-Org#11601)

* Remove leftover scaled_fp8 key. (Comfy-Org#11603)

* Print memory summary on OOM to help with debugging. (Comfy-Org#11613)

* feat(api-nodes): add support for 720p resolution for Kling Omni nodes (Comfy-Org#11604)

* Fix case where upscale model wouldn't be moved to cpu. (Comfy-Org#11633)

* Support the LTXV 2 model. (Comfy-Org#11632)

* Add LTXAVTextEncoderLoader node. (Comfy-Org#11634)

* Refactor module_size function. (Comfy-Org#11637)

* Fix name. (Comfy-Org#11638)

---------

Co-authored-by: ComfyUI Wiki <[email protected]>
Co-authored-by: comfyanonymous <[email protected]>
Co-authored-by: Alexander Piskun <[email protected]>
Co-authored-by: comfyanonymous <[email protected]>
Co-authored-by: Comfy Org PR Bot <[email protected]>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Dr.Lt.Data <[email protected]>
Co-authored-by: rattus <[email protected]>
Co-authored-by: Tavi Halperin <[email protected]>
Co-authored-by: drozbay <[email protected]>
Co-authored-by: ozbayb <[email protected]>
Co-authored-by: mengqin <[email protected]>
Co-authored-by: Jedrzej Kosinski <[email protected]>
Co-authored-by: throttlekitty <[email protected]>
lrivera pushed a commit to Research-Warrant/ComfyUI that referenced this pull request Jan 8, 2026