# ComfyUI-SuiteTea

Some good ComfyUI nodes for bad reasons.
## Installation

- ComfyUI Manager → Install from URL → `https://github.com/teepunkt-esspunkt/ComfyUI-SuiteTea.git`
- Or manually:

  ```
  git clone https://github.com/teepunkt-esspunkt/ComfyUI-SuiteTea.git
  ```

  into your `ComfyUI/custom_nodes/` folder.
## Tea: Save & Reload Image V2

A utility node to save VRAM on older GPUs. Many workflows pass images directly from one model to another → this can cause out-of-memory (OOM) errors on the first run. This node saves the image to disk and reloads it, forcing upstream tensors to unload.
- Works as both a detacher and a normal image loader.
- Has a file picker with preview (like the stock Load Image node).
- Cleaner defaults: `Teafault.png`, `output/temp`, `output/saved`.
### Inputs

- `image_in` (optional IMAGE tensor) → triggers save→reload.
- `image` (picker) → choose/upload an image with preview.
- `temp_folder` (default `output/temp`)
- `filename` (default `Teafault.png`)
- `also_save_perm` (BOOLEAN, default `false`)
- `perm_folder` (default `output/saved`)
### Output

- `reloaded_image` (BHWC float, shape 1×H×W×3)
### Usage

- To break tensor lineage: Model Output → **Tea: Save & Reload Image V2** → Next Node
- To just load a file: leave `image_in` unconnected and pick a file.
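The save→reload trick itself is simple; here is a minimal sketch in plain Python using Pillow and NumPy (illustrative only — the node's actual internals may differ):

```python
import os
import numpy as np
from PIL import Image

def save_and_reload(image, path="output/temp/Teafault.png"):
    """Write a BHWC float image in [0, 1] to disk as a PNG, then reload it.

    The reloaded array is freshly allocated on the CPU with no link to the
    tensors that produced the original, so upstream models can be unloaded.
    """
    if os.path.dirname(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
    # Drop the batch dim, scale to 8-bit, and save.
    arr = (np.clip(image[0], 0.0, 1.0) * 255).astype(np.uint8)
    Image.fromarray(arr).save(path)
    # Reload: a brand-new array, detached from anything upstream.
    reloaded = np.asarray(Image.open(path).convert("RGB")).astype(np.float32) / 255.0
    return reloaded[None, ...]  # back to BHWC with batch dim 1
```

Note the round trip quantizes to 8-bit, so values can shift by up to 1/255 — fine for image handoffs between models.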
## Tea: CheckpointLoader

A string-based checkpoint loader built for external Python batch scripts: it lets a script run the same workflow with different models, looping across your whole collection without clicking through the dropdown.
### Inputs

- `ckpt_path` (STRING, full path to a `.safetensors` or `.ckpt` file)
### Usage (model loop workflow)

- Create a `suiteTea_local.json` in `suitetea/scripts/` with your private model folder path:

  ```json
  { "MODELS_DIR": "C:/your/full/path/to/checkpoints" }
  ```

- Run `discover_models_flat.py` → generates `models_list.txt`.
- Build a workflow `modelloop.json` using **Tea: CheckpointLoader** instead of the dropdown loader.
- Run `run_all_models.py` → will iterate through all models in `models_list.txt` using the same workflow.
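The core of such a loop is small: rewrite the `ckpt_path` input of each **Tea: CheckpointLoader** node in the API-format workflow JSON, then queue it via ComfyUI's `/prompt` HTTP endpoint. A minimal sketch (the `class_type` string and function names are assumptions, not the actual script's API):

```python
import json
import urllib.request

def set_checkpoint(workflow: dict, ckpt_path: str) -> dict:
    """Point every Tea: CheckpointLoader node in an API-format
    workflow dict at the given checkpoint path."""
    for node in workflow.values():
        if node.get("class_type") == "Tea: CheckpointLoader":  # assumed class_type
            node["inputs"]["ckpt_path"] = ckpt_path
    return workflow

def run_all(workflow_file="modelloop.json", models_file="models_list.txt",
            host="http://127.0.0.1:8188"):
    """Queue one run of the workflow per model listed in models_file."""
    with open(workflow_file) as f:
        workflow = json.load(f)
    with open(models_file) as f:
        for line in f:
            ckpt = line.strip()
            if not ckpt:
                continue
            payload = json.dumps({"prompt": set_checkpoint(workflow, ckpt)}).encode()
            req = urllib.request.Request(f"{host}/prompt", data=payload,
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)  # ComfyUI queues the run
```

This is why the node takes a plain STRING: a script can splice any path into the JSON, which the stock dropdown loader doesn't allow.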
## Tea: Load Frame From Vid As Img

Extract a single frame from any video and output it as an IMAGE tensor. Useful for extending clips from the last frame, grabbing a reference still, or snapshotting a timestamp, all while keeping VRAM usage low. Optionally saves the frame as a PNG to reuse in later chains.
### Inputs

- `video` (picker) → choose a video
- `mode` (`first` | `last` | `index` | `time`)
- `video_path` (STRING, optional override; if set, this path is used instead of the picker)
- `frame_index` (INT, used when `mode=index`)
- `time_sec` (FLOAT, used when `mode=time`)
- `max_side` (INT, 0 = no resize; otherwise downscales keeping aspect, e.g. 1024)
- `save_png` (STRING; if empty, auto-names to `output/tea_frames/<video>_<tag>.png`)
- `overwrite` (BOOLEAN)
### Outputs

- `image` (BHWC float, shape 1×H×W×3)
- `saved_path` (STRING; empty if `save_png=false`)
- `picked_index` (INT; returns the frame index in `index` mode, otherwise `-1`)
### Usage

- Extend a clip from its last frame: **Tea: Load Frame From Vid As Img** (`mode=last`) → (optional) VAE Encode → your i2v/video pipeline
- Grab a specific moment: `mode=time`, set `time_sec=2.5` to pick the frame at 2.5 s, or `mode=index`, set `frame_index=123`
- Persist the still: toggle `save_png=true` (and/or set `save_path`) to store a reusable PNG and break tensor lineage.
### Notes
- Prefers ffmpeg (fast, robust). If ffmpeg isn’t on PATH, it falls back to OpenCV if installed.
- Works with common formats (mp4, mov, webm, mkv, …) as supported by ffmpeg.
- The node returns a CPU tensor; VRAM is only touched when you feed it into a VAE/model.
- If both `video` and `video_path` are provided, `video_path` takes precedence.
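For the ffmpeg path, the four modes map onto standard ffmpeg invocations. A hypothetical sketch of the commands such a node might build (the helper name is illustrative; the node's real implementation may differ):

```python
def ffmpeg_frame_cmd(video, out_png, mode="first", frame_index=0, time_sec=0.0):
    """Build an ffmpeg argv list that extracts one frame as a PNG."""
    base = ["ffmpeg", "-y"]
    if mode == "first":
        return base + ["-i", video, "-frames:v", "1", out_png]
    if mode == "time":
        # -ss before -i seeks fast to the nearest keyframe, then decodes one frame.
        return base + ["-ss", str(time_sec), "-i", video, "-frames:v", "1", out_png]
    if mode == "index":
        # The select filter keeps only the Nth decoded frame (0-based);
        # the comma inside eq() must be escaped in a filtergraph.
        return base + ["-i", video, "-vf", f"select=eq(n\\,{frame_index})",
                       "-frames:v", "1", "-fps_mode", "vfr", out_png]
    if mode == "last":
        # Seek relative to end-of-file and let each decoded frame
        # overwrite the output image; the final frame wins.
        return base + ["-sseof", "-1", "-i", video, "-update", "1", out_png]
    raise ValueError(f"unknown mode: {mode}")
```

Run the list with `subprocess.run(cmd, check=True)`; if that fails because ffmpeg isn't on PATH, an OpenCV fallback would seek with `cv2.VideoCapture` instead.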
## Scripts

Helper utilities for batch workflows, located in `suitetea/scripts/`:
- `discover_models_flat.py`: Scans your private models folder (from `suiteTea_local.json`) and writes `models_list.txt`. Run this whenever you add/remove checkpoints.
- `run_all_models.py`: Reads `models_list.txt` and your exported workflow (`modelloop.json`). Runs the workflow once for each model, saving results into a timestamped folder with the model name as filename prefix.
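The discovery step amounts to listing checkpoint files and writing their paths one per line. A minimal sketch, assuming (from the script's name) a flat, non-recursive scan:

```python
from pathlib import Path

CHECKPOINT_EXTS = {".safetensors", ".ckpt"}

def discover_models(models_dir, out_file="models_list.txt"):
    """Write the full path of every checkpoint found in models_dir
    (flat, non-recursive scan) to out_file, one path per line."""
    paths = sorted(p for p in Path(models_dir).iterdir()
                   if p.suffix.lower() in CHECKPOINT_EXTS)
    Path(out_file).write_text("\n".join(str(p) for p in paths) + "\n")
    return paths
```

Sorting keeps the run order stable between invocations, so a batch run interrupted midway is easy to resume by trimming `models_list.txt`.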
(More nodes will be added here as SuiteTea grows.)
## License

MIT
