[Frontend] ComfyUI video & LoRA support#1596
fhfuih wants to merge 8 commits into vllm-project:main
Conversation
Signed-off-by: Huang, Zeyu <11222265+fhfuih@users.noreply.github.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 5d3123d9c8
Pull request overview
Adds ComfyUI frontend support for vLLM-Omni’s new video generation API and per-request LoRA, including new nodes and integration tests to ensure parameters are forwarded correctly.
Changes:
- Add a Generate Video ComfyUI node and video request/response handling in the plugin client/utilities.
- Add a LoRA node (REMOTE_LORA) and plumb LoRA through image + video generation requests.
- Expand ComfyUI integration tests to cover video generation and LoRA parameter passing.
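To illustrate the LoRA plumbing described above, here is a minimal sketch of how a per-request LoRA spec from a `REMOTE_LORA` node could be forwarded into an image or video generation request body. The names `RemoteLora`, `build_generation_payload`, and the `"lora"` field are illustrative assumptions, not the actual plugin code.

```python
# Hypothetical sketch: forwarding a per-request LoRA spec into the
# generation request payload. Names and fields are assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class RemoteLora:
    """A server-side LoRA adapter referenced by name, with a blend scale."""
    name: str
    scale: float = 1.0


def build_generation_payload(prompt: str, lora: Optional[RemoteLora] = None) -> dict:
    """Assemble the JSON body for an image/video generation request."""
    payload = {"prompt": prompt}
    if lora is not None:
        # Include the LoRA spec so the server applies it for this request only.
        payload["lora"] = {"name": lora.name, "scale": lora.scale}
    return payload
```

The same payload-building path would be shared by the image and video nodes, which is why the tests assert LoRA parameters for both request types.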
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| tests/entrypoints/openai_api/test_image_server.py | Removes debug print from mocked diffusion generation test. |
| tests/comfyui/test_comfyui_integration.py | Adds video node test coverage and LoRA assertions in integration flow. |
| tests/comfyui/conftest.py | Extends ComfyUI mocks to include comfy_api.latest for video construction. |
| docs/features/comfyui.md | Documents the new Generate Video node and updates workflow guidance. |
| apps/ComfyUI-vLLM-Omni/comfyui_vllm_omni/utils/validators.py | Improves logging when model spec is missing. |
| apps/ComfyUI-vLLM-Omni/comfyui_vllm_omni/utils/types.py | Adds WanModelSpecificParams for Wan video model params typing. |
| apps/ComfyUI-vLLM-Omni/comfyui_vllm_omni/utils/format.py | Adds base64_to_video decoder to convert API video outputs into ComfyUI video inputs. |
| apps/ComfyUI-vLLM-Omni/comfyui_vllm_omni/utils/api_client.py | Adds video generation client method and LoRA support for image/video requests. |
| apps/ComfyUI-vLLM-Omni/comfyui_vllm_omni/nodes.py | Adds Generate Video node, Remote LoRA node, and Wan params node; plumbs LoRA into image generation paths. |
| apps/ComfyUI-vLLM-Omni/`__init__.py` | Exports the new nodes and display names. |
| apps/ComfyUI-vLLM-Omni/README.md | Documents the new Generate Video node and updates workflow guidance. |
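As a rough illustration of the `base64_to_video` utility listed in `format.py`, the sketch below decodes a base64-encoded video payload to a temporary file. Decoding to a file path is an assumption for illustration; the real helper may instead construct a ComfyUI video input object directly from the decoded bytes.

```python
# Illustrative sketch of a base64-to-video decoder. Writing to a temp
# file is an assumption; the actual utility may build a ComfyUI video
# input in memory instead.
import base64
import tempfile


def base64_to_video(data_b64: str, suffix: str = ".mp4") -> str:
    """Decode a base64-encoded video payload, persist it to a temporary
    file, and return the path for downstream video-loading nodes."""
    raw = base64.b64decode(data_b64)
    with tempfile.NamedTemporaryFile(delete=False, suffix=suffix) as f:
        f.write(raw)
        return f.name
```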
Purpose
In the ComfyUI frontend, support the recent video generation API (motivation) and the LoRA feature (motivation).
Test Plan
In tests/comfyui/test_comfyui_integration.py, added a test case for the Generate Video node. The test-case parametrization that feeds in sampling parameters now also includes LoRA, for both image generation and video generation.
Test Result
Passed locally.
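The parametrized LoRA coverage described above can be sketched in plain Python as follows; the real suite presumably uses `pytest.mark.parametrize`, and the node names and LoRA parameter keys here are assumptions, not the actual test code.

```python
# Plain-Python sketch of the LoRA test parametrization: the same LoRA
# parameters are exercised for both image and video generation.
CASES = [
    ("generate_image", {"lora_name": "my-style", "lora_scale": 0.8}),
    ("generate_video", {"lora_name": "my-style", "lora_scale": 0.8}),
]


def check_lora_forwarded(node: str, lora_params: dict) -> dict:
    """Simulate a node building a request and verify LoRA fields survive."""
    request_body = {"prompt": "a cat", "node": node, **lora_params}
    assert request_body["lora_name"] == lora_params["lora_name"]
    assert request_body["lora_scale"] == lora_params["lora_scale"]
    return request_body


for node, params in CASES:
    check_lora_forwarded(node, params)
```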
Release Note Update
- Support video generation and LoRA in the ComfyUI frontend