refine(offload): refine model_unload behavior #14
Based on the `refine_offload` branch. This makes a few changes to the `model_unload` function in `comfy/model_management.py`:

1. The current implementation can hit a None / int type problem; the check and adjustment of `memory_to_free` should be moved to the beginning of the function.
2. The way `partially_unload` is triggered is not fine-grained enough. For example, when an OOM occurs `memory_to_free` is set to a very large value, so every loaded model ends up being offloaded through `partially_unload`. Taking `min(memory_to_free, the model's currently loaded size)` is more precise.
3. The current implementation returns based solely on the `partially_unload` flag, but partial unloading can actually end up evicting the whole model. Checking whether the model was fully unloaded before returning matches the original function's behavior better.
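To make the three points concrete, here is a minimal sketch of the intended control flow. It is not the actual code from the branch; helper names such as `loaded_size()`, `partially_unload()`, `offload_device`, and `detach()`, as well as the True/False return convention, are assumptions based on the description above.

```python
def model_unload(self, memory_to_free=None, unpatch_weights=True):
    loaded = self.model.loaded_size()

    # (1) Resolve memory_to_free up front so None never gets compared with an int.
    if memory_to_free is None:
        memory_to_free = loaded
    # (2) Never request more than this model actually holds, so an OOM-driven
    #     "huge" value does not push every loaded model down the partial-unload
    #     path with an oversized target.
    memory_to_free = min(memory_to_free, loaded)

    if memory_to_free < loaded:
        self.model.partially_unload(self.model.offload_device, memory_to_free)
        # (3) Partial unloading may still have evicted everything; only report a
        #     partial unload if some weights remain resident on the device.
        if self.model.loaded_size() > 0:
            return False

    # Full unload path, mirroring the original function's behavior.
    self.model.detach(unpatch_weights)
    return True
```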