Commit 9ee2e7f

RyanJDick authored and psychedelicious committed

Do not override log_memory_usage when debug logs are enabled. The speed cost of log_memory_usage=True is large. It is common to want debug logs without enabling log_memory_usage.

1 parent 149ff75 commit 9ee2e7f

File tree

1 file changed: +1, −3 lines


invokeai/backend/model_manager/load/model_cache/model_cache_default.py

Lines changed: 1 addition & 3 deletions
```diff
@@ -19,7 +19,6 @@
 """

 import gc
-import logging
 import math
 import sys
 import time
@@ -92,8 +91,7 @@ def __init__(
         self._execution_device: torch.device = execution_device
         self._storage_device: torch.device = storage_device
         self._logger = logger or InvokeAILogger.get_logger(self.__class__.__name__)
-        self._log_memory_usage = log_memory_usage or self._logger.level == logging.DEBUG
-        # used for stats collection
+        self._log_memory_usage = log_memory_usage
         self._stats: Optional[CacheStats] = None

         self._cached_models: Dict[str, CacheRecord[AnyModel]] = {}
```
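The behavior change can be illustrated with a minimal, self-contained sketch (class and logger names here are hypothetical, not the InvokeAI API). Before this commit, setting the logger to DEBUG forcibly enabled the expensive memory-usage logging; after it, the flag is honored independently:

```python
import logging


class ModelCacheOld:
    """Old behavior: a DEBUG-level logger forces memory logging on."""

    def __init__(self, logger: logging.Logger, log_memory_usage: bool = False):
        self._logger = logger
        # Debug level overrides the caller's choice -- an expensive side effect.
        self._log_memory_usage = log_memory_usage or logger.level == logging.DEBUG


class ModelCacheNew:
    """New behavior: memory logging is controlled only by its own flag."""

    def __init__(self, logger: logging.Logger, log_memory_usage: bool = False):
        self._logger = logger
        self._log_memory_usage = log_memory_usage


debug_logger = logging.getLogger("demo")
debug_logger.setLevel(logging.DEBUG)

old = ModelCacheOld(debug_logger)  # memory logging forced on by log level
new = ModelCacheNew(debug_logger)  # memory logging stays off unless requested
```

With a DEBUG logger, `old._log_memory_usage` is True while `new._log_memory_usage` is False, which is exactly the decoupling the commit message describes: debug logs without paying the cost of `log_memory_usage=True`.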

Comments (0)