Commit 057edec

fix (skip) cache flush when original device is cpu and offloaded to disk meta (#3796)
1 parent 1438331 commit 057edec

File tree: 1 file changed (+1, −1 lines)


src/accelerate/utils/modeling.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -404,7 +404,7 @@ def set_module_tensor_to_device(
             module.weight = module.weight.cuda(device_index)
 
     # clean pre and post forward hook
-    if clear_cache and device != "cpu":
+    if clear_cache and device not in ("cpu", "meta"):
         clear_device_cache()
 
     # When handling tied weights, we update tied_params_map to keep track of the tied weights that have already been allocated on the device in
```
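As a rough illustration of why the extra `"meta"` check matters: parameters offloaded to disk are left as placeholders on the `meta` device, so nothing was allocated on the accelerator and flushing its cache is pure overhead. The sketch below reproduces the guard in isolation; `maybe_clear_device_cache` is a hypothetical helper, and `torch.cuda.empty_cache()` stands in for accelerate's `clear_device_cache()`, so this is not the library's actual implementation.

```python
# Minimal sketch of the guarded cache flush (illustrative, not the real
# set_module_tensor_to_device); maybe_clear_device_cache is hypothetical.
import torch


def maybe_clear_device_cache(clear_cache: bool, device) -> None:
    # "cpu" and "meta" (disk-offloaded placeholders) allocate nothing on the
    # accelerator, so flushing its cache would only add overhead.
    if clear_cache and str(device) not in ("cpu", "meta"):
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # stand-in for accelerate's clear_device_cache()


maybe_clear_device_cache(clear_cache=True, device="meta")    # skipped: no flush
maybe_clear_device_cache(clear_cache=True, device="cuda:0")  # flushes the CUDA cache
```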
