Commit 238fa99

fix varlen op saving for simple fsdp (#2396)

1 parent bc9f3ff commit 238fa99

2 files changed: +2 −2 lines

tests/unit_tests/test_activation_checkpoint.py (1 addition, 1 deletion)

@@ -28,7 +28,7 @@
     # used to compute the scaling factor for quantization.
     torch.ops.aten.max.default,
     torch._higher_order_ops.flex_attention,
-    torch.ops.torch_attn._varlen_attn,
+    torch.ops.torch_attn._varlen_attn.default,
 }

torchtitan/experiments/simple_fsdp/llama3/parallelize.py (1 addition, 1 deletion)

@@ -33,7 +33,7 @@
     # used to compute the scaling factor for quantization.
     torch.ops.aten.max.default,
     torch._higher_order_ops.flex_attention,
-    torch.ops.torch_attn._varlen_attn.default,
+    torch.ops.torch_attn._varlen_attn.default,
     torch._higher_order_ops.inductor_compiled_code,
 }
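
The one-line change in both files appends `.default` so the save list holds a concrete `OpOverload` rather than an `OpOverloadPacket` (the bundle of all overloads for an op). A minimal sketch of that distinction, assuming only a standard PyTorch install; since `torch.ops.torch_attn._varlen_attn` is registered inside torchtitan, the built-in `aten.max` op is used here for illustration:

```python
import torch

# torch.ops.aten.max is an OpOverloadPacket: a namespace object grouping
# every registered overload of the op (max.default, max.dim, ...).
packet = torch.ops.aten.max

# Appending .default selects one concrete OpOverload, which is what shows
# up in traced graphs and what activation-checkpoint save policies match.
overload = torch.ops.aten.max.default

print(type(packet).__name__)    # OpOverloadPacket
print(type(overload).__name__)  # OpOverload
```

Save-list membership checks compare against the ops recorded in the captured graph, which are `OpOverload` objects, so listing the bare packet would never match.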
