
Commit d94fc1b

Fix issue where attention.core_attention.softmax_offset is None (#330)
Signed-off-by: Yue <[email protected]>
1 parent 682bf6d commit d94fc1b

File tree

1 file changed (+1, -1)


modelopt/torch/export/plugins/megatron_importer.py

Lines changed: 1 addition & 1 deletion
@@ -512,7 +512,7 @@ def _import_state_dict(self):
                     self.rules["k_layernorm"](attention.k_layernorm, layer_id)
                 self.rules["linear_qkv"](attention.linear_qkv, layer_id)
                 self.rules["linear_proj"](attention.linear_proj, layer_id)
-                if hasattr(attention.core_attention, "softmax_offset"):
+                if getattr(attention.core_attention, "softmax_offset", None) is not None:
                     self.rules["softmax_offset"](
                         attention.core_attention.softmax_offset, layer_id
                     )
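
Note on the change: hasattr() returns True even when the attribute exists but is set to None, so the old check would still invoke the softmax_offset rule on a None value; the getattr(..., None) is not None form skips that case. A minimal sketch of the difference, assuming a hypothetical SimpleNamespace stand-in for attention.core_attention (not code from the repo):

from types import SimpleNamespace

# Hypothetical stand-in for attention.core_attention where softmax_offset
# is defined but unset (None), e.g. a model variant without a softmax offset.
core_attention = SimpleNamespace(softmax_offset=None)

# Old check: True even though the value is None, so the import rule would
# run on a None value.
if hasattr(core_attention, "softmax_offset"):
    print("old check: would run the softmax_offset rule")  # this runs

# New check: only runs when the attribute actually holds a value.
if getattr(core_attention, "softmax_offset", None) is not None:
    print("new check: would run the softmax_offset rule")  # this is skipped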
