Commit c69e6c3

cp: [dev] Fix cuda graph scope check in language_model.py (NVIDIA#2158) (NVIDIA#2159)
Signed-off-by: Ananth Subramaniam <ansubramania@nvidia.com>
Parent: 1db5cd3

1 file changed (+1, −0)

megatron/core/models/common/language_module/language_module.py

Lines changed: 1 addition & 0 deletions
@@ -138,6 +138,7 @@ def compute_language_model_loss(self, labels: Tensor, logits: Tensor) -> Tensor:
         # Use is_cg_capturable=True for full iteration CUDA graphs to avoid torch.equal checks
         is_cg_capturable = (
             hasattr(self.config, 'cuda_graph_scope')
+            and self.config.cuda_graph_scope
             and 'full_iteration' in self.config.cuda_graph_scope
         )
         if is_cg_capturable and not is_te_min_version("2.7.0"):
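
The added clause guards the membership test against a falsy `cuda_graph_scope`. Below is a minimal standalone sketch (not part of the commit) illustrating the failure mode the fix appears to address, assuming `cuda_graph_scope` can be `None` when CUDA graphs are disabled; the `_Cfg` class and the `is_cg_capturable` helper are hypothetical stand-ins for the config object and the inline expression in the diff.

    # Hypothetical reproduction of the check from the diff.
    def is_cg_capturable(config):
        # Without the middle `and config.cuda_graph_scope` clause,
        # 'full_iteration' in None raises:
        #   TypeError: argument of type 'NoneType' is not iterable
        return (
            hasattr(config, 'cuda_graph_scope')
            and config.cuda_graph_scope
            and 'full_iteration' in config.cuda_graph_scope
        )

    class _Cfg:
        cuda_graph_scope = None  # hypothetical config with CUDA graphs disabled

    assert not is_cg_capturable(_Cfg())                 # short-circuits safely on None
    _Cfg.cuda_graph_scope = ['full_iteration']
    assert is_cg_capturable(_Cfg())                     # still True when the scope is set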
