Commit 95c0dd4

[LLM] fix llama precision on custom devices (PaddlePaddle#7895)
1 parent: 1f82403

File tree

1 file changed (+1, −1)

paddlenlp/transformers/llama/modeling.py

Lines changed: 1 addition & 1 deletion
@@ -1489,7 +1489,7 @@ def forward(self, prediction_scores, masked_lm_labels):
                 _hcg = fleet.get_hybrid_communicate_group()
                 masked_lm_loss = ConcatSePMaskedLoss.apply(masked_lm_loss, axis=1, group=_hcg.get_sep_parallel_group())
             # skip ignore_index which loss == 0
-            masked_lm_loss = masked_lm_loss[masked_lm_loss > 0].astype("float32")
+            masked_lm_loss = masked_lm_loss[masked_lm_loss > 0]
             loss = paddle.mean(masked_lm_loss)
 
         return loss
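
In context: the change drops an explicit .astype("float32") on the filtered per-token loss, so paddle.mean now runs in the loss tensor's original dtype; per the commit title, this fixes a LLaMA loss-precision issue on custom devices. Below is a minimal, self-contained sketch of the affected reduction, assuming a plain Paddle environment; the fleet/sep-parallel setup from the real forward() is omitted, and reduce_masked_lm_loss plus the sample tensor are hypothetical names used for illustration only, not part of the commit.

import paddle

def reduce_masked_lm_loss(masked_lm_loss):
    # Keep only positions with non-zero loss: tokens labeled with
    # ignore_index contribute a loss of exactly 0 and are skipped.
    masked_lm_loss = masked_lm_loss[masked_lm_loss > 0]
    # As of this commit there is no .astype("float32") at this point,
    # so the mean is taken in the tensor's original dtype (the commit
    # title ties the removed cast to a precision problem on custom devices).
    return paddle.mean(masked_lm_loss)

# Illustrative usage: a per-token loss where ignore_index positions are 0.
per_token_loss = paddle.to_tensor([0.0, 1.25, 0.0, 0.5], dtype="float32")
print(reduce_masked_lm_loss(per_token_loss))  # mean of the two kept entries: 0.875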
