Commit 5b2ccad
Fix ZeRO stage to choose BF16 optimizer in test (deepspeedai#7803)
Use ZeRO stage 1 so the BF16 optimizer is selected. (We should have switched to ZeRO stage 1 in deepspeedai#7788, but I missed the change. @sfc-gh-truwase)

- deepspeedai#7790 removed the fallback that allowed a bf16 model with fp32 gradient accumulation without ZeRO, so that combination now raises NotImplementedError.
- deepspeedai#7788 changed test_bf16_optimizer_fragments to force BF16_Optimizer by setting grad_accum_dtype=fp32, but it kept ZeRO stage 0, which is now invalid after deepspeedai#7790.

Signed-off-by: Masahiro Tanaka <mtanaka@anyscale.com>
1 parent: 15ad92b

File tree

1 file changed: +1 −1 lines changed

tests/unit/runtime/zero/test_zero_tensor_fragment.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -179,7 +179,7 @@ def test_bf16_optimizer_fragments(self, frozen_weights):
         "grad_accum_dtype": "fp32"
     },
     "zero_optimization": {
-        "stage": 0,
+        "stage": 1,
     }
 }
```
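For context, here is a minimal sketch of the config combination the test now exercises, with an illustrative guard showing why ZeRO stage 0 became invalid after deepspeedai#7790. Only `grad_accum_dtype`, `zero_optimization`, and `stage` appear in the diff; the surrounding key names (`bf16`, `data_types`) and the `check_bf16_fp32_accum` helper are assumptions for illustration, not DeepSpeed's actual internals.

```python
# Hedged sketch: the valid bf16 + fp32-grad-accum combination requires
# ZeRO stage >= 1. Key layout ("bf16", "data_types") is an assumption
# based on common DeepSpeed config structure; only the values shown in
# the diff are taken from the commit.
config = {
    "bf16": {"enabled": True},
    "data_types": {"grad_accum_dtype": "fp32"},
    "zero_optimization": {"stage": 1},  # was 0 before this fix
}


def check_bf16_fp32_accum(cfg: dict) -> None:
    """Illustrative (hypothetical) guard mirroring the constraint from
    deepspeedai#7790: a bf16 model with fp32 gradient accumulation and
    no ZeRO (stage 0) is no longer supported."""
    bf16_on = cfg.get("bf16", {}).get("enabled", False)
    accum_fp32 = cfg.get("data_types", {}).get("grad_accum_dtype") == "fp32"
    stage = cfg.get("zero_optimization", {}).get("stage", 0)
    if bf16_on and accum_fp32 and stage == 0:
        raise NotImplementedError(
            "bf16 model with fp32 gradient accumulation requires ZeRO stage >= 1"
        )


check_bf16_fp32_accum(config)  # stage 1: no error
```

With `"stage": 0` the same check raises `NotImplementedError`, which is exactly the failure mode the one-line change in the test avoids.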
