2 parents d29c62e + 80a6657 · commit 7598e22
llmtune/config.yml
@@ -59,8 +59,8 @@ training:
   optim: "paged_adamw_32bit"
   logging_steps: 1
   learning_rate: 2.0e-4
-  bf16: true # Set to true for mixed precision training on Newer GPUs
-  tf32: true
+  bf16: true # [Ampere+] Set to true for mixed precision training on Newer GPUs
+  tf32: true # [Ampere+] Set to true for mixed precision training on Newer GPUs
   # fp16: false # Set to true for mixed precision training on Older GPUs
   max_grad_norm: 0.3
   warmup_ratio: 0.03
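
For reference, bf16 and tf32 are only available on NVIDIA Ampere-class GPUs or newer (compute capability 8.0+), which is what the "[Ampere+]" tag in the comments indicates. Below is a minimal sketch, assuming a PyTorch training backend (not part of this commit), of how one might check hardware support at runtime before enabling these flags:

import torch

# Assumption: PyTorch backend. bf16/tf32 need compute capability >= 8.0 (Ampere+).
if torch.cuda.is_available():
    major, _ = torch.cuda.get_device_capability()
    is_ampere_plus = major >= 8
    print(f"bf16 supported: {torch.cuda.is_bf16_supported()}")
    print(f"tf32 eligible (Ampere+): {is_ampere_plus}")
    # tf32 must also be enabled explicitly for matmul/cuDNN kernels in recent PyTorch:
    torch.backends.cuda.matmul.allow_tf32 = is_ampere_plus
    torch.backends.cudnn.allow_tf32 = is_ampere_plus

On older (pre-Ampere) hardware, the commented-out fp16 option in the config remains the usual fallback for mixed-precision training.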