
Commit 26519c0

Revise README for llama3-1-70b pretraining
Updated README to reflect specific workload details for llama3-1-70b.
1 parent 062b037 · commit 26519c0

File tree (1 file changed: +2 −2 lines)

  • training/a4/llama3-1-70b/nemo-pretraining-gke/32node-bf16-seq8192-gbs2048/recipe


training/a4/llama3-1-70b/nemo-pretraining-gke/32node-bf16-seq8192-gbs2048/recipe/README.md

Lines changed: 2 additions, 2 deletions

@@ -1,7 +1,7 @@
 <!-- mdformat global-off -->
-# Pretrain $USER-a4-llama3-1-70b workloads on a4 GKE Node pools with Nvidia NeMo Framework
+# Pretrain llama3-1-70b-seq8192-gbs2048-mbs1-gpus256 workloads on a4 GKE Node pools with Nvidia NeMo Framework

-This recipe outlines the steps for running a $USER-a4-llama3-1-70b pretraining
+This recipe outlines the steps for running a llama3-1-70b-seq8192-gbs2048-mbs1-gpus256 pretraining
 workload on [a4 GKE Node pools](https://cloud.google.com/kubernetes-engine) by using the
 [NVIDIA NeMo framework](https://github.com/NVIDIA/nemo).
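The new title encodes the workload configuration in its name: sequence length 8192, global batch size 2048, micro batch size 1, and 256 GPUs (32 nodes × 8 GPUs). The recipe itself is not part of this diff, so the Python sketch below is only an illustration of how those numbers typically relate in a NeMo-style pretraining setup; the tensor-parallel and pipeline-parallel degrees are assumed values, not taken from the recipe.

```python
# Hypothetical sketch: how the numbers in the workload name relate to each other.
# Assumes 32 nodes x 8 GPUs = 256 GPUs; the parallelism split (TP/PP) is chosen
# purely for illustration and may differ from the actual recipe.

nodes, gpus_per_node = 32, 8
num_gpus = nodes * gpus_per_node          # 256 -> "gpus256"

global_batch_size = 2048                  # "gbs2048"
micro_batch_size = 1                      # "mbs1"
seq_length = 8192                         # "seq8192"

# Assumed example split: tensor parallel = 4, pipeline parallel = 2.
tensor_parallel = 4
pipeline_parallel = 2
data_parallel = num_gpus // (tensor_parallel * pipeline_parallel)   # 32

# Gradient-accumulation steps implied by the batch sizes under that split.
grad_accum_steps = global_batch_size // (micro_batch_size * data_parallel)  # 64

tokens_per_step = global_batch_size * seq_length  # 16,777,216 tokens per optimizer step
print(data_parallel, grad_accum_steps, tokens_per_step)
```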