1 parent 062b037 commit 26519c0
training/a4/llama3-1-70b/nemo-pretraining-gke/32node-bf16-seq8192-gbs2048/recipe/README.md
@@ -1,7 +1,7 @@
 <!-- mdformat global-off -->
-# Pretrain $USER-a4-llama3-1-70b workloads on a4 GKE Node pools with Nvidia NeMo Framework
+# Pretrain llama3-1-70b-seq8192-gbs2048-mbs1-gpus256 workloads on a4 GKE Node pools with Nvidia NeMo Framework

-This recipe outlines the steps for running a $USER-a4-llama3-1-70b pretraining
+This recipe outlines the steps for running a llama3-1-70b-seq8192-gbs2048-mbs1-gpus256 pretraining
 workload on [a4 GKE Node pools](https://cloud.google.com/kubernetes-engine) by using the
 [NVIDIA NeMo framework](https://github.com/NVIDIA/nemo).
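The change replaces the `$USER-a4-llama3-1-70b` placeholder with a workload name that spells out the training configuration: sequence length 8192, global batch size 2048, micro batch size 1, and 256 GPUs (matching the recipe path's 32 nodes and bf16 precision). As a minimal, hypothetical sketch (this helper is not part of the recipe), the parameters embedded in such a name can be decoded like this:

```python
import re

def parse_workload_name(name: str) -> dict:
    """Extract the training parameters encoded in a recipe workload name,
    e.g. "llama3-1-70b-seq8192-gbs2048-mbs1-gpus256" (hypothetical helper)."""
    pattern = r"seq(?P<seq_len>\d+)-gbs(?P<global_batch>\d+)-mbs(?P<micro_batch>\d+)-gpus(?P<gpus>\d+)"
    match = re.search(pattern, name)
    if match is None:
        raise ValueError(f"unrecognized workload name: {name}")
    return {key: int(value) for key, value in match.groupdict().items()}

print(parse_workload_name("llama3-1-70b-seq8192-gbs2048-mbs1-gpus256"))
# {'seq_len': 8192, 'global_batch': 2048, 'micro_batch': 1, 'gpus': 256}
```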