@@ -28,14 +28,13 @@ Welcome to the reproducible benchmark recipes repository for GPUs! This reposito
 
 ### Training benchmarks A3 Ultra
 
-| Models | GPU Machine Type | Framework | Workload Type | Orchestrator | Link to the recipe |
-| ---------------- | ---------------- | --------- | ------------------- | ------------ | ------------------ |
-| **Llama-3.1-70B** | [A3 Ultra (NVIDIA H200)](https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-ultra-vms) | MaxText | Pre-training | GKE | [Link](./training/a3ultra/llama-3.1-70b/maxtext-pretraining-gke/README.md)
-| **Llama-3.1-70B** | [A3 Ultra (NVIDIA H200)](https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-ultra-vms) | NeMo | Pre-training | GKE | [Link](./training/a3ultra/llama-3.1-70b/nemo-pretraining-gke/README.md)
-| **Llama-3.1-405B** | [A3 Ultra (NVIDIA H200)](https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-ultra-vms) | NeMo | Pre-training | GKE | [Link](./training/a3ultra/llama-3.1-405b/maxtext-pretraining-gke/README.md)
-| **Mixtral-8-7B** | [A3 Ultra (NVIDIA H200)](https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-ultra-vms) | MaxText | Pre-training | GKE | [Link](./training/a3ultra/mixtral-8x7b/maxtext-pretraining-gke/README.md)
-| **Mixtral-8-7B** | [A3 Ultra (NVIDIA H200)](https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-ultra-vms) | NeMo | Pre-training | GKE | [Link](./training/a3ultra/mixtral-8x7b/nemo-pretraining-gke/README.md) |
-
+Models | GPU Machine Type | Framework | Workload Type | Orchestrator | Link to the recipe
+------------------ | ----------------------------------------------------------------------------------------------------------- | --------- | ------------- | ------------ | ------------------
+**Llama-3.1-70B** | [A3 Ultra (NVIDIA H200)](https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-ultra-vms) | MaxText | Pre-training | GKE | [Link](./training/a3ultra/llama-3.1-70b/maxtext-pretraining-gke/README.md)
+**Llama-3.1-70B** | [A3 Ultra (NVIDIA H200)](https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-ultra-vms) | NeMo | Pre-training | GKE | [Link](./training/a3ultra/llama-3.1-70b/nemo-pretraining-gke/README.md)
+**Llama-3.1-405B** | [A3 Ultra (NVIDIA H200)](https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-ultra-vms) | MaxText | Pre-training | GKE | [Link](./training/a3ultra/llama-3.1-405b/maxtext-pretraining-gke/README.md)
+**Mixtral-8-7B** | [A3 Ultra (NVIDIA H200)](https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-ultra-vms) | MaxText | Pre-training | GKE | [Link](./training/a3ultra/mixtral-8x7b/maxtext-pretraining-gke/README.md)
+**Mixtral-8-7B** | [A3 Ultra (NVIDIA H200)](https://cloud.google.com/compute/docs/accelerator-optimized-machines#a3-ultra-vms) | NeMo | Pre-training | GKE | [Link](./training/a3ultra/mixtral-8x7b/nemo-pretraining-gke/README.md)
 
 ### Inference benchmarks A3 Mega
 