There are several use cases that require benchmarking beyond inference alone. This issue tracks integrating training benchmarking into FMBench.
We will experiment with fine-tuning Llama3-8b on a trn1.32xlarge instance using Hugging Face's Optimum Neuron library. Optimum Neuron is the interface between the Transformers library and AWS Accelerators (Trainium and Inferentia). It provides a set of tools enabling easy model loading, training, and inference on single- and multi-accelerator setups for different downstream tasks. The list of officially validated models and tasks is available in the Optimum Neuron documentation, and users can try other models and tasks with only a few changes. A sketch of the fine-tuning flow is included after the tutorial link below.
Link: https://huggingface.co/docs/optimum-neuron/en/training_tutorials/finetune_llm
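As a starting point for the integration, here is a minimal sketch of the fine-tuning flow described in the linked tutorial, using Optimum Neuron's `NeuronTrainer` / `NeuronTrainingArguments`. The dataset, hyperparameters, and parallelism settings below are assumptions for illustration, not values taken from FMBench or from this issue:

```python
# Minimal sketch of the Optimum Neuron fine-tuning flow from the linked tutorial.
# Dataset, hyperparameters, and parallelism settings are placeholder assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling
from optimum.neuron import NeuronTrainer, NeuronTrainingArguments

MODEL_ID = "meta-llama/Meta-Llama-3-8B"  # gated model; requires Hugging Face access approval

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token

# Placeholder instruction-tuning dataset for illustration.
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def tokenize(batch):
    # Concatenate instruction and response into a single training sequence.
    text = [i + "\n" + r for i, r in zip(batch["instruction"], batch["response"])]
    return tokenizer(text, truncation=True, max_length=1024, padding="max_length")

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# NeuronTrainingArguments mirrors transformers.TrainingArguments and adds
# Neuron-specific options; tensor_parallel_size=8 is an assumed sharding choice
# for the 32 NeuronCores on a trn1.32xlarge.
training_args = NeuronTrainingArguments(
    output_dir="llama3-8b-trn1-finetune",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
    tensor_parallel_size=8,
    logging_steps=10,
)

trainer = NeuronTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()
```

On a trn1.32xlarge this would presumably be launched with `torchrun` across the NeuronCores (e.g. `torchrun --nproc_per_node=32 finetune_llama3.py`, a hypothetical script name), and the throughput and time-per-step metrics it reports are the kind of numbers the FMBench training-benchmark integration would need to capture.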