Finetuning with multiple GPUs #385
francoisabcd started this conversation in General
Replies: 2 comments
It might be possible to enable fine-tuning on multiple GPUs, but we have not tested this functionality and currently do not plan to actively work on multi-GPU support. Of course, feel free to share your experience, and contributions are always welcome.
@francoisabcd Here's an MWE of fine-tuning Chronos-2 on multiple GPUs:

```python
import numpy as np
import torch.distributed as dist

from chronos import Chronos2Pipeline

def get_rank():
    if dist.is_initialized() and dist.is_torchelastic_launched():
        return dist.get_rank()
    return 0

def cleanup_ddp():
    # Tear down the process group that DDP training sets up under torchrun.
    if dist.is_torchelastic_launched():
        dist.destroy_process_group()

def generate_data(num_items: int = 10_000):
    # Seed per rank so each process generates different synthetic data.
    rng = np.random.default_rng(seed=42 + get_rank())
    train_data = [{"target": rng.normal(size=2048)} for _ in range(num_items)]
    return train_data

def main():
    train_data = generate_data()
    pipeline = Chronos2Pipeline.from_pretrained("amazon/chronos-2", device_map="cuda")
    pipeline.fit(train_data, context_length=512, prediction_length=64, ddp_find_unused_parameters=False, num_steps=10)
    cleanup_ddp()

if __name__ == "__main__":
    main()
```

You can run it like:

```shell
torchrun --standalone --nproc-per-node=4 example.py
```
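If you want to confirm that each process launched by `torchrun` ends up on its own GPU, here is a minimal, hypothetical helper you could call from `main()` after `pipeline.fit(...)`. It is not part of the Chronos API and only uses standard `torch` / `torch.distributed` calls:

```python
import torch
import torch.distributed as dist

def report_ddp_setup():
    # Rank and world size are only meaningful once the default process group
    # has been initialized (e.g. by the DDP setup inside pipeline.fit()).
    rank = dist.get_rank() if dist.is_initialized() else 0
    world_size = dist.get_world_size() if dist.is_initialized() else 1
    device = f"cuda:{torch.cuda.current_device()}" if torch.cuda.is_available() else "cpu"
    print(f"rank {rank}/{world_size} running on {device}")
```

Assuming `fit` follows the usual Hugging Face Trainer DDP setup, a run with `--nproc-per-node=4` should print four lines, one per rank, each on a different CUDA device.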
Hi,
I have tried to fine-tune the model using multiple GPUs, but the code does not work. It works if I only use one GPU.
Is it possible to fine-tune on multiple GPUs?
Thank you