Hello, I have carefully reviewed the previous issues and found that you don't seem to plan to officially support single-machine multi-GPU training for the various models. May I ask: can I simply replace
```bash
accelerate launch \
  examples/flux/model_training/train.py \
```
with
```bash
accelerate launch \
  --multi_gpu \
  --num_processes 3 \
  --gpu_ids 5,6,7 \
  --mixed_precision bf16 \
  examples/flux/model_training/train.py \
```
to achieve multi-GPU training? And for later multi-machine (multi-node) training, can it likewise be done by directly modifying this launch command, while the training logic remains completely correct?
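For reference, this is a minimal sketch of what I assume the multi-node variant would look like using standard `accelerate launch` flags (assumptions: 2 machines with 3 GPUs each, so 6 total processes; the IP address and port are placeholders). Whether `train.py` itself behaves correctly under this setup is exactly what I'm asking about:

```bash
# Sketch of a two-machine launch (assumed topology: 2 machines x 3 GPUs).
# Run the same command on every machine, changing only --machine_rank
# (0 on the main node, 1 on the second).
# Note: --num_processes is the TOTAL process count across all machines.
# The IP and port below are placeholders for the main node's address.
accelerate launch \
  --multi_gpu \
  --num_machines 2 \
  --machine_rank 0 \
  --main_process_ip 192.168.1.1 \
  --main_process_port 29500 \
  --num_processes 6 \
  --mixed_precision bf16 \
  examples/flux/model_training/train.py \
```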