Could we consider adding support for Megatron-MoE's ETP (expert tensor parallel) sharding?
Right now, when the expert-parallel groups for MoE are initialized, they inherit the non-MoE tensor-parallel group by default, so the expert weights cannot be sharded with a tensor-parallel degree different from that of the dense layers.
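
For context, here is a minimal sketch of what decoupled ETP group construction could look like with `torch.distributed`. This is not Megatron's actual initialization code: the function name, the `etp_size` parameter, and the contiguous rank layout are all assumptions for illustration.

```python
import torch.distributed as dist

def build_expert_tensor_parallel_groups(world_size: int, etp_size: int):
    """Create expert-tensor-parallel (ETP) process groups that are decoupled
    from the dense tensor-parallel groups.

    Hypothetical layout: ranks are partitioned into contiguous slices of
    `etp_size` ranks, each slice forming one ETP group.
    """
    assert world_size % etp_size == 0, "world_size must be divisible by etp_size"
    rank = dist.get_rank()
    etp_group = None
    # dist.new_group is a collective call: every rank must enumerate every
    # group, but each rank only keeps the handle for the slice it belongs to.
    for start in range(0, world_size, etp_size):
        ranks = list(range(start, start + etp_size))
        group = dist.new_group(ranks)
        if rank in ranks:
            etp_group = group
    return etp_group
```

With a separate group like this, the expert linear layers could shard over the ETP group while the dense layers keep the existing tensor-parallel group, instead of both being forced onto the same one.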