Commit 63e1d78

Merge branch 'master' into zenflow_blog
2 parents: 2e11658 + 0e51e09

File tree: 3 files changed, +13 −0 lines

README.md

Lines changed: 1 addition & 0 deletions

@@ -6,6 +6,7 @@
 [![Twitter](https://img.shields.io/twitter/follow/DeepSpeedAI)](https://twitter.com/intent/follow?screen_name=DeepSpeedAI)
 [![Japanese Twitter](https://img.shields.io/badge/%E6%97%A5%E6%9C%AC%E8%AA%9ETwitter-%40DeepSpeedAI_JP-blue)](https://twitter.com/DeepSpeedAI_JP)
 [![Chinese Zhihu](https://img.shields.io/badge/%E7%9F%A5%E4%B9%8E-%E5%BE%AE%E8%BD%AFDeepSpeed-blue)](https://www.zhihu.com/people/deepspeed)
+[![Slack](https://img.shields.io/badge/Slack-4A154B?style=for-the-badge&logo=slack&logoColor=white)](https://join.slack.com/t/deepspeedworkspace/shared_invite/zt-3a8pjd8dd-PCj2hMvR4Y2syPwVnjEoww)


 <div align="center">

deepspeed/runtime/engine.py

Lines changed: 9 additions & 0 deletions

@@ -730,6 +730,15 @@ def random_ltd_initialize(self):
             raise ValueError(f'not yet support')
             #self.lr_scheduler = lr_schedules.WarmupLayerTokenDecayLR(self.optimizer, self.random_ltd_scheduler)

+    def get_data_parallel_rank(self):
+        return groups.get_data_parallel_rank()
+
+    def get_tensor_parallel_rank(self):
+        return groups.get_tensor_model_parallel_rank()
+
+    def get_model_parallel_rank(self):
+        return groups.get_model_parallel_rank()
+
     def get_sequence_parallel_group(self):
         return self.seq_parallel_group
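The three new accessors on the engine simply delegate to `deepspeed.utils.groups`. As a rough illustration of what these ranks mean (a minimal, self-contained sketch, not DeepSpeed's actual implementation), here is how a global rank can decompose into tensor-, pipeline-, and data-parallel ranks under one common layout (tensor innermost, then pipeline, then data):

```python
# Hypothetical illustration only: one common way a global rank maps to
# (tensor_rank, pipeline_rank, data_rank). DeepSpeed's real mapping lives
# in deepspeed.utils.groups and may differ.

def parallel_ranks(global_rank, tp_size, pp_size):
    """Return (tp_rank, pp_rank, dp_rank) for a tp-inner/pp-middle/dp-outer layout."""
    tp_rank = global_rank % tp_size                    # fastest-varying dimension
    pp_rank = (global_rank // tp_size) % pp_size       # middle dimension
    dp_rank = global_rank // (tp_size * pp_size)       # slowest-varying dimension
    return tp_rank, pp_rank, dp_rank

if __name__ == "__main__":
    # 8 GPUs with tp=2 and pp=2 leaves dp=2 replicas.
    for rank in range(8):
        print(rank, parallel_ranks(rank, tp_size=2, pp_size=2))
```

With the committed accessors, user code can read these ranks from the engine itself instead of importing `groups` directly.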

deepspeed/runtime/pipe/engine.py

Lines changed: 3 additions & 0 deletions

@@ -535,6 +535,9 @@ def is_last_stage(self):
         """True if this process is in the last stage in the pipeline."""
         return self.stage_id == self.num_stages - 1

+    def get_pipeline_parallel_rank(self):
+        return self.stage_id
+
     def _reduce_outputs(self, outputs, reduce='avg', reduce_dp=True, micro_batches=None):
         if reduce is None:
             return outputs
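In the pipeline engine, the pipeline-parallel rank is just the stage id, which is typically used to gate stage-specific work. A minimal toy sketch of that pattern (class and names are illustrative, not DeepSpeed's actual API surface):

```python
# Toy sketch of the pattern in the diff above: the pipeline-parallel rank
# is the stage id, and the last stage is where loss computation usually lives.

class ToyPipeEngine:
    def __init__(self, stage_id, num_stages):
        self.stage_id = stage_id
        self.num_stages = num_stages

    def get_pipeline_parallel_rank(self):
        # Mirrors the added accessor: pipeline rank == stage id.
        return self.stage_id

    def is_last_stage(self):
        # Mirrors the existing helper shown in the hunk context.
        return self.stage_id == self.num_stages - 1

if __name__ == "__main__":
    engine = ToyPipeEngine(stage_id=3, num_stages=4)
    print(engine.get_pipeline_parallel_rank())  # 3
    print(engine.is_last_stage())               # True
```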
