torchtitan/experiments/torchcomms/README.md
2 additions & 2 deletions
@@ -25,8 +25,10 @@ Locally tested with:
 - **FSDP** (`fully_shard`) - Fully Sharded Data Parallel
 - **TP** - Tensor Parallelism
 - **PP** - Pipeline Parallelism
+- **CP** - Context Parallelism
 - **EP** - Expert Parallelism
 - **compile** - `torch.compile` integration
+- **Async TP** - Async TP integration

 ### Performance

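For context, here is a hedged sketch of how the parallelisms listed above are typically combined in a torchtitan run. The `CONFIG_FILE` entry point and the `--parallelism.*` / `--compile.enable` override names are assumptions about the torchtitan CLI, not something this diff specifies, and may differ for the torchcomms experiment.

```bash
# Hypothetical sketch: combine the tested parallelisms via torchtitan CLI overrides.
# All flag names below are assumed and should be checked against the config schema.
CONFIG_FILE=<path/to/your_config.toml> ./run_train.sh \
  --parallelism.data_parallel_shard_degree=2 \
  --parallelism.tensor_parallel_degree=2 \
  --parallelism.pipeline_parallel_degree=2 \
  --parallelism.context_parallel_degree=2 \
  --compile.enable
```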
@@ -46,8 +48,6 @@ Locally tested with:

 ### Known Issues

-- **CP** (Context Parallelism) - Temporarily not working. Work in progress.
-- **Async TP** - Temporarily not working. Work in progress.
 - **Memory Overhead** - TorchComms requires higher peak memory usage. As a workaround, reduce `local_batch_size` to avoid out-of-memory errors.
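As a hedged illustration of the `local_batch_size` workaround mentioned above: assuming torchtitan accepts command-line overrides for the training config (the `--training.local_batch_size` flag and the `run_train.sh` entry point are assumptions, not confirmed by this diff), lowering the per-rank batch size might look like this:

```bash
# Hypothetical sketch: lower the per-rank batch size to reduce TorchComms' peak memory.
# The flag name and entry point are assumed; adjust to match your config schema.
CONFIG_FILE=<path/to/your_config.toml> ./run_train.sh --training.local_batch_size=2
```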