Commit b9a59f5

Add distributed options to docs (#4149) (#4152)
* add distributed options to docs
* refine experimental to prototype
1 parent c37d65a commit b9a59f5

1 file changed (+6, −0 lines)


docs/tutorials/features/advanced_configuration.md

Lines changed: 6 additions & 0 deletions
@@ -25,6 +25,12 @@ The following launch options are supported in Intel® Extension for PyTorch\*. U
 | **Launch Option<br>Experimental** | **Default<br>Value** | **Description** |
 | ------ | ------ | ------ |
 
+| **Distributed Option<br>GPU ONLY** | **Default<br>Value** | **Description** |
+| ------ | ------ | ------ |
+| TORCH_LLM_ALLREDUCE | 0 | This is a prototype feature that provides better scale-up performance by enabling optimized collective algorithms in oneCCL and asynchronous execution in torch-ccl. It requires XeLink to be enabled for cross-card communication. With the default setting 0, this feature is disabled. |
+| CCL_BLOCKING_WAIT | 0 | This is a prototype feature that controls whether collective execution on XPU is host-blocking or non-blocking. The default setting 0 enables blocking behavior. |
+| CCL_SAME_STREAM | 0 | This is a prototype feature that allows using a computation stream as the communication stream, minimizing stream synchronization overhead. The default setting 0 uses separate streams for communication. |
+
 For the above launch options which can be configured to 1 or 0, users can also configure them to ON or OFF, where ON equals 1 and OFF equals 0.
 
 Examples to configure the launch options:</br>
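As context for the options added in this diff: they are environment variables, so they must be set before oneCCL/torch-ccl is initialized. Below is a minimal sketch, not an example from the commit itself; it assumes a typical torch-ccl setup, and the package and backend names (`oneccl_bindings_for_pytorch`, backend `"ccl"`) and the launcher-provided rendezvous variables are assumptions reflecting common usage, not taken from this change.

```python
import os

# Sketch only: set the distributed options before the collectives library
# initializes, since they are read from the environment at startup.
os.environ["TORCH_LLM_ALLREDUCE"] = "1"  # opt in to optimized allreduce (requires XeLink)
os.environ["CCL_BLOCKING_WAIT"] = "0"    # 0 = host-blocking collectives (the default)
os.environ["CCL_SAME_STREAM"] = "0"      # 0 = separate communication streams (the default)

import torch
import torch.distributed as dist
import intel_extension_for_pytorch  # noqa: F401  (enables XPU support)
import oneccl_bindings_for_pytorch  # noqa: F401  (assumption: registers the "ccl" backend)

# Assumption: RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT are supplied
# by the launcher (e.g. mpirun or torchrun).
dist.init_process_group(backend="ccl")

t = torch.ones(4, device=f"xpu:{dist.get_rank()}")
dist.all_reduce(t)  # runs through oneCCL and honors the options set above
```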
