Commit ec193af

remove code from deleted notebook
1 parent c973d46 commit ec193af

File tree

1 file changed: +0 −1 lines changed

articles/machine-learning/how-to-train-distributed-gpu.md

Lines changed: 0 additions & 1 deletion
@@ -45,7 +45,6 @@ To run distributed training using MPI, follow these steps:
 1. Define a `command` with `instance_count`. `instance_count` should be equal to the number of GPUs per node for per-process-launch, or set to 1 (the default) for per-node-launch if the user script is responsible for launching the processes per node.
 1. Use the `distribution` parameter of the `command` to specify settings for `MpiDistribution`.
 
-[!notebook-python[](~/azureml-examples-temp-fix/sdk/python/jobs/single-step/tensorflow/mnist-distributed-horovod/tensorflow-mnist-distributed-horovod.ipynb?name=job)]
 
 ### Horovod
 
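The two steps kept in the diff context above can be sketched with the Azure ML Python SDK v2 (`azure-ai-ml`). This is a minimal illustration, not the notebook removed by this commit: the code folder, training script, curated environment, and compute target names below are hypothetical placeholders.

```python
# Sketch of an MPI job definition with Azure ML SDK v2 (azure-ai-ml).
# All resource names here are illustrative assumptions, not values
# taken from this commit or its repository.
from azure.ai.ml import MpiDistribution, command

job = command(
    code="./src",                 # placeholder: local folder containing train.py
    command="python train.py",
    environment="AzureML-tensorflow-2.16-cuda12@latest",  # placeholder curated env
    compute="gpu-cluster",        # placeholder compute target
    # Per-node-launch: the user script spawns one process per GPU itself,
    # so instance_count stays at 1 (the default). For per-process-launch,
    # set instance_count to the number of GPUs per node instead.
    instance_count=1,
    distribution=MpiDistribution(process_count_per_instance=1),
)

# Submitting requires an authenticated MLClient against a workspace, e.g.:
# ml_client.jobs.create_or_update(job)
```

This is a job-configuration fragment: building the `command` object is local, but running it requires an Azure ML workspace and an `MLClient`.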

0 commit comments
