
Commit d2db9b3

Update CommunicationOptions link, revert BackupAndRestore to stable for v2.8
1 parent 789a89c commit d2db9b3

File tree

1 file changed: +3 -3 lines changed

site/en/tutorials/distribute/multi_worker_with_keras.ipynb

Lines changed: 3 additions & 3 deletions
@@ -459,7 +459,7 @@
     "id": "N0iv7SyyAohc"
    },
    "source": [
-    "Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy()` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet, the above strategy is effectively single-worker training."
+    "Note: `TF_CONFIG` is parsed and TensorFlow's GRPC servers are started at the time `MultiWorkerMirroredStrategy` is called, so the `TF_CONFIG` environment variable must be set before a `tf.distribute.Strategy` instance is created. Since `TF_CONFIG` is not set yet, the above strategy is effectively single-worker training."
    ]
   },
   {
@@ -468,7 +468,7 @@
     "id": "FMy2VM4Akzpr"
    },
    "source": [
-    "`MultiWorkerMirroredStrategy` provides multiple implementations via the [`CommunicationOptions`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/CommunicationOptions) parameter: 1) `RING` implements ring-based collectives using gRPC as the cross-host communication layer; 2) `NCCL` uses the [NVIDIA Collective Communication Library](https://developer.nvidia.com/nccl) to implement collectives; and 3) `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster."
+    "`MultiWorkerMirroredStrategy` provides multiple implementations via the `tf.distribute.experimental.CommunicationOptions` parameter: 1) `RING` implements ring-based collectives using gRPC as the cross-host communication layer; 2) `NCCL` uses the [NVIDIA Collective Communication Library](https://developer.nvidia.com/nccl) to implement collectives; and 3) `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster."
    ]
   },
   {
@@ -1145,7 +1145,7 @@
     "# Multi-worker training with `MultiWorkerMirroredStrategy`\n",
     "# and the `BackupAndRestore` callback.\n",
     "\n",
-    "callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]\n",
+    "callbacks = [tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup')]\n",
     "with strategy.scope():\n",
     "  multi_worker_model = mnist_setup.build_and_compile_cnn_model()\n",
     "multi_worker_model.fit(multi_worker_dataset,\n",

0 commit comments
