@@ -342,13 +342,15 @@ execute the trial (training) on different workers.
 
 At the time of writing, KerasTuner library can be used for parallel tuning with
 single machine `tf.distribute.Strategy`, e.g.,
-[`MirroredStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy)
-, multi-worker strategy (distributed training for trial) support is on the
-roadmap (note that cluster managing is not part of the library).
+[`MirroredStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy),
+while multi-worker strategy (distributed training for a trial) support is on the
+[roadmap of KerasTuner](https://keras-team.github.io/keras-tuner/tutorials/distributed-tuning/#data-parallelism-with-tfdistribute).
+Note that cluster management of multiple worker machines is not part of the
+KerasTuner library, nor is it planned to be.
 
 At the time of writing, TFX doesn’t have the ability to manage the multi-worker
-cluster and the centralized optimization service, so parallel tuning or
-distributed training is not supported natively in TFX (local or on-prem), but in
+cluster and the centralized optimization service. As a result, parallel tuning and
+distributed training are not supported natively in TFX (local or on-prem). In
 the next section, we will discuss the integration for Google Cloud. Similar
 parallel tuning support can be built for other execution environments.
 
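For reference, the single-machine data-parallel case mentioned in the paragraph above looks roughly like the following. This is a minimal sketch, assuming the `kerastuner` package and the `distribution_strategy` tuner argument described in the linked KerasTuner distributed-tuning guide; the model, toy data, and directory names are illustrative placeholders, not part of TFX.

```python
# Minimal sketch: data parallelism within each trial via MirroredStrategy,
# assuming the `distribution_strategy` argument from the KerasTuner guide
# linked above. Model, data, and paths are placeholders.
import numpy as np
import tensorflow as tf
import kerastuner as kt


def build_model(hp):
    """Builds a small classifier whose width and learning rate are tuned."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(
            hp.Int('units', min_value=32, max_value=128, step=32),
            activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model


# Toy data, only to make the sketch runnable.
x_train = np.random.rand(256, 20).astype('float32')
y_train = np.random.randint(0, 10, size=(256,))
x_val = np.random.rand(64, 20).astype('float32')
y_val = np.random.randint(0, 10, size=(64,))

tuner = kt.RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=3,
    # Each trial's training is mirrored across the local GPUs (or falls back
    # to CPU); trials themselves still run sequentially on this one machine.
    distribution_strategy=tf.distribute.MirroredStrategy(),
    directory='/tmp/kt_mirrored',
    project_name='mirrored_example')

tuner.search(x_train, y_train, epochs=2, validation_data=(x_val, y_val))
```

Running trials in parallel across multiple worker machines, or multi-worker training within a trial, is exactly the roadmap item referenced above and is not covered by this sketch.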