@@ -331,7 +331,7 @@ class TunerSpec(ComponentSpec):
 }
 ```
 
-The KerasTuner library allows users to config
+The KerasTuner library allows users to configure
 [`tf.distribute.Strategy`](https://www.tensorflow.org/tutorials/distribute/keras)
 if they are using
 [`kerastuner.Tuner`](https://github.com/keras-team/keras-tuner/blob/1.0.0/kerastuner/engine/tuner.py)
@@ -340,15 +340,17 @@ training) is executed in a single worker, as such only single machine strategy
 is allowed. To support multi-worker distributed training, we need to be able to
 execute the trial (training) on different workers.
 
-At the time of writing, KerasTuner library can be used for parallel tuning with
-single machine `tf.distribute.Strategy`, e.g.,
-[`MirroredStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy)
-, multi-worker strategy (distributed training for trial) support is on the
-roadmap (note that cluster managing is not part of the library).
-
-At the time of writing, TFX doesn’t have the ability to manage the multi-worker
-cluster and the centralized optimization service, so parallel tuning or
-distributed training is not supported natively in TFX (local or on-prem), but in
+At the time of writing, the KerasTuner library can be used for parallel
+execution of trials with a single machine `tf.distribute.Strategy`, e.g.,
+[`MirroredStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy).
+Multi-worker strategy (distributed training for a trial) support is on the
+[roadmap of KerasTuner](https://keras-team.github.io/keras-tuner/tutorials/distributed-tuning/#data-parallelism-with-tfdistribute).
+Note that cluster management of multiple worker machines is not part of the
+KerasTuner library.
+
+At the time of writing, TFX does not have the ability to manage the multi-worker
+cluster and the centralized optimization service. As a result, parallel tuning or
+distributed training is not supported natively in TFX (local or on-prem). In
 the next section, we will discuss the integration for Google Cloud. Similar
 parallel tuning support can be built for other execution environments.
 
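To make the configuration described above concrete, here is a minimal sketch (not part of the diff itself) of passing a single machine strategy to KerasTuner through the `distribution_strategy` argument accepted by `kerastuner.Tuner` subclasses; the model-building function, synthetic data, and output directory are made up for illustration:

```python
import numpy as np
import tensorflow as tf
from kerastuner.tuners import RandomSearch


def build_model(hp):
  # Hypothetical model-building function; `units` and `learning_rate` are
  # tuned purely for illustration.
  model = tf.keras.Sequential([
      tf.keras.layers.Dense(
          units=hp.Int('units', min_value=32, max_value=128, step=32),
          activation='relu',
          input_shape=(20,)),
      tf.keras.layers.Dense(1, activation='sigmoid'),
  ])
  model.compile(
      optimizer=tf.keras.optimizers.Adam(
          hp.Choice('learning_rate', values=[1e-2, 1e-3])),
      loss='binary_crossentropy',
      metrics=['accuracy'])
  return model


# A single machine strategy: each trial's training is replicated over the
# local GPUs; trials themselves still run one after another on this machine.
tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    distribution_strategy=tf.distribute.MirroredStrategy(),
    directory='/tmp/tuner_example',   # made-up output location
    project_name='strategy_example')

# Synthetic data, for illustration only.
x = np.random.random((256, 20)).astype('float32')
y = np.random.randint(0, 2, size=(256, 1))
tuner.search(x, y, epochs=2, validation_split=0.2)
```

If no GPU is available, `MirroredStrategy` falls back to the single CPU device, so the sketch still runs; it simply does not parallelize anything, which is consistent with the single-worker limitation discussed above.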