@@ -331,7 +331,7 @@ class TunerSpec(ComponentSpec):
331331 }
332332```
333333
334- The KerasTuner library allows users to config
334+ The KerasTuner library allows users to configure
335335[`tf.distribute.Strategy`](https://www.tensorflow.org/tutorials/distribute/keras)
336336if they are using
337337[`kerastuner.Tuner`](https://github.com/keras-team/keras-tuner/blob/1.0.0/kerastuner/engine/tuner.py)
@@ -340,15 +340,15 @@ training) is executed in a single worker, as such only single machine strategy
340340is allowed. To support multi-worker distributed training, we need to be able to
341341execute the trial (training) on different workers.
342342
343- At the time of writing, KerasTuner library can be used for parallel tuning with
344- single machine `tf.distribute.Strategy`, e.g.,
343+ At the time of writing, the KerasTuner library can be used for parallel
344+ execution of trials with a single machine `tf.distribute.Strategy`, e.g.,
345345[`MirroredStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy);
346346multi-worker strategy (distributed training for a trial) support is on the
347347[roadmap of KerasTuner](https://keras-team.github.io/keras-tuner/tutorials/distributed-tuning/#data-parallelism-with-tfdistribute).
348- Note that cluster managment of multiple worker machines is part of neither
349- the KerasTuner library, nor is planned to be .
348+ Note that cluster management of multiple worker machines is not part of the
349+ KerasTuner library.
350350
351- At the time of writing, TFX doesn’t have the ability to manage the multi-worker
351+ At the time of writing, TFX does not have the ability to manage the multi-worker
352352cluster and the centralized optimization service. As a result, parallel tuning or
353353distributed training is not supported natively in TFX (local or on-prem). In
354354the next section, we will discuss the integration for Google Cloud. Similar
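
For reference alongside the paragraphs above, here is a minimal sketch (not part of the RFC) of how a single-machine strategy is handed to KerasTuner, assuming the `distribution_strategy` constructor argument of the built-in tuners; each trial's `model.fit` then runs under that strategy on one worker. The model, search space, and directory are illustrative placeholders.

```python
# A minimal sketch (not from the RFC): configuring a single-machine strategy
# for trial training via KerasTuner's `distribution_strategy` argument.
import kerastuner as kt
import tensorflow as tf


def build_model(hp):
    """Builds a small Keras model whose hidden width is tuned."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(
            units=hp.Int('units', min_value=32, max_value=128, step=32),
            activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model


tuner = kt.RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5,
    # Each trial's `model.fit` runs under this in-graph, single-machine
    # strategy; multi-worker strategies are not supported here.
    distribution_strategy=tf.distribute.MirroredStrategy(),
    directory='/tmp/tuner_demo')

# tuner.search(x_train, y_train, epochs=3, validation_data=(x_val, y_val))
```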
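
Similarly, a hypothetical illustration of what the cluster management mentioned above entails: some external system has to set `TF_CONFIG` on every worker before a multi-worker strategy can discover its peers, and this is the piece that neither the KerasTuner library nor local/on-prem TFX provides at the time of writing. Hostnames and ports below are placeholders.

```python
# Hypothetical illustration (not part of the RFC): the cluster description a
# multi-worker strategy expects. An external cluster manager must set
# TF_CONFIG like this on every worker before trial training starts.
import json
import os

import tensorflow as tf

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        # All workers participating in a single trial's training.
        'worker': ['worker0.example.com:2222', 'worker1.example.com:2222'],
    },
    # Identity of this particular process within the cluster.
    'task': {'type': 'worker', 'index': 0},
})

# The strategy reads TF_CONFIG to find its peers; the trial's model would
# then be built and trained under `strategy.scope()`.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
```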