
Commit 0d45899 (parent: 19a2c78)

Reset redundant phrase.

1 file changed: rfcs/20200420-tfx-tuner-component.md (6 additions, 6 deletions)
@@ -331,7 +331,7 @@ class TunerSpec(ComponentSpec):
 }
 ```
 
-The KerasTuner library allows users to config
+The KerasTuner library allows users to configure
 [`tf.distribute.Strategy`](https://www.tensorflow.org/tutorials/distribute/kerass)
 if they are using
 [`kerastuner.Tuner`](https://github.com/keras-team/keras-tuner/blob/1.0.0/kerastuner/engine/tuner.py)
@@ -340,15 +340,15 @@ training) is executed in a single worker, as such only single machine strategy
 is allowed. To support multi-worker distributed training, we need to be able to
 execute the trial (training) on different workers.
 
-At the time of writing, KerasTuner library can be used for parallel tuning with
-single machine `tf.distribute.Strategy`, e.g.,
+At the time of writing, the KerasTuner library can be used for parallel
+execution of trials with a single machine `tf.distribute.Strategy`, e.g.,
 [`MirroredStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy),
 multi-worker strategy (distributed training for trial) support is on the
 [roadmap of KerasTuner](https://keras-team.github.io/keras-tuner/tutorials/distributed-tuning/#data-parallelism-with-tfdistribute).
-Note that cluster managment of multiple worker machines is part of neither
-the KerasTuner library, nor is planned to be.
+Note that cluster management of multiple worker machines is not part of the
+KerasTuner library.
 
-At the time of writing, TFX doesn’t have the ability to manage the multi-worker
+At the time of writing, TFX does not have the ability to manage the multi-worker
 cluster and the centralized optimization service. As a result parallel tuning or
 distributed training is not supported natively in TFX (local or on-prem). In
 the next section, we will discuss the integration for Google Cloud. Similar
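
For context on the passage being edited: below is a minimal sketch (not part of the commit or the RFC) of the KerasTuner usage it describes, passing a single-machine `tf.distribute.Strategy` to a tuner via the `distribution_strategy` argument. It assumes keras-tuner 1.0.x and TensorFlow 2.x; the model-building function, tensor shapes, and directory names are illustrative.

```python
import kerastuner
import tensorflow as tf


def build_model(hp):
    """Builds a small Keras model; `hp` supplies tuner-sampled hyperparameters."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(
            units=hp.Int("units", min_value=32, max_value=128, step=32),
            activation="relu",
            input_shape=(20,),
        ),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


# Single-machine strategy only: each trial's training is mirrored across the
# local GPUs. Multi-worker strategies are not supported by KerasTuner 1.0,
# matching the limitation described in the RFC text above.
tuner = kerastuner.tuners.RandomSearch(
    build_model,
    objective="val_loss",
    max_trials=5,
    distribution_strategy=tf.distribute.MirroredStrategy(),
    directory="tuner_dir",      # illustrative path and project name
    project_name="example",
)

# Typical usage: tuner.search(x_train, y_train, validation_data=(x_val, y_val))
```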
