This repository was archived by the owner on Jul 10, 2025. It is now read-only.

Commit c7ee13a

Author: ematejska
Merge pull request #273 from ucdmkt/patch-1
Fixing typo and refactor convoluted sentences
2 parents aae42c4 + 0d45899

File tree

1 file changed: +12 −10 lines changed

rfcs/20200420-tfx-tuner-component.md

Lines changed: 12 additions & 10 deletions

@@ -331,7 +331,7 @@ class TunerSpec(ComponentSpec):
 }
 ```

-The KerasTuner library allows users to config
+The KerasTuner library allows users to configure
 [`tf.distribute.Strategy`](https://www.tensorflow.org/tutorials/distribute/kerass)
 if they are using
 [`kerastuner.Tuner`](https://github.com/keras-team/keras-tuner/blob/1.0.0/kerastuner/engine/tuner.py)
@@ -340,15 +340,17 @@ training) is executed in a single worker, as such only single machine strategy
 is allowed. To support multi-worker distributed training, we need to be able to
 execute the trial (training) on different workers.

-At the time of writing, KerasTuner library can be used for parallel tuning with
-single machine `tf.distribute.Strategy`, e.g.,
-[`MirroredStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy)
-, multi-worker strategy (distributed training for trial) support is on the
-roadmap (note that cluster managing is not part of the library).
-
-At the time of writing, TFX doesn’t have the ability to manage the multi-worker
-cluster and the centralized optimization service, so parallel tuning or
-distributed training is not supported natively in TFX (local or on-prem), but in
+At the time of writing, the KerasTuner library can be used for parallel
+execution of trials with a single-machine `tf.distribute.Strategy`, e.g.,
+[`MirroredStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy);
+multi-worker strategy (distributed training for trial) support is on the
+[roadmap of KerasTuner](https://keras-team.github.io/keras-tuner/tutorials/distributed-tuning/#data-parallelism-with-tfdistribute).
+Note that cluster management of multiple worker machines is not part of the
+KerasTuner library.
+
+At the time of writing, TFX does not have the ability to manage the multi-worker
+cluster and the centralized optimization service. As a result, parallel tuning or
+distributed training is not supported natively in TFX (local or on-prem). In
 the next section, we will discuss the integration for Google Cloud. Similar
 parallel tuning support can be built for other execution environments.
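
For reference, the change above concerns configuring a single-machine `tf.distribute.Strategy` per trial in KerasTuner. A minimal sketch of what that looks like, assuming the `distribution_strategy` argument accepted by KerasTuner 1.0.0's built-in `kerastuner.Tuner` subclasses (`RandomSearch` here); the hypermodel, hyperparameter, and directory names are illustrative only and not taken from the RFC:

```python
# Sketch only: configure a single-machine tf.distribute.Strategy on a tuner.
# Assumes KerasTuner 1.0.0; names below are illustrative, not from the RFC.
import kerastuner
import tensorflow as tf


def build_model(hp):
    """Illustrative hypermodel with one tunable hidden layer."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(hp.Int("units", 32, 128, step=32),
                              activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


tuner = kerastuner.tuners.RandomSearch(
    build_model,
    objective="val_loss",
    max_trials=10,
    # Single-machine strategy: each trial trains across the local devices.
    distribution_strategy=tf.distribute.MirroredStrategy(),
    directory="tuner_dir",
    project_name="tuner_example",
)

# Each trial's training then runs under the strategy, e.g.:
# tuner.search(train_dataset, validation_data=val_dataset, epochs=5)
```

As the diff notes, this covers only single-machine strategies; running each trial under a multi-worker strategy is the KerasTuner roadmap item referenced above.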

Comments (0)