Commit 4675fd8 (parent 5d97de5)

Remove a deprecation warning.

The `experimental` endpoint was moved to `tf.keras.callbacks.BackupAndRestore`.

PiperOrigin-RevId: 421651977

File tree: 4 files changed, +11 -11 lines

site/en/guide/migrate/checkpoint_saver.ipynb

Lines changed: 2 additions & 2 deletions

@@ -77,7 +77,7 @@
 "- Save continually at a certain frequency (using the `save_freq` argument).\n",
 "- Save the weights/parameters only instead of the whole model by setting `save_weights_only` to `True`.\n",
 "\n",
-"For more details, refer to the `tf.keras.callbacks.ModelCheckpoint` API docs and the *Save checkpoints during training* section in the [Save and load models](../../tutorials/keras/save_and_load.ipynb) tutorial. Learn more about the Checkpoint format in the *TF Checkpoint format* section in the [Save and load Keras models](https://www.tensorflow.org/guide/keras/save_and_serialize) guide. In addition, to add fault tolerance, you can use `tf.keras.callbacks.experimental.BackupAndRestore` or `tf.train.Checkpoint` for manual checkpointing. Learn more in the [Fault tolerance migration guide](fault_tolerance.ipynb).\n",
+"For more details, refer to the `tf.keras.callbacks.ModelCheckpoint` API docs and the *Save checkpoints during training* section in the [Save and load models](../../tutorials/keras/save_and_load.ipynb) tutorial. Learn more about the Checkpoint format in the *TF Checkpoint format* section in the [Save and load Keras models](https://www.tensorflow.org/guide/keras/save_and_serialize) guide. In addition, to add fault tolerance, you can use `tf.keras.callbacks.BackupAndRestore` or `tf.train.Checkpoint` for manual checkpointing. Learn more in the [Fault tolerance migration guide](fault_tolerance.ipynb).\n",
 "\n",
 "Keras [callbacks](https://www.tensorflow.org/guide/keras/custom_callback) are objects that are called at different points during training/evaluation/prediction in the built-in Keras `Model.fit`/`Model.evaluate`/`Model.predict` APIs. Learn more in the _Next steps_ section at the end of the guide."
 ]
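The `save_freq` semantics mentioned in the hunk above can be illustrated without TensorFlow. This is a framework-free sketch of the documented behavior (`save_freq='epoch'` saves at each epoch end; an integer saves every that many batches); the `checkpoint_batches` helper is hypothetical, not part of the Keras API:

```python
def checkpoint_batches(save_freq, batches_per_epoch, epochs):
    """Return the (epoch, batch) points at which a save would fire."""
    saves = []
    batches_seen = 0
    for epoch in range(epochs):
        for batch in range(batches_per_epoch):
            batches_seen += 1
            # Integer save_freq: save every `save_freq` batches seen overall.
            if save_freq != 'epoch' and batches_seen % save_freq == 0:
                saves.append((epoch, batch))
        # 'epoch' mode: save once at the end of each epoch.
        if save_freq == 'epoch':
            saves.append((epoch, batches_per_epoch - 1))
    return saves

print(checkpoint_batches('epoch', batches_per_epoch=4, epochs=2))  # [(0, 3), (1, 3)]
print(checkpoint_batches(3, batches_per_epoch=4, epochs=2))        # [(0, 2), (1, 1)]
```

Note that an integer `save_freq` counts batches across epoch boundaries, which is why the second call fires mid-epoch.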
@@ -273,7 +273,7 @@
 "\n",
 "You may also find the following migration-related resources useful:\n",
 "\n",
-"- The [Fault tolerance migration guide](fault_tolerance.ipynb): `tf.keras.callbacks.experimental.BackupAndRestore` for `Model.fit`, or `tf.train.Checkpoint` and `tf.train.CheckpointManager` APIs for a custom training loop\n",
+"- The [Fault tolerance migration guide](fault_tolerance.ipynb): `tf.keras.callbacks.BackupAndRestore` for `Model.fit`, or `tf.train.Checkpoint` and `tf.train.CheckpointManager` APIs for a custom training loop\n",
 "- The [Early stopping migration guide](early_stopping.ipynb): `tf.keras.callbacks.EarlyStopping` is a built-in early stopping callback\n",
 "- The [TensorBoard migration guide](tensorboard.ipynb): TensorBoard enables tracking and displaying metrics\n",
 "- The [LoggingTensorHook and StopAtStepHook to Keras callbacks migration guide](logging_stop_hook.ipynb)\n",

site/en/guide/migrate/fault_tolerance.ipynb

Lines changed: 5 additions & 5 deletions

@@ -69,7 +69,7 @@
 "\n",
 "This guide first demonstrates how to add fault tolerance to training with `tf.estimator.Estimator` in TensorFlow 1 by specifying metric saving with `tf.estimator.RunConfig`. Then, you will learn how to implement fault tolerance for training in TensorFlow 2 in two ways:\n",
 "\n",
-"- If you use the Keras `Model.fit` API, you can pass the `tf.keras.callbacks.experimental.BackupAndRestore` callback to it.\n",
+"- If you use the Keras `Model.fit` API, you can pass the `tf.keras.callbacks.BackupAndRestore` callback to it.\n",
 "- If you use a custom training loop (with `tf.GradientTape`), you can arbitrarily save checkpoints using the `tf.train.Checkpoint` and `tf.train.CheckpointManager` APIs.\n",
 "\n",
 "Both of these methods will back up and restore the training states in [checkpoint](../../guide/checkpoint.ipynb) files.\n"
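The backup-and-restore behavior the hunk above describes can be sketched without TensorFlow. This is a minimal, framework-free illustration of the idea behind `tf.keras.callbacks.BackupAndRestore` (persist the last completed epoch, resume after it on restart); the `train` helper and the JSON backup format are illustrative assumptions, not the Keras implementation:

```python
import json
import os
import tempfile

def train(backup_dir, epochs, fail_at_epoch=None):
    """Run `epochs` epochs, resuming after the last completed epoch on disk."""
    backup_file = os.path.join(backup_dir, 'backup.json')
    start = 0
    if os.path.exists(backup_file):
        with open(backup_file) as f:
            start = json.load(f)['last_completed_epoch'] + 1
    completed = []
    for epoch in range(start, epochs):
        if epoch == fail_at_epoch:
            raise RuntimeError('simulated preemption')
        # ... the epoch's training steps would run here ...
        with open(backup_file, 'w') as f:
            json.dump({'last_completed_epoch': epoch}, f)  # back up at epoch end
        completed.append(epoch)
    return completed

backup_dir = tempfile.mkdtemp()
try:
    train(backup_dir, epochs=4, fail_at_epoch=2)   # interrupted during epoch 2
except RuntimeError:
    pass
resumed = train(backup_dir, epochs=4)              # restores, picks up at epoch 2
print(resumed)  # [2, 3]
```

Epochs 0 and 1 are not rerun because their completion was recorded before the simulated preemption, which is exactly what the real callback's checkpoint file provides.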
@@ -252,7 +252,7 @@
 "source": [
 "## TensorFlow 2: Back up and restore with a callback and Model.fit\n",
 "\n",
-"In TensorFlow 2, if you use the Keras `Model.fit` API for training, you can provide the `tf.keras.callbacks.experimental.BackupAndRestore` callback to add the fault tolerance functionality.\n",
+"In TensorFlow 2, if you use the Keras `Model.fit` API for training, you can provide the `tf.keras.callbacks.BackupAndRestore` callback to add the fault tolerance functionality.\n",
 "\n",
 "To help demonstrate this, let's first start by defining a callback class that artificially throws an error during the fifth checkpoint:\n"
 ]
@@ -278,7 +278,7 @@
 "id": "AhU3VTYZoDh-"
 },
 "source": [
-"Then, define and instantiate a simple Keras model, define the loss function, call `Model.compile`, and set up a `tf.keras.callbacks.experimental.BackupAndRestore` callback that will save the checkpoints in a temporary directory:"
+"Then, define and instantiate a simple Keras model, define the loss function, call `Model.compile`, and set up a `tf.keras.callbacks.BackupAndRestore` callback that will save the checkpoints in a temporary directory:"
 ]
 },
 {
@@ -307,7 +307,7 @@
 "\n",
 "log_dir = tempfile.mkdtemp()\n",
 "\n",
-"backup_restore_callback = tf.keras.callbacks.experimental.BackupAndRestore(\n",
+"backup_restore_callback = tf.keras.callbacks.BackupAndRestore(\n",
 " backup_dir = log_dir\n",
 ")"
 ]
@@ -452,7 +452,7 @@
 "\n",
 "To learn more about fault tolerance and checkpointing in TensorFlow 2, consider the following documentation:\n",
 "\n",
-"- The `tf.keras.callbacks.experimental.BackupAndRestore` callback API docs.\n",
+"- The `tf.keras.callbacks.BackupAndRestore` callback API docs.\n",
 "- The `tf.train.Checkpoint` and `tf.train.CheckpointManager` API docs.\n",
 "- The [Training checkpoints](../../guide/checkpoint.ipynb) guide, including the _Writing checkpoints_ section.\n",
 "\n",

site/en/tutorials/distribute/multi_worker_with_keras.ipynb

Lines changed: 3 additions & 3 deletions

@@ -1119,11 +1119,11 @@
 "source": [
 "#### BackupAndRestore callback\n",
 "\n",
-"The `tf.keras.callbacks.experimental.BackupAndRestore` callback provides the fault tolerance functionality by backing up the model and current epoch number in a temporary checkpoint file under `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.\n",
+"The `tf.keras.callbacks.BackupAndRestore` callback provides the fault tolerance functionality by backing up the model and current epoch number in a temporary checkpoint file under the `backup_dir` argument to `BackupAndRestore`. This is done at the end of each epoch.\n",
 "\n",
 "Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.\n",
 "\n",
-"To use it, provide an instance of `tf.keras.callbacks.experimental.BackupAndRestore` at the `Model.fit` call.\n",
+"To use it, provide an instance of `tf.keras.callbacks.BackupAndRestore` at the `Model.fit` call.\n",
 "\n",
 "With `MultiWorkerMirroredStrategy`, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then, the training continues.\n",
 "\n",
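The epoch-granularity restore described in the hunk above (partial work from the interrupted epoch is discarded and that epoch is rerun from its first batch) can be sketched framework-free; the `run_epochs` helper and its return values are illustrative, not the Keras API:

```python
def run_epochs(total_epochs, batches_per_epoch, last_completed, fail_at=None):
    """Return the (epoch, batch) pairs actually executed in one run.

    `last_completed` plays the role of the BackupAndRestore checkpoint:
    training resumes at the epoch after it.
    """
    executed = []
    for epoch in range(last_completed + 1, total_epochs):
        for batch in range(batches_per_epoch):
            if (epoch, batch) == fail_at:
                return executed, last_completed   # partial epoch is lost
            executed.append((epoch, batch))
        last_completed = epoch                    # backed up at epoch end
    return executed, last_completed

# First run is interrupted at epoch 1, batch 1; only epoch 0 was backed up.
first, ckpt = run_epochs(3, 2, last_completed=-1, fail_at=(1, 1))
# The restarted run reads the checkpoint and redoes epoch 1 from batch 0.
second, _ = run_epochs(3, 2, last_completed=ckpt)
print(first)   # [(0, 0), (0, 1), (1, 0)]
print(second)  # [(1, 0), (1, 1), (2, 0), (2, 1)]
```

Batch (1, 0) appears in both runs: the work done in the unfinished epoch is repeated, matching the "thrown away" behavior the text describes.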
@@ -1144,7 +1144,7 @@
 "# Multi-worker training with MultiWorkerMirroredStrategy\n",
 "# and the BackupAndRestore callback.\n",
 "\n",
-"callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]\n",
+"callbacks = [tf.keras.callbacks.BackupAndRestore(backup_dir='/tmp/backup')]\n",
 "with strategy.scope():\n",
 " multi_worker_model = mnist.build_and_compile_cnn_model()\n",
 "multi_worker_model.fit(multi_worker_dataset,\n",

site/en/tutorials/distribute/parameter_server_training.ipynb

Lines changed: 1 addition & 1 deletion

@@ -476,7 +476,7 @@
 "callbacks = [\n",
 " tf.keras.callbacks.TensorBoard(log_dir=log_dir),\n",
 " tf.keras.callbacks.ModelCheckpoint(filepath=ckpt_filepath),\n",
-" tf.keras.callbacks.experimental.BackupAndRestore(backup_dir=backup_dir),\n",
+" tf.keras.callbacks.BackupAndRestore(backup_dir=backup_dir),\n",
 "]\n",
 "\n",
 "model.fit(dc, epochs=5, steps_per_epoch=20, callbacks=callbacks)"
