
Commit be569e3

Merge pull request #1590 from lamberta:nbfmt
PiperOrigin-RevId: 314176944
2 parents 5c23ab6 + 7931afd

10 files changed (+27, -34 lines)

site/en/guide/autodiff.ipynb

Lines changed: 1 addition & 2 deletions
@@ -458,8 +458,7 @@
  "\n",
  "For writing a new op, you can use `tf.RegisterGradient` to set up your own. See that page for details. (Note that the gradient registry is global, so change it with caution.)\n",
  "\n",
- "For the latter two cases, you can use `tf.custom_gradient`. Here is an example that applies `tf.clip_by_norm` to the gradient.\n",
- "\n"
+ "For the latter two cases, you can use `tf.custom_gradient`. Here is an example that applies `tf.clip_by_norm` to the gradient.\n"
  ]
 },
 {
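The added line promises an example that isn't visible in this hunk. For reference, a minimal sketch of the `tf.custom_gradient` pattern it describes (the clip value 0.5 is a placeholder, not taken from the notebook):

```python
import tensorflow as tf

@tf.custom_gradient
def clip_gradients(y):
  # Forward pass is the identity; the backward pass clips the incoming gradient.
  def backward(dy):
    return tf.clip_by_norm(dy, 0.5)
  return y, backward

v = tf.Variable(2.0)
with tf.GradientTape() as tape:
  output = clip_gradients(v * v)
# The upstream gradient 1.0 is clipped to 0.5, then chained through d(v^2)/dv.
print(tape.gradient(output, v))  # tf.Tensor(2.0, ...)
```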

site/en/guide/concrete_function.ipynb

Lines changed: 1 addition & 0 deletions
@@ -14,6 +14,7 @@
  "cell_type": "code",
  "execution_count": 0,
  "metadata": {
+ "cellView": "form",
  "colab": {},
  "colab_type": "code",
  "id": "V1-OvloqK4CX"

site/en/guide/function.ipynb

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@
  "cell_type": "code",
  "execution_count": 0,
  "metadata": {
- "cellView": "both",
+ "cellView": "form",
  "colab": {},
  "colab_type": "code",
  "id": "3jTMb1dySr3V"

site/en/guide/keras/save_and_serialize.ipynb

Lines changed: 7 additions & 10 deletions
@@ -102,7 +102,7 @@
  "id": "6soUbInX_4vy"
 },
 "source": [
- "## The short answer to saving \u0026 loading\n",
+ "## The short answer to saving and loading\n",
  "\n",
  "### Saving a Keras model\n",
  "\n",
@@ -153,7 +153,7 @@
  "id": "6dZOzKILruaJ"
 },
 "source": [
- "## Whole-model saving \u0026 loading\n",
+ "## Whole-model saving and loading\n",
  "\n",
  "You can save an entire model to a single artifact. It will include:\n",
  "\n",
@@ -376,14 +376,12 @@
  "id": "y7LUzyZVD2kE"
 },
 "source": [
- "\n",
  "#### Limitations\n",
  "\n",
  "Compared to the SavedModel format, there are two things that don't get included in the H5 file:\n",
  "\n",
- "- **External losses \u0026 metrics** added via `model.add_loss()` \u0026 `model.add_metric()` are not saved (unlike SavedModel). If you have such losses \u0026 metrics on your model and you want to resume training, you need to add these losses back yourself after loading the model. Note that this does not apply to losses/metrics created *inside* layers via `self.add_loss()` \u0026 `self.add_metric()`. As long as the layer gets loaded, these losses \u0026 metrics are kept, since they are part of the `call` method of the layer.\n",
- "- The **computation graph of custom objects** such as custom layers is not included in the saved file. At loading time, Keras will need access to the Python classes/functions of these objects in order to reconstruct the model. See [Custom objects](save_and_serialize.ipynb#custom-objects).\n",
- "\n"
+ "- **External losses and metrics** added via `model.add_loss()` and `model.add_metric()` are not saved (unlike SavedModel). If you have such losses and metrics on your model and you want to resume training, you need to add these losses back yourself after loading the model. Note that this does not apply to losses/metrics created *inside* layers via `self.add_loss()` and `self.add_metric()`. As long as the layer gets loaded, these losses and metrics are kept, since they are part of the `call` method of the layer.\n",
+ "- The **computation graph of custom objects** such as custom layers is not included in the saved file. At loading time, Keras will need access to the Python classes/functions of these objects in order to reconstruct the model. See [Custom objects](save_and_serialize.ipynb#custom-objects).\n"
  ]
 },
 {
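To illustrate the first limitation, a minimal sketch (the 0.01 regularization term and file name are hypothetical) of re-adding an external loss after loading from H5:

```python
import tensorflow as tf
from tensorflow import keras

inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)

# An external loss added on the model itself, outside any layer's call method.
model.add_loss(0.01 * tf.reduce_sum(outputs))
model.compile(optimizer='adam', loss='mse')
model.save('model.h5')

# The H5 file does not carry the external loss, so add it back after loading.
restored = keras.models.load_model('model.h5')
restored.add_loss(0.01 * tf.reduce_sum(restored.outputs[0]))
```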
@@ -722,9 +720,9 @@
  "id": "wwCxkE6RyyPy"
 },
 "source": [
- "## Saving \u0026 loading only the model's weights values\n",
+ "## Saving and loading only the model's weights values\n",
  "\n",
- "You can choose to only save \u0026 load a model's weights. This can be useful if:\n",
+ "You can choose to only save and load a model's weights. This can be useful if:\n",
  "\n",
  "- You only need the model for inference: in this case you won't need to restart training, so you don't need the compilation information or optimizer state.\n",
  "- You are doing transfer learning: in this case you will be training a new model reusing the state of a prior model, so you don't need the compilation information of the prior model.\n"
@@ -874,7 +872,7 @@
  "id": "opP1KROHwWwd"
 },
 "source": [
- "### APIs for saving weights to disk \u0026 loading them back\n",
+ "### APIs for saving weights to disk and loading them back\n",
  "\n",
  "Weights can be saved to disk by calling `model.save_weights` in the following formats:\n",
  "* TensorFlow Checkpoint \n",
@@ -1123,7 +1121,6 @@
  "id": "09aVEG1VEOZe"
 },
 "source": [
- "\n",
  "Caution: Calling `model.load_weights('pretrained_ckpt')` won't throw an error, but will *not* work as expected. If you inspect the weights, you'll see that none of the weights will have loaded. `pretrained_model.load_weights()` is the\n",
  "correct method to call.\n"
  ]

site/en/guide/keras/train_and_evaluate.ipynb

Lines changed: 7 additions & 7 deletions
@@ -76,10 +76,10 @@
 "source": [
  "This guide covers training, evaluation, and prediction (inference) of models in TensorFlow 2.0 in two broad situations:\n",
  "\n",
- "- When using built-in APIs for training \u0026 validation (such as `model.fit()`, `model.evaluate()`, `model.predict()`). This is covered in the section **\"Using built-in training \u0026 evaluation loops\"**.\n",
- "- When writing custom loops from scratch using eager execution and the `GradientTape` object. This is covered in the section **\"Writing your own training \u0026 evaluation loops from scratch\"**.\n",
+ "- When using built-in APIs for training and validation (such as `model.fit()`, `model.evaluate()`, `model.predict()`). This is covered in the section **\"Using built-in training and evaluation loops\"**.\n",
+ "- When writing custom loops from scratch using eager execution and the `GradientTape` object. This is covered in the section **\"Writing your own training and evaluation loops from scratch\"**.\n",
  "\n",
- "In general, whether you are using built-in loops or writing your own, model training \u0026 evaluation works strictly in the same way across every kind of Keras model -- Sequential models, models built with the Functional API, and models written from scratch via model subclassing.\n",
+ "In general, whether you are using built-in loops or writing your own, model training and evaluation works strictly in the same way across every kind of Keras model -- Sequential models, models built with the Functional API, and models written from scratch via model subclassing.\n",
  "\n",
  "This guide doesn't cover distributed training."
 ]
@@ -116,7 +116,7 @@
  "id": "052DbsQ175lP"
 },
 "source": [
- "## Part I: Using built-in training \u0026 evaluation loops\n",
+ "## Part I: Using built-in training and evaluation loops\n",
  "\n",
  "When passing data to the built-in training loops of a model, you should either use **Numpy arrays** (if your data is small and fits in memory) or **tf.data Dataset** objects. In the next few paragraphs, we'll use the MNIST dataset as Numpy arrays, in order to demonstrate how to use optimizers, losses, and metrics."
 ]
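As a reference for the workflow this section builds up, a minimal compile/fit sketch on Numpy arrays (random stand-in data rather than the MNIST used in the guide):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='rmsprop',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Random arrays shaped like flattened MNIST digits.
x = np.random.rand(256, 784).astype('float32')
y = np.random.randint(10, size=(256,))

model.fit(x, y, batch_size=64, epochs=2, validation_split=0.2)
model.evaluate(x, y)
```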
@@ -808,7 +808,7 @@
  "id": "Uq0TDb15DBbc"
 },
 "source": [
- "### Training \u0026 evaluation from tf.data Datasets\n",
+ "### Training and evaluation from tf.data Datasets\n",
  "\n",
  "In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers, and you've seen how to use the `validation_data` and `validation_split` arguments in `fit`, when your data is passed as Numpy arrays.\n",
  "\n",
@@ -1646,9 +1646,9 @@
  "id": "5r5ZnFry7-B7"
 },
 "source": [
- "## Part II: Writing your own training \u0026 evaluation loops from scratch\n",
+ "## Part II: Writing your own training and evaluation loops from scratch\n",
  "\n",
- "If you want lower-level control over your training \u0026 evaluation loops than what `fit()` and `evaluate()` provide, you should write your own. It's actually pretty simple! But you should be ready to have a lot more debugging to do on your own."
+ "If you want lower-level control over your training and evaluation loops than what `fit()` and `evaluate()` provide, you should write your own. It's actually pretty simple! But you should be ready to have a lot more debugging to do on your own."
 ]
 },
 {
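A minimal sketch of the from-scratch loop this part describes, assuming the `model` and `dataset` from the earlier sketches:

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

for step, (x_batch, y_batch) in enumerate(dataset):
  with tf.GradientTape() as tape:
    preds = model(x_batch, training=True)
    loss = loss_fn(y_batch, preds)
  # Differentiate the loss w.r.t. the weights and apply one optimizer step.
  grads = tape.gradient(loss, model.trainable_weights)
  optimizer.apply_gradients(zip(grads, model.trainable_weights))
  if step % 10 == 0:
    print(f'step {step}: loss = {float(loss):.4f}')
```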

site/en/guide/migrate.ipynb

Lines changed: 2 additions & 2 deletions
@@ -1679,8 +1679,8 @@
 {
  "cell_type": "markdown",
  "metadata": {
-  "id": "RBoa-xXPs4rD",
-  "colab_type": "text"
+  "colab_type": "text",
+  "id": "RBoa-xXPs4rD"
 },
 "source": [
  "Note: We do not support creating weighted metrics in Keras and converting them to weighted metrics in the Estimator API using `model_to_estimator`. You will have to create these metrics directly on the estimator spec using the `add_metrics` function."

site/en/guide/ragged_tensor.ipynb

Lines changed: 4 additions & 6 deletions
@@ -581,7 +581,7 @@
  "marker = tf.fill([queries.nrows(), 1], '#')\n",
  "padded = tf.concat([marker, queries, marker], axis=1) # ②\n",
  "\n",
- "# Build word bigrams \u0026 look up embeddings.\n",
+ "# Build word bigrams & look up embeddings.\n",
  "bigrams = tf.strings.join([padded[:, :-1], padded[:, 1:]], separator='+') # ③\n",
  "\n",
  "bigram_buckets = tf.strings.to_hash_bucket_fast(bigrams, num_buckets)\n",
@@ -1175,8 +1175,7 @@
 "source": [
  "#### Concrete functions\n",
  "\n",
- "[Concrete functions](https://www.tensorflow.org/guide/concrete_function) encapsulate individual traced graphs that are built by `tf.function`. Starting with TensorFlow 2.3 (and in `tf-nightly`), ragged tensors can be used transparently with concrete functions.\n",
- "\n"
+ "[Concrete functions](https://www.tensorflow.org/guide/concrete_function) encapsulate individual traced graphs that are built by `tf.function`. Starting with TensorFlow 2.3 (and in `tf-nightly`), ragged tensors can be used transparently with concrete functions.\n"
  ]
 },
 {
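A minimal sketch of the TF 2.3+ behavior described above: tracing a `tf.function` with a ragged argument and calling the resulting concrete function:

```python
import tensorflow as tf

@tf.function
def increment(x):
  return x + 1

rt = tf.ragged.constant([[1, 2], [3]])
cf = increment.get_concrete_function(rt)  # traced with a RaggedTensorSpec
print(cf(rt))  # <tf.RaggedTensor [[2, 3], [4]]>
```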
@@ -1242,8 +1241,7 @@
 "source": [
  "### SavedModels\n",
  "\n",
- "A [SavedModel](https://www.tensorflow.org/guide/saved_model) is a serialized TensorFlow program, including both weights and computation. It can be built from a Keras model or from a custom model. In either case, ragged tensors can be used transparently with the functions and methods defined by a SavedModel.\n",
- "\n"
+ "A [SavedModel](https://www.tensorflow.org/guide/saved_model) is a serialized TensorFlow program, including both weights and computation. It can be built from a Keras model or from a custom model. In either case, ragged tensors can be used transparently with the functions and methods defined by a SavedModel.\n"
  ]
 },
 {
@@ -1393,7 +1391,7 @@
  "\n",
  "Ragged tensors overload the same set of operators as normal `Tensor`s: the unary\n",
  "operators `-`, `~`, and `abs()`; and the binary operators `+`, `-`, `*`, `/`,\n",
- "`//`, `%`, `**`, `\u0026`, `|`, `^`, `==`, `<`, `<=`, `>`, and `>=`.\n"
+ "`//`, `%`, `**`, `&`, `|`, `^`, `==`, `<`, `<=`, `>`, and `>=`.\n"
  ]
 },
 {
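For reference, a short demonstration of a few of the listed operators on ragged tensors:

```python
import tensorflow as tf

x = tf.ragged.constant([[1, 2], [3]])
y = tf.ragged.constant([[10, 20], [30]])
print(x + y)  # <tf.RaggedTensor [[11, 22], [33]]>
print(x * 2)  # <tf.RaggedTensor [[2, 4], [6]]>
print(x < y)  # <tf.RaggedTensor [[True, True], [True]]>
```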

site/en/guide/saved_model.ipynb

Lines changed: 2 additions & 2 deletions
@@ -321,7 +321,7 @@
  "nohup tensorflow_model_server \\\n",
  " --rest_api_port=8501 \\\n",
  " --model_name=mobilenet \\\n",
- " --model_base_path=\"/tmp/mobilenet\" >server.log 2>\u00261\n",
+ " --model_base_path=\"/tmp/mobilenet\" >server.log 2>&1\n",
  "```\n",
  "\n",
  " Then send a request.\n",
@@ -917,7 +917,7 @@
  "SavedModelBundle bundle;\n",
  "...\n",
  "LoadSavedModel(session_options, run_options, export_dir, {kSavedModelTagTrain},\n",
- " \u0026bundle);\n",
+ " &bundle);\n",
  "```"
  ]
 },

site/en/guide/tpu.ipynb

Lines changed: 1 addition & 2 deletions
@@ -467,8 +467,7 @@
  "### Improving performance by multiple steps within `tf.function`\n",
  "The performance can be improved by running multiple steps within a `tf.function`. This is achieved by wrapping the `strategy.run` call with a `tf.range` inside `tf.function`; AutoGraph will convert it to a `tf.while_loop` on the TPU worker.\n",
  "\n",
- "Although this improves performance, there are tradeoffs compared with running a single step inside `tf.function`. Running multiple steps in a `tf.function` is less flexible: you cannot run things eagerly or run arbitrary Python code within the steps.\n",
- "\n"
+ "Although this improves performance, there are tradeoffs compared with running a single step inside `tf.function`. Running multiple steps in a `tf.function` is less flexible: you cannot run things eagerly or run arbitrary Python code within the steps.\n"
  ]
 },
 {
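A minimal sketch of the pattern described above; `strategy`, `train_step`, and `train_iterator` stand in for objects defined earlier in that guide:

```python
import tensorflow as tf

@tf.function
def train_multiple_steps(iterator, steps):
  # AutoGraph converts this tf.range loop into a tf.while_loop on the TPU worker.
  for _ in tf.range(steps):
    strategy.run(train_step, args=(next(iterator),))

# Passing the step count as a tensor avoids retracing for each new value.
train_multiple_steps(train_iterator, tf.convert_to_tensor(100))
```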

site/en/guide/variable.ipynb

Lines changed: 1 addition & 2 deletions
@@ -197,7 +197,6 @@
  "id": "qbLCcG6Pc29Y"
 },
 "source": [
- "\n",
  "As noted above, variables are backed by tensors. You can reassign the tensor using `tf.Variable.assign`. Calling `assign` does not (usually) allocate a new tensor; instead, the existing tensor's memory is reused."
  ]
 },
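A short sketch of the `assign` behavior described above:

```python
import tensorflow as tf

a = tf.Variable([2.0, 3.0])
a.assign([1.0, 2.0])  # reuses the existing tensor's memory

# Assigning a differently shaped value fails instead of reallocating.
try:
  a.assign([1.0, 2.0, 3.0])
except ValueError as e:
  print(f'ValueError: {e}')
```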
@@ -415,7 +414,7 @@
 "metadata": {
  "colab": {
   "collapsed_sections": [],
-  "name": "intro_to_variables.ipynb",
+  "name": "variable.ipynb",
   "private_outputs": true,
   "provenance": [],
   "toc_visible": true
