
Commit 9efadf0

Fix notebook failure with Keras 3.
PiperOrigin-RevId: 625072490
1 parent e99aab9 commit 9efadf0

File tree

1 file changed: +32 −15 lines changed


site/en/tutorials/generative/autoencoder.ipynb

Lines changed: 32 additions & 15 deletions
@@ -6,9 +6,16 @@
     "id": "Ndo4ERqnwQOU"
    },
    "source": [
-     "##### Copyright 2020 The TensorFlow Authors."
+     "##### Copyright 2024 The TensorFlow Authors."
    ]
   },
+  {
+   "metadata": {
+    "id": "13rwRG5Jec7n"
+   },
+   "cell_type": "markdown",
+   "source": []
+  },
   {
    "cell_type": "code",
    "execution_count": null,
@@ -76,7 +83,7 @@
    "source": [
     "This tutorial introduces autoencoders with three examples: the basics, image denoising, and anomaly detection.\n",
     "\n",
-     "An autoencoder is a special type of neural network that is trained to copy its input to its output. For example, given an image of a handwritten digit, an autoencoder first encodes the image into a lower dimensional latent representation, then decodes the latent representation back to an image. An autoencoder learns to compress the data while minimizing the reconstruction error. \n",
+     "An autoencoder is a special type of neural network that is trained to copy its input to its output. For example, given an image of a handwritten digit, an autoencoder first encodes the image into a lower dimensional latent representation, then decodes the latent representation back to an image. An autoencoder learns to compress the data while minimizing the reconstruction error.\n",
     "\n",
     "To learn more about autoencoders, please consider reading chapter 14 from [Deep Learning](https://www.deeplearningbook.org/) by Ian Goodfellow, Yoshua Bengio, and Aaron Courville."
    ]
@@ -117,7 +124,7 @@
   },
   "source": [
    "## Load the dataset\n",
-    "To start, you will train the basic autoencoder using the Fashion MNIST dataset. Each image in this dataset is 28x28 pixels. "
+    "To start, you will train the basic autoencoder using the Fashion MNIST dataset. Each image in this dataset is 28x28 pixels."
   ]
  },
  {
@@ -169,7 +176,7 @@
    " layers.Dense(latent_dim, activation='relu'),\n",
    " ])\n",
    " self.decoder = tf.keras.Sequential([\n",
-    " layers.Dense(tf.math.reduce_prod(shape), activation='sigmoid'),\n",
+    " layers.Dense(tf.math.reduce_prod(shape).numpy(), activation='sigmoid'),\n",
    " layers.Reshape(shape)\n",
    " ])\n",
    "\n",
@@ -331,8 +338,8 @@
   "outputs": [],
   "source": [
    "noise_factor = 0.2\n",
-    "x_train_noisy = x_train + noise_factor * tf.random.normal(shape=x_train.shape) \n",
-    "x_test_noisy = x_test + noise_factor * tf.random.normal(shape=x_test.shape) \n",
+    "x_train_noisy = x_train + noise_factor * tf.random.normal(shape=x_train.shape)\n",
+    "x_test_noisy = x_test + noise_factor * tf.random.normal(shape=x_test.shape)\n",
    "\n",
    "x_train_noisy = tf.clip_by_value(x_train_noisy, clip_value_min=0., clip_value_max=1.)\n",
    "x_test_noisy = tf.clip_by_value(x_test_noisy, clip_value_min=0., clip_value_max=1.)"
@@ -657,7 +664,7 @@
    "id": "wVcTBDo-CqFS"
   },
   "source": [
-    "Plot a normal ECG. "
+    "Plot a normal ECG."
   ]
  },
  {
@@ -721,12 +728,12 @@
    " layers.Dense(32, activation=\"relu\"),\n",
    " layers.Dense(16, activation=\"relu\"),\n",
    " layers.Dense(8, activation=\"relu\")])\n",
-    " \n",
+    "\n",
    " self.decoder = tf.keras.Sequential([\n",
    " layers.Dense(16, activation=\"relu\"),\n",
    " layers.Dense(32, activation=\"relu\"),\n",
    " layers.Dense(140, activation=\"sigmoid\")])\n",
-    " \n",
+    "\n",
    " def call(self, x):\n",
    " encoded = self.encoder(x)\n",
    " decoded = self.decoder(encoded)\n",
@@ -763,8 +770,8 @@
   },
   "outputs": [],
   "source": [
-    "history = autoencoder.fit(normal_train_data, normal_train_data, \n",
-    " epochs=20, \n",
+    "history = autoencoder.fit(normal_train_data, normal_train_data,\n",
+    " epochs=20,\n",
    " batch_size=512,\n",
    " validation_data=(test_data, test_data),\n",
    " shuffle=True)"
@@ -908,7 +915,7 @@
    "id": "uEGlA1Be50Nj"
   },
   "source": [
-    "Note: There are other strategies you could use to select a threshold value above which test examples should be classified as anomalous, the correct approach will depend on your dataset. You can learn more with the links at the end of this tutorial. "
+    "Note: There are other strategies you could use to select a threshold value above which test examples should be classified as anomalous, the correct approach will depend on your dataset. You can learn more with the links at the end of this tutorial."
   ]
  },
  {
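The note in this hunk mentions alternative thresholding strategies without naming one. As an illustration, a simple and common strategy is to flag examples whose reconstruction error exceeds the mean training loss by more than one standard deviation. The sketch below uses made-up error values, not the notebook's data:

```python
import numpy as np

# Hypothetical reconstruction errors on normal training examples.
train_loss = np.array([0.010, 0.015, 0.020, 0.012, 0.030, 0.018])

# One simple strategy: threshold at the mean plus one standard
# deviation of the training reconstruction error.
threshold = np.mean(train_loss) + np.std(train_loss)

# Examples with error above the threshold are flagged as anomalous.
anomalous = train_loss > threshold
print(anomalous.sum())  # 1
```

Other choices (e.g. picking the threshold from a precision–recall trade-off on a validation set) may suit other datasets better, as the note says.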
@@ -917,7 +924,7 @@
    "id": "zpLSDAeb51D_"
   },
   "source": [
-    "If you examine the reconstruction error for the anomalous examples in the test set, you'll notice most have greater reconstruction error than the threshold. By varing the threshold, you can adjust the [precision](https://developers.google.com/machine-learning/glossary#precision) and [recall](https://developers.google.com/machine-learning/glossary#recall) of your classifier. "
+    "If you examine the reconstruction error for the anomalous examples in the test set, you'll notice most have greater reconstruction error than the threshold. By varing the threshold, you can adjust the [precision](https://developers.google.com/machine-learning/glossary#precision) and [recall](https://developers.google.com/machine-learning/glossary#recall) of your classifier."
   ]
  },
  {
@@ -992,8 +999,18 @@
  "metadata": {
   "accelerator": "GPU",
   "colab": {
-    "collapsed_sections": [],
-    "name": "autoencoder.ipynb",
+    "gpuType": "T4",
+    "private_outputs": true,
+    "provenance": [
+     {
+      "file_id": "17gKB2bKebV2DzoYIMFzyEXA5uDnwWOvT",
+      "timestamp": 1712793165979
+     },
+     {
+      "file_id": "https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/autoencoder.ipynb",
+      "timestamp": 1712792176273
+     }
+    ],
    "toc_visible": true
   },
   "kernelspec": {
