102 | 102 | "np.set_printoptions(precision=3, suppress=True)\n",
103 | 103 | "\n",
104 | 104 | "import tensorflow as tf\n",
105 |     | - "from tensorflow.keras import layers\n",
106 |     | - "from tensorflow.keras.layers.experimental import preprocessing"
    | 105 | + "from tensorflow.keras import layers"
107 | 106 | ]
108 | 107 | },
109 | 108 | {

279 | 278 | "id": "yCrB2Jd-U0Vt"
280 | 279 | },
281 | 280 | "source": [
282 |     | - "It's good practice to normalize the inputs to your model. The `experimental.preprocessing` layers provide a convenient way to build this normalization into your model. \n",
    | 281 | + "It's good practice to normalize the inputs to your model. The Keras preprocessing layers provide a convenient way to build this normalization into your model. \n",
283 | 282 | "\n",
284 | 283 | "The layer will precompute the mean and variance of each column, and use these to normalize the data.\n",
285 | 284 | "\n",

294 | 293 | },
295 | 294 | "outputs": [],
296 | 295 | "source": [
297 |     | - "normalize = preprocessing.Normalization()"
    | 296 | + "normalize = layers.Normalization()"
298 | 297 | ]
299 | 298 | },
300 | 299 | {

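What the `Normalization` layer computes can be sketched in plain NumPy: `adapt` precomputes the per-column mean and variance from the data, and calling the layer applies `(x - mean) / sqrt(variance)`. This is an illustrative hand-rolled sketch, not the Keras implementation:

```python
import numpy as np

class NormalizationSketch:
    """Plain-NumPy sketch of tf.keras.layers.Normalization:
    adapt() stores per-column mean and variance, and calling the
    object applies (x - mean) / sqrt(variance)."""

    def adapt(self, data):
        self.mean = data.mean(axis=0)
        self.variance = data.var(axis=0)

    def __call__(self, x):
        return (x - self.mean) / np.sqrt(self.variance)

norm = NormalizationSketch()
data = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
norm.adapt(data)
normalized = norm(data)
print(normalized.mean(axis=0))  # ~[0. 0.]
print(normalized.std(axis=0))   # ~[1. 1.]
```

After adaptation, each column has roughly zero mean and unit variance regardless of its original scale, which is exactly why the columns above (with ranges 1-3 and 100-300) become comparable.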
397 | 396 | "source": [
398 | 397 | "Because of the different data types and ranges you can't simply stack the features into a NumPy array and pass it to a `keras.Sequential` model. Each column needs to be handled individually. \n",
399 | 398 | "\n",
400 |     | - "As one option, you could preprocess your data offline (using any tool you like) to convert categorical columns to numeric columns, then pass the processed output to your TensorFlow model. The disadvantage to that approach is that if you save and export your model the preprocessing is not saved with it. The `experimental.preprocessing` layers avoid this problem because they're part of the model.\n"
    | 399 | + "As one option, you could preprocess your data offline (using any tool you like) to convert categorical columns to numeric columns, then pass the processed output to your TensorFlow model. The disadvantage of that approach is that if you save and export your model, the preprocessing is not saved with it. The Keras preprocessing layers avoid this problem because they're part of the model.\n"
401 | 400 | ]
402 | 401 | },
403 | 402 | {

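The offline option described above can be as simple as mapping each categorical column to integer codes before training. A minimal sketch (hypothetical helper, not part of any library) shows the pitfall: the code table lives outside the model, so exporting the model alone loses it.

```python
# Offline preprocessing sketch: convert a categorical column to
# integer codes before handing the data to a model. The `codes`
# table is NOT part of the model, so it must be shipped separately
# or the exported model can't interpret raw strings.
def encode_column(values):
    vocab = sorted(set(values))
    codes = {v: i for i, v in enumerate(vocab)}
    return [codes[v] for v in values], codes

encoded, codes = encode_column(["male", "female", "female", "male"])
print(encoded)  # [1, 0, 0, 1]
print(codes)    # {'female': 0, 'male': 1}
```

Building the equivalent mapping into the model as a preprocessing layer removes this bookkeeping entirely.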
504 | 503 | "                  if input.dtype==tf.float32}\n",
505 | 504 | "\n",
506 | 505 | "x = layers.Concatenate()(list(numeric_inputs.values()))\n",
507 |     | - "norm = preprocessing.Normalization()\n",
    | 506 | + "norm = layers.Normalization()\n",
508 | 507 | "norm.adapt(np.array(titanic[numeric_inputs.keys()]))\n",
509 | 508 | "all_numeric_inputs = norm(x)\n",
510 | 509 | "\n",

537 | 536 | "id": "r0Hryylyosfm"
538 | 537 | },
539 | 538 | "source": [
540 |     | - "For the string inputs use the `preprocessing.StringLookup` function to map from strings to integer indices in a vocabulary. Next, use `preprocessing.CategoryEncoding` to convert the indexes into `float32` data appropriate for the model. \n",
    | 539 | + "For the string inputs use the `tf.keras.layers.StringLookup` function to map from strings to integer indices in a vocabulary. Next, use `tf.keras.layers.CategoryEncoding` to convert the indexes into `float32` data appropriate for the model. \n",
541 | 540 | "\n",
542 |     | - "The default settings for the `preprocessing.CategoryEncoding` layer create a one-hot vector for each input. A `layers.Embedding` would also work. See the [preprocessing layers guide](https://www.tensorflow.org/guide/keras/preprocessing_layers#quick_recipes) and [tutorial](../structured_data/preprocessing_layers.ipynb) for more on this topic."
    | 541 | + "The default settings for the `tf.keras.layers.CategoryEncoding` layer create a one-hot vector for each input. A `layers.Embedding` would also work. See the [preprocessing layers guide](https://www.tensorflow.org/guide/keras/preprocessing_layers#quick_recipes) and [tutorial](../structured_data/preprocessing_layers.ipynb) for more on this topic."
543 | 542 | ]
544 | 543 | },
545 | 544 | {

554 | 553 | "  if input.dtype == tf.float32:\n",
555 | 554 | "    continue\n",
556 | 555 | "  \n",
557 |     | - "  lookup = preprocessing.StringLookup(vocabulary=np.unique(titanic_features[name]))\n",
558 |     | - "  one_hot = preprocessing.CategoryEncoding(max_tokens=lookup.vocab_size())\n",
    | 556 | + "  lookup = layers.StringLookup(vocabulary=np.unique(titanic_features[name]))\n",
    | 557 | + "  one_hot = layers.CategoryEncoding(num_tokens=lookup.vocabulary_size())\n",
559 | 558 | "\n",
560 | 559 | "  x = lookup(input)\n",
561 | 560 | "  x = one_hot(x)\n",
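The lookup-then-one-hot pipeline in the hunk above can be sketched without TensorFlow. This hand-rolled version (illustrative only, not the Keras implementation) mirrors the two steps: `StringLookup` maps strings to indices, reserving index 0 for out-of-vocabulary tokens as the Keras layer does by default, and `CategoryEncoding` turns each index into a one-hot `float32` row:

```python
import numpy as np

def make_lookup(vocab):
    # Keras StringLookup reserves index 0 for out-of-vocabulary
    # tokens by default; known tokens start at index 1.
    table = {token: i + 1 for i, token in enumerate(vocab)}
    return lambda values: np.array([table.get(v, 0) for v in values])

def one_hot(indices, num_tokens):
    # One row per input with a single 1.0 at the token's index,
    # mimicking CategoryEncoding's one-hot output for scalar inputs.
    out = np.zeros((len(indices), num_tokens), dtype=np.float32)
    out[np.arange(len(indices)), indices] = 1.0
    return out

vocab = np.unique(["male", "female", "male"])   # ['female', 'male']
lookup = make_lookup(vocab)
indices = lookup(["male", "female", "unseen"])  # [2, 1, 0]
encoded = one_hot(indices, num_tokens=len(vocab) + 1)
print(encoded)
```

The extra `+ 1` in `num_tokens` accounts for the reserved out-of-vocabulary slot, which is why the Keras code sizes the encoding from the lookup layer's vocabulary size rather than from the raw vocabulary.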