@@ -119,7 +119,7 @@ sprintf("Features std: %.2f", sd(normalized_data))
 
 
 `adapt()` takes either an array or a
-`tf.data.Dataset`. In the case of `layer_string_lookup()` and
+`tf_dataset`. In the case of `layer_string_lookup()` and
 `layer_text_vectorization()`, you can also pass a character vector:
 
 
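(Illustrative aside, not part of the diff: the hunk ends just before the vignette's own example. A minimal sketch of adapting a `layer_string_lookup()` layer from a plain character vector, with made-up data, might look like this.)

``` r
library(keras)

# Hypothetical vocabulary values supplied as a plain character vector
sizes <- c("small", "medium", "large", "medium", "small")

# Create the lookup layer and let adapt() learn the vocabulary from the vector
lookup <- layer_string_lookup()
adapt(lookup, sizes)

# The adapted layer maps strings to the integer indices it learned
lookup(c("large", "small"))
```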
@@ -186,7 +186,7 @@ If you're training on GPU, this is the best option for the
 `layer_normalization()` layer, and for all image preprocessing and data
 augmentation layers.
 
-**Option 2:** apply it to your `tf.data.Dataset`, so as to obtain a dataset that yields
+**Option 2:** apply it to your `tf_dataset`, so as to obtain a dataset that yields
 batches of preprocessed data, like this:
 
 ``` {r, eval = FALSE}
@@ -230,7 +230,7 @@ or to `[0, 1]`, etc. This is especially powerful if you're exporting your model
 to another runtime, such as TensorFlow.js: you won't have to reimplement your
 preprocessing pipeline in JavaScript.
 
-If you initially put your preprocessing layers in your `tf.data` pipeline,
+If you initially put your preprocessing layers in your `tf_dataset` pipeline,
 you can export an inference model that packages the preprocessing.
 Simply instantiate a new model that chains
 your preprocessing layers and your training model:
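(Illustrative aside, not part of the diff: the code this sentence introduces sits just past the hunk boundary. A rough sketch of chaining a preprocessing layer and a trained model into one exportable inference model, where `preprocessing_layer`, `training_model`, and `input_shape` stand in for whatever the vignette defines, could look like this.)

``` r
library(keras)

# New symbolic input matching the raw (unpreprocessed) data shape
inputs <- layer_input(shape = input_shape)

# Chain: raw input -> preprocessing layer -> trained model
outputs <- inputs %>%
  preprocessing_layer() %>%
  training_model()

# The resulting model packages preprocessing together with inference
inference_model <- keras_model(inputs, outputs)
```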
@@ -268,7 +268,7 @@ c(c(x_train, y_train), ...) %<-% dataset_cifar10()
 input_shape <- dim(x_train)[-1] # drop batch dim
 classes <- 10
 
-# Create a tf.data pipeline of augmented images (and their labels)
+# Create a tf_dataset pipeline of augmented images (and their labels)
 train_dataset <- tensor_slices_dataset(list(x_train, y_train)) %>%
   dataset_batch(16) %>%
   dataset_map( ~ list(data_augmentation(.x), .y)) # see ?purrr::map to learn about ~ notation
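(Illustrative aside, not part of the diff: `data_augmentation` is defined elsewhere in the vignette. One plausible way to build such a stage from the built-in image augmentation layers, purely as a sketch, is shown below.)

``` r
library(keras)

# A hypothetical augmentation stage: a small sequential model of random
# image transformations, applied to each batch by dataset_map() above
data_augmentation <- keras_model_sequential() %>%
  layer_random_flip("horizontal") %>%
  layer_random_rotation(factor = 0.1)
```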