
Commit 769cbfc

Updated load data docs with new api

Replaced tf.data.experimental.snapshot with tf.data.Dataset.snapshot
1 parent e41fd5c commit 769cbfc

File tree

1 file changed: +5 −5 lines


site/en/tutorials/load_data/csv.ipynb

Lines changed: 5 additions & 5 deletions
@@ -1066,7 +1066,7 @@
 "source": [
 "There is some overhead to parsing the CSV data. For small models this can be the bottleneck in training.\n",
 "\n",
-"Depending on your use case, it may be a good idea to use `Dataset.cache` or `tf.data.experimental.snapshot`, so that the CSV data is only parsed on the first epoch.\n",
+"Depending on your use case, it may be a good idea to use `Dataset.cache` or `tf.data.Dataset.snapshot`, so that the CSV data is only parsed on the first epoch.\n",
 "\n",
 "The main difference between the `cache` and `snapshot` methods is that `cache` files can only be used by the TensorFlow process that created them, but `snapshot` files can be read by other processes.\n",
 "\n",
@@ -1120,7 +1120,7 @@
 "id": "wN7uUBjmgNZ9"
 },
 "source": [
-"Note: The `tf.data.experimental.snapshot` files are meant for *temporary* storage of a dataset while in use. This is *not* a format for long term storage. The file format is considered an internal detail, and not guaranteed between TensorFlow versions."
+"Note: The `tf.data.Dataset.snapshot` files are meant for *temporary* storage of a dataset while in use. This is *not* a format for long term storage. The file format is considered an internal detail, and not guaranteed between TensorFlow versions."
 ]
 },
 {
@@ -1132,7 +1132,7 @@
 "outputs": [],
 "source": [
 "%%time\n",
-"snapshot = tf.data.experimental.snapshot('titanic.tfsnap')\n",
+"snapshot = tf.data.Dataset.snapshot('titanic.tfsnap')\n",
 "snapshotting = traffic_volume_csv_gz_ds.apply(snapshot).shuffle(1000)\n",
 "\n",
 "for i, (batch, label) in enumerate(snapshotting.shuffle(1000).repeat(20)):\n",
@@ -1147,7 +1147,7 @@
 "id": "fUSSegnMCGRz"
 },
 "source": [
-"If your data loading is slowed by loading CSV files, and `Dataset.cache` and `tf.data.experimental.snapshot` are insufficient for your use case, consider re-encoding your data into a more streamlined format."
+"If your data loading is slowed by loading CSV files, and `Dataset.cache` and `tf.data.Dataset.snapshot` are insufficient for your use case, consider re-encoding your data into a more streamlined format."
 ]
 },
 {
@@ -1862,7 +1862,7 @@
 "source": [
 "For another example of increasing CSV performance by using large batches, refer to the [Overfit and underfit tutorial](../keras/overfit_and_underfit.ipynb).\n",
 "\n",
-"This sort of approach may work, but consider other options like `Dataset.cache` and `tf.data.Dataset.snapshot`, or re-encoding your data into a more streamlined format."
+"This sort of approach may work, but consider other options like `Dataset.cache` and `tf.data.Dataset.snapshot`, or re-encoding your data into a more streamlined format."
 ]
 }
 ],
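The hunks above repeatedly suggest "re-encoding your data into a more streamlined format" without naming one; TFRecord is one common choice in the TensorFlow ecosystem. A hedged sketch (the feature name `"x"` and the values are invented for illustration, not taken from the tutorial):

```python
import os
import tempfile

import tensorflow as tf

path = os.path.join(tempfile.mkdtemp(), "demo.tfrecord")  # illustrative path

# Write a few records once, in TensorFlow's binary record format.
with tf.io.TFRecordWriter(path) as writer:
    for v in [1.0, 2.0, 3.0]:
        example = tf.train.Example(features=tf.train.Features(feature={
            "x": tf.train.Feature(float_list=tf.train.FloatList(value=[v])),
        }))
        writer.write(example.SerializeToString())

# Read back: no CSV parsing cost on any epoch.
spec = {"x": tf.io.FixedLenFeature([1], tf.float32)}
ds = tf.data.TFRecordDataset(path).map(
    lambda record: tf.io.parse_single_example(record, spec))

print([float(e["x"][0]) for e in ds])  # [1.0, 2.0, 3.0]
```

Unlike `cache` and `snapshot` files, a format like this is intended for long-term storage, which is the gap the note in the second hunk calls out.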

0 commit comments
