
Commit ad8238b

Merge pull request #2071 from chunduriv:patch-7
PiperOrigin-RevId: 449082127
2 parents 0d7d7c6 + a2d07f3 commit ad8238b

File tree: 1 file changed, +20 −21 lines

site/en/guide/sparse_tensor.ipynb

Lines changed: 20 additions & 21 deletions
@@ -79,17 +79,17 @@
 "source": [
 "## Sparse tensors in TensorFlow\n",
 "\n",
-"TensorFlow represents sparse tensors through the `tf.SparseTensor` object. Currently, sparse tensors in TensorFlow are encoded using the coordinate list (COO) format. This encoding format is optimized for hyper-sparse matrices such as embeddings.\n",
+"TensorFlow represents sparse tensors through the `tf.sparse.SparseTensor` object. Currently, sparse tensors in TensorFlow are encoded using the coordinate list (COO) format. This encoding format is optimized for hyper-sparse matrices such as embeddings.\n",
 "\n",
 "The COO encoding for sparse tensors is comprised of:\n",
 "\n",
 " * `values`: A 1D tensor with shape `[N]` containing all nonzero values.\n",
 " * `indices`: A 2D tensor with shape `[N, rank]`, containing the indices of the nonzero values.\n",
 " * `dense_shape`: A 1D tensor with shape `[rank]`, specifying the shape of the tensor.\n",
 "\n",
-"A ***nonzero*** value in the context of a `tf.SparseTensor` is a value that's not explicitly encoded. It is possible to explicitly include zero values in the `values` of a COO sparse matrix, but these \"explicit zeros\" are generally not included when referring to nonzero values in a sparse tensor.\n",
+"A ***nonzero*** value in the context of a `tf.sparse.SparseTensor` is a value that's not explicitly encoded. It is possible to explicitly include zero values in the `values` of a COO sparse matrix, but these \"explicit zeros\" are generally not included when referring to nonzero values in a sparse tensor.\n",
 "\n",
-"Note: `tf.SparseTensor` does not require that indices/values be in any particular order, but several ops assume that they're in row-major order. Use `tf.sparse.reorder` to create a copy of the sparse tensor that is sorted in the canonical row-major order. "
+"Note: `tf.sparse.SparseTensor` does not require that indices/values be in any particular order, but several ops assume that they're in row-major order. Use `tf.sparse.reorder` to create a copy of the sparse tensor that is sorted in the canonical row-major order. "
 ]
 },
 {
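The three COO components described in this hunk (`values`, `indices`, `dense_shape`) can be sketched in plain Python, with no TensorFlow dependency. `extract_coo` and its return names are illustrative, not part of any TensorFlow API:

```python
# Minimal pure-Python sketch of the COO encoding described above.
def extract_coo(dense):
    """Return (indices, values, dense_shape) for a 2D nested list."""
    indices, values = [], []
    for i, row in enumerate(dense):
        for j, v in enumerate(row):
            if v != 0:  # only nonzero entries are encoded
                indices.append([i, j])
                values.append(v)
    dense_shape = [len(dense), len(dense[0])]
    return indices, values, dense_shape

indices, values, dense_shape = extract_coo([[0, 10], [0, 0], [20, 0]])
# indices == [[0, 1], [2, 0]], values == [10, 20], dense_shape == [3, 2]
```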
@@ -98,7 +98,7 @@
 "id": "6Aq7ruwlyz79"
 },
 "source": [
-"## Creating a `tf.SparseTensor`\n",
+"## Creating a `tf.sparse.SparseTensor`\n",
 "\n",
 "Construct sparse tensors by directly specifying their `values`, `indices`, and `dense_shape`."
 ]
@@ -122,7 +122,7 @@
 },
 "outputs": [],
 "source": [
-"st1 = tf.SparseTensor(indices=[[0, 3], [2, 4]],\n",
+"st1 = tf.sparse.SparseTensor(indices=[[0, 3], [2, 4]],\n",
 " values=[10, 20],\n",
 " dense_shape=[3, 10])"
 ]
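Densifying the COO triple used for `st1` in this hunk can be sketched in plain Python, mirroring what `tf.sparse.to_dense` computes. `coo_to_dense` is an illustrative helper, not TensorFlow API:

```python
# Pure-Python sketch: rebuild a dense 2D matrix from a COO triple.
def coo_to_dense(indices, values, dense_shape):
    rows, cols = dense_shape
    out = [[0] * cols for _ in range(rows)]
    for (i, j), v in zip(indices, values):
        out[i][j] = v
    return out

# Same components as st1 above: a 3x10 matrix with two nonzeros.
dense = coo_to_dense([[0, 3], [2, 4]], [10, 20], [3, 10])
# dense[0][3] == 10 and dense[2][4] == 20; every other entry is 0
```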
@@ -252,11 +252,11 @@
 },
 "outputs": [],
 "source": [
-"st_a = tf.SparseTensor(indices=[[0, 2], [3, 4]],\n",
+"st_a = tf.sparse.SparseTensor(indices=[[0, 2], [3, 4]],\n",
 " values=[31, 2], \n",
 " dense_shape=[4, 10])\n",
 "\n",
-"st_b = tf.SparseTensor(indices=[[0, 2], [7, 0]],\n",
+"st_b = tf.sparse.SparseTensor(indices=[[0, 2], [7, 0]],\n",
 " values=[56, 38],\n",
 " dense_shape=[4, 10])\n",
 "\n",
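The elementwise sparse addition that `tf.sparse.add` performs on tensors like `st_a` and `st_b` can be sketched in plain Python using dicts keyed by coordinate. `sparse_add` is an illustrative name, not TensorFlow API:

```python
# Pure-Python sketch of elementwise sparse addition: entries present in
# either operand are merged; shared coordinates have their values summed.
def sparse_add(a, b):
    """a, b: dicts mapping (row, col) -> value; returns the merged dict."""
    out = dict(a)
    for ij, v in b.items():
        out[ij] = out.get(ij, 0) + v
    return out

# Same nonzero structure as st_a and st_b above.
st_a = {(0, 2): 31, (3, 4): 2}
st_b = {(0, 2): 56, (7, 0): 38}
total = sparse_add(st_a, st_b)
# total == {(0, 2): 87, (3, 4): 2, (7, 0): 38}
```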
@@ -282,7 +282,7 @@
 },
 "outputs": [],
 "source": [
-"st_c = tf.SparseTensor(indices=([0, 1], [1, 0], [1, 1]),\n",
+"st_c = tf.sparse.SparseTensor(indices=([0, 1], [1, 0], [1, 1]),\n",
 " values=[13, 15, 17],\n",
 " dense_shape=(2,2))\n",
 "\n",
@@ -309,14 +309,14 @@
 },
 "outputs": [],
 "source": [
-"sparse_pattern_A = tf.SparseTensor(indices = [[2,4], [3,3], [3,4], [4,3], [4,4], [5,4]],\n",
+"sparse_pattern_A = tf.sparse.SparseTensor(indices = [[2,4], [3,3], [3,4], [4,3], [4,4], [5,4]],\n",
 " values = [1,1,1,1,1,1],\n",
 " dense_shape = [8,5])\n",
-"sparse_pattern_B = tf.SparseTensor(indices = [[0,2], [1,1], [1,3], [2,0], [2,4], [2,5], [3,5], \n",
+"sparse_pattern_B = tf.sparse.SparseTensor(indices = [[0,2], [1,1], [1,3], [2,0], [2,4], [2,5], [3,5], \n",
 " [4,5], [5,0], [5,4], [5,5], [6,1], [6,3], [7,2]],\n",
 " values = [1,1,1,1,1,1,1,1,1,1,1,1,1,1],\n",
 " dense_shape = [8,6])\n",
-"sparse_pattern_C = tf.SparseTensor(indices = [[3,0], [4,0]],\n",
+"sparse_pattern_C = tf.sparse.SparseTensor(indices = [[3,0], [4,0]],\n",
 " values = [1,1],\n",
 " dense_shape = [8,6])\n",
 "\n",
@@ -381,7 +381,7 @@
 },
 "outputs": [],
 "source": [
-"st2_plus_5 = tf.SparseTensor(\n",
+"st2_plus_5 = tf.sparse.SparseTensor(\n",
 " st2.indices,\n",
 " st2.values + 5,\n",
 " st2.dense_shape)\n",
@@ -394,7 +394,7 @@
 "id": "GFhO2ZZ53ga1"
 },
 "source": [
-"## Using `tf.SparseTensor` with other TensorFlow APIs\n",
+"## Using `tf.sparse.SparseTensor` with other TensorFlow APIs\n",
 "\n",
 "Sparse tensors work transparently with these TensorFlow APIs:\n",
 "\n",
@@ -449,7 +449,7 @@
 "y = tf.keras.layers.Dense(4)(x)\n",
 "model = tf.keras.Model(x, y)\n",
 "\n",
-"sparse_data = tf.SparseTensor(\n",
+"sparse_data = tf.sparse.SparseTensor(\n",
 " indices = [(0,0),(0,1),(0,2),\n",
 " (4,3),(5,0),(5,1)],\n",
 " values = [1,1,1,1,1,1],\n",
@@ -569,9 +569,9 @@
 "\n",
 "`tf.train.Example` is a standard protobuf encoding for TensorFlow data. When using sparse tensors with `tf.train.Example`, you can:\n",
 "\n",
-"* Read variable-length data into a `tf.SparseTensor` using `tf.io.VarLenFeature`. However, you should consider using `tf.io.RaggedFeature` instead.\n",
+"* Read variable-length data into a `tf.sparse.SparseTensor` using `tf.io.VarLenFeature`. However, you should consider using `tf.io.RaggedFeature` instead.\n",
 "\n",
-"* Read arbitrary sparse data into a `tf.SparseTensor` using `tf.io.SparseFeature`, which uses three separate feature keys to store the `indices`, `values`, and `dense_shape`."
+"* Read arbitrary sparse data into a `tf.sparse.SparseTensor` using `tf.io.SparseFeature`, which uses three separate feature keys to store the `indices`, `values`, and `dense_shape`."
 ]
 },
 {
@@ -597,7 +597,7 @@
 "def f(x,y):\n",
 " return tf.sparse.sparse_dense_matmul(x,y)\n",
 "\n",
-"a = tf.SparseTensor(indices=[[0, 3], [2, 4]],\n",
+"a = tf.sparse.SparseTensor(indices=[[0, 3], [2, 4]],\n",
 " values=[15, 25],\n",
 " dense_shape=[3, 10])\n",
 "\n",
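The sparse-dense product that `tf.sparse.sparse_dense_matmul` computes in this hunk can be sketched directly from the COO components in plain Python: each stored entry `(i, j) -> v` contributes `v * dense[j][:]` to output row `i`. `spmm` and its arguments are illustrative names, not TensorFlow API:

```python
# Pure-Python sketch of multiplying a COO sparse matrix by a dense matrix.
def spmm(indices, values, dense_shape, dense):
    """indices/values/dense_shape: COO sparse matrix; dense: nested lists."""
    n_rows = dense_shape[0]
    n_cols = len(dense[0])
    out = [[0] * n_cols for _ in range(n_rows)]
    for (i, j), v in zip(indices, values):
        for k in range(n_cols):
            out[i][k] += v * dense[j][k]
    return out

# Same components as `a` above, times a 10x1 dense column of ones:
result = spmm([[0, 3], [2, 4]], [15, 25], [3, 10], [[1]] * 10)
# result == [[15], [0], [25]]
```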
@@ -616,11 +616,11 @@
 "source": [
 "## Distinguishing missing values from zero values\n",
 "\n",
-"Most ops on `tf.SparseTensor`s treat missing values and explicit zero values identically. This is by design — a `tf.SparseTensor` is supposed to act just like a dense tensor.\n",
+"Most ops on `tf.sparse.SparseTensor`s treat missing values and explicit zero values identically. This is by design — a `tf.sparse.SparseTensor` is supposed to act just like a dense tensor.\n",
 "\n",
 "However, there are a few cases where it can be useful to distinguish zero values from missing values. In particular, this allows for one way to encode missing/unknown data in your training data. For example, consider a use case where you have a tensor of scores (that can have any floating point value from -Inf to +Inf), with some missing scores. You can encode this tensor using a sparse tensor where the explicit zeros are known zero scores but the implicit zero values actually represent missing data and not zero. \n",
 "\n",
-"Note: This is generally not the intended usage of `tf.SparseTensor`s; and you might want to also consier other techniques for encoding this such as for example using a separate mask tensor that identifies the locations of known/unknown values. However, exercise caution while using this approach, since most sparse operations will treat explicit and implicit zero values identically."
+"Note: This is generally not the intended usage of `tf.sparse.SparseTensor`s; and you might want to also consier other techniques for encoding this such as for example using a separate mask tensor that identifies the locations of known/unknown values. However, exercise caution while using this approach, since most sparse operations will treat explicit and implicit zero values identically."
 ]
 },
 {
@@ -680,8 +680,7 @@
 "metadata": {
 "colab": {
 "collapsed_sections": [],
-"name": "sparse_tensor_guide.ipynb",
-"provenance": [],
+"name": "sparse_tensor.ipynb",
 "toc_visible": true
 },
 "kernelspec": {
