
Commit cf80305

Minor changes to improve terminology consistency (DTensor colab notebooks)
PiperOrigin-RevId: 446005959
1 parent dc63f4e commit cf80305

File tree

2 files changed (+8, -39 lines)

site/en/tutorials/distribute/dtensor_keras_tutorial.ipynb

Lines changed: 2 additions & 16 deletions
@@ -13,6 +13,7 @@
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
+    "cellView": "form",
     "id": "tuOe1ymfHZPu"
   },
   "outputs": [],
@@ -36,7 +37,7 @@
    "id": "MT-LkFOl2axM"
   },
   "source": [
-    "# DTensor Integration with Keras"
+    "# Using DTensors with Keras"
   ]
  },
  {
@@ -739,32 +740,17 @@
    "\n",
    "print(model.layers[2].kernel.layout)"
   ]
- },
- {
-  "cell_type": "code",
-  "execution_count": null,
-  "metadata": {
-   "id": "00dPVoSlRLFA"
-  },
-  "outputs": [],
-  "source": [
-   ""
-  ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "dtensor_keras_tutorial.ipynb",
-  "provenance": [],
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
- },
- "language_info": {
-  "name": "python"
  }
 },
 "nbformat": 4,

site/en/tutorials/distribute/dtensor_ml_tutorial.ipynb

Lines changed: 6 additions & 23 deletions
@@ -13,6 +13,7 @@
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
+    "cellView": "form",
     "id": "tuOe1ymfHZPu"
   },
   "outputs": [],
@@ -36,7 +37,7 @@
    "id": "MfBg1C5NB3X0"
   },
   "source": [
-    "# DTensor Maching Learning Tutorial\n"
+    "# Distributed Training with DTensors\n"
   ]
  },
  {
@@ -75,7 +76,7 @@
   " \n",
   " - Data Parallel training, where the training samples are sharded (partitioned) to devices.\n",
   " - Model Parallel training, where the model variables are sharded to devices. \n",
-  " - Spatial Parallel training, where the features of input data are sharded to devices.\n",
+  " - Spatial Parallel training, where the features of input data are sharded to devices. (Also known as [Spatial Partitioning](https://cloud.google.com/blog/products/ai-machine-learning/train-ml-models-on-large-images-and-3d-volumes-with-spatial-partitioning-on-cloud-tpus))\n",
   "\n",
   "The training portion of this tutorial is inspired [A Kaggle guide on Sentiment Analysis](https://www.kaggle.com/code/anasofiauzsoy/yelp-review-sentiment-analysis-tensorflow-tfds/notebook) notebook. To learn about the complete training and evaluation workflow (without DTensor), refer to that notebook. \n",
   "\n",
@@ -237,8 +238,7 @@
   " 'y': dataset_y,\n",
   "})\n",
   "\n",
-  "dataset.take(1).get_single_element()\n",
-  "\n"
+  "dataset.take(1).get_single_element()\n"
  ]
 },
 {
@@ -297,7 +297,6 @@
   "id": "PMCt-Gj3b3Jy"
  },
  "source": [
-  "\n",
   "### Dense Layer\n",
   "\n",
   "The following custom Dense layer defines 2 layer variables: $W_{ij}$ is the variable for weights, and $b_i$ is the variable for the biases.\n",
@@ -809,8 +808,7 @@
   "- The 2 devices within a single model replica receive replicated training data.\n",
   "\n",
   "\n",
-  "<img src=\"https://www.tensorflow.org/tutorials/distribute/images/dtensor_model_para.png\" alt=\"Model parallel mesh\" class=\"no-filter\">\n",
-  "\n"
+  "<img src=\"https://www.tensorflow.org/tutorials/distribute/images/dtensor_model_para.png\" alt=\"Model parallel mesh\" class=\"no-filter\">\n"
  ]
 },
 {
@@ -905,7 +903,7 @@
   "id": "u-bK6IZ9GCS9"
  },
  "source": [
-  "When training data of very high dimensionality (e.g. a very large image or a video), it may be desirable to shard along the feature dimension. This is called Spatial Parallel training.\n",
+  "When training data of very high dimensionality (e.g. a very large image or a video), it may be desirable to shard along the feature dimension. This is called [Spatial Partitioning](https://cloud.google.com/blog/products/ai-machine-learning/train-ml-models-on-large-images-and-3d-volumes-with-spatial-partitioning-on-cloud-tpus), which was first introduced into TensorFlow for training models with large 3-d input samples.\n",
   "\n",
   "<img src=\"https://www.tensorflow.org/tutorials/distribute/images/dtensor_spatial_para.png\" alt=\"Spatial parallel mesh\" class=\"no-filter\">\n",
   "\n",
@@ -1067,32 +1065,17 @@
   "Composing a model with `tf.Module` from scratch is a lot of work, and reusing existing building blocks such as layers and helper functions can drastically speed up model development.\n",
   "As of TensorFlow 2.9, all Keras Layers under `tf.keras.layers` accepts DTensor layouts as their arguments, and can be used to build DTensor models. You can even directly reuse a Keras model with DTensor without modifying the model implementation. Refer to the [DTensor Keras Integration Tutorial](link) (TODO: add link) for information on using DTensor Keras. "
  ]
- },
- {
-  "cell_type": "code",
-  "execution_count": null,
-  "metadata": {
-   "id": "A-YWPfJyHPcX"
-  },
-  "outputs": [],
-  "source": [
-   ""
-  ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "dtensor_ml_tutorial.ipynb",
-  "provenance": [],
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
- },
- "language_info": {
-  "name": "python"
  }
 },
 "nbformat": 4,

0 commit comments
