
Commit 6d60a98

Fix broken links
PiperOrigin-RevId: 446057312
1 parent 21b2a4a commit 6d60a98

2 files changed: +7 −18 lines

site/en/guide/dtensor_overview.ipynb

Lines changed: 6 additions & 17 deletions
@@ -13,6 +13,7 @@
 "cell_type": "code",
 "execution_count": null,
 "metadata": {
+"cellView": "form",
 "id": "tuOe1ymfHZPu"
 },
 "outputs": [],
@@ -67,7 +68,6 @@
 "id": "MGZuakHVlVQf"
 },
 "source": [
-"\n",
 "## Overview\n",
 "\n",
 "This colab introduces DTensor, an extension to TensorFlow for synchronous distributed computing.\n",
@@ -76,7 +76,7 @@
 "\n",
 "By decoupling the application from sharding directives, DTensor enables running the same application on a single device, multiple devices, or even multiple clients, while preserving its global semantics. \n",
 "\n",
-"This guide introduces DTensor concepts for distributed computing, and how DTensor integrates with TensorFlow. To see a demo of using DTensor in model training, see [Distributed training with DTensor](https://www.tensorflow.org/tutorials/distribute/dtensor_ml_tutorial.ipynb) tutorial."
+"This guide introduces DTensor concepts for distributed computing, and how DTensor integrates with TensorFlow. To see a demo of using DTensor in model training, see [Distributed training with DTensor](https://www.tensorflow.org/tutorials/distribute/dtensor_ml_tutorial) tutorial."
 ]
 },
 {
@@ -157,7 +157,6 @@
 "id": "JjiHaH0ql9yo"
 },
 "source": [
-"\n",
 "### Mesh\n",
 "\n",
 "`Mesh` represents a logical Cartisian topology of a set of devices. Each dimension of the Cartisian grid is called a **Mesh dimension**, and referred to with a name. Names of mesh dimension within the same `Mesh` must be unique.\n",
@@ -173,7 +172,6 @@
 "id": "_J6cOieEbaUw"
 },
 "source": [
-"\n",
 "In a 1 dimensional `Mesh`, all devices form a list in a single mesh dimension. The following example uses `dtensor.create_mesh` to create a mesh from 6 CPU devices along a mesh dimension `'x'` with a size of 6 devices:\n",
 "\n",
 "<img src=\"https://www.tensorflow.org/guide/images/dtensor_mesh_1d.png\" alt=\"A 1 dimensional mesh with 6 CPUs\" class=\"no-filter\">\n"
@@ -250,7 +248,6 @@
 "id": "fqzCNlWAbm-c"
 },
 "source": [
-"\n",
 "On a 1-dimensional mesh such as `[(\"x\", 6)]` (`mesh_1d` in the previous section), `Layout([\"unsharded\"], mesh_1d)` is a layout for a rank-1 tensor replicated on 6 devices.\n",
 "\n",
 "<img src=\"https://www.tensorflow.org/guide/images/dtensor_layout_rank1.png\" alt=\"Layout for a rank-1 tensor\" class=\"no-filter\">"
@@ -308,8 +305,7 @@
 "During `Mesh` creation, each client provides its *local device list* together with the expected *global device list*. DTensor validates that both lists are consistent. Please refer to the API documentation for `dtensor.create_mesh` and `dtensor.create_distributed_mesh`\n",
 " for more information on multi-client mesh creation and the *global device list*.\n",
 "\n",
-"Single-client can be thought of as a special case of multi-client, with 1 client. In a single-client application, the *global device list* is identical to the *local device list*.\n",
-"\n"
+"Single-client can be thought of as a special case of multi-client, with 1 client. In a single-client application, the *global device list* is identical to the *local device list*.\n"
 ]
 },
 {
@@ -454,8 +450,7 @@
 "source": [
 "The inverse operation of `dtensor.unpack` is `dtensor.pack`. Component tensors can be packed back into a DTensor.\n",
 "\n",
-"The components must have the same rank and dtype, which will be the rank and dtype of the returned DTensor. However there is no strict requirement on the device placement of component tensors as inputs of `dtensor.unpack`: the function will automatically copy the component tensors to their respective corresponding devices. \n",
-"\n"
+"The components must have the same rank and dtype, which will be the rank and dtype of the returned DTensor. However there is no strict requirement on the device placement of component tensors as inputs of `dtensor.unpack`: the function will automatically copy the component tensors to their respective corresponding devices. \n"
 ]
 },
 {
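The pack/unpack hunk above notes that `dtensor.pack` is the inverse of `dtensor.unpack`. A round-trip sketch, continuing from the hypothetical `d_tensor` and `replicated_layout` above:

# Unpack the DTensor into its per-device component tensors (6 of them here).
components = dtensor.unpack(d_tensor)

# Pack the components back into a DTensor. Their current device placement
# does not matter: pack copies each component to the device required by the
# target layout, as the guide text above describes.
repacked = dtensor.pack(components, replicated_layout)
print(dtensor.fetch_layout(repacked).sharding_specs)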
@@ -601,7 +596,6 @@
 "id": "T7FtZ9kQRZgE"
 },
 "source": [
-"\n",
 "You can inspect the component tensors of the created DTensor and verify they are indeed sharded according to your scheme. It may be helpful to illustrate the situation with a chart:\n",
 "\n",
 " <img src=\"https://www.tensorflow.org/guide/images/dtensor_hybrid_mesh.png\" alt=\"A 3x2 hybrid mesh with 6 CPUs\"\n",
@@ -712,8 +706,7 @@
 "print('Sharding spec:', dtensor.fetch_layout(c).sharding_specs)\n",
 "print(\"components:\")\n",
 "for component_tensor in dtensor.unpack(c):\n",
-" print(component_tensor.device, component_tensor.numpy())\n",
-"\n"
+" print(component_tensor.device, component_tensor.numpy())\n"
 ]
 },
 {
@@ -1039,23 +1032,19 @@
 "source": [
 "## What's next?\n",
 "\n",
-"In this colab, you learned about DTensor, an extension to TensorFlow for distributed computing. To try out these concepts in a tutorial, see [Distributed training with DTensor](https://www.tensorflow.org/tutorials/distribute/dtensor_ml_tutorial.ipynb)."
+"In this colab, you learned about DTensor, an extension to TensorFlow for distributed computing. To try out these concepts in a tutorial, see [Distributed training with DTensor](https://www.tensorflow.org/tutorials/distribute/dtensor_ml_tutorial)."
 ]
 },
 {
 ]
 }
 ],
 "metadata": {
 "colab": {
 "collapsed_sections": [],
 "name": "dtensor_overview.ipynb",
-"provenance": [],
 "toc_visible": true
 },
 "kernelspec": {
 "display_name": "Python 3",
 "name": "python3"
-},
-"language_info": {
-"name": "python"
 }
 },
 "nbformat": 4,

site/en/tutorials/distribute/dtensor_ml_tutorial.ipynb

Lines changed: 1 addition & 1 deletion
@@ -1063,7 +1063,7 @@
 "In a real-world machine learning application, evaluation and cross-validation should be applied to avoid producing an over-fitted model. The techniques introduced in this tutorial can also be applied to introduce parallelism to evaluation.\n",
 "\n",
 "Composing a model with `tf.Module` from scratch is a lot of work, and reusing existing building blocks such as layers and helper functions can drastically speed up model development.\n",
-"As of TensorFlow 2.9, all Keras Layers under `tf.keras.layers` accepts DTensor layouts as their arguments, and can be used to build DTensor models. You can even directly reuse a Keras model with DTensor without modifying the model implementation. Refer to the [DTensor Keras Integration Tutorial](link) (TODO: add link) for information on using DTensor Keras. "
+"As of TensorFlow 2.9, all Keras Layers under `tf.keras.layers` accepts DTensor layouts as their arguments, and can be used to build DTensor models. You can even directly reuse a Keras model with DTensor without modifying the model implementation. Refer to the [DTensor Keras Integration Tutorial](https://www.tensorflow.org/tutorials/distribute/dtensor_keras_tutorial) for information on using DTensor Keras. "
 ]
 }
 ],
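The line fixed in this hunk says that, as of TensorFlow 2.9, Keras layers accept DTensor layouts as arguments. As an illustration only: the sketch below assumes the experimental `kernel_layout`/`bias_layout` arguments from that era, which is an assumption rather than something this commit verifies; the linked dtensor_keras_tutorial is the authoritative reference for the actual API.

import tensorflow as tf
from tensorflow.experimental import dtensor

# Assumes 6 virtual CPU devices are configured, as in the earlier sketches.
mesh = dtensor.create_mesh([('batch', 6)],
                           devices=[f'CPU:{i}' for i in range(6)])

# Assumption: in the TF 2.9 DTensor integration, Dense exposes experimental
# kernel_layout/bias_layout arguments; check the dtensor_keras_tutorial for
# the exact, current API.
layer = tf.keras.layers.Dense(
    64,
    activation='relu',
    kernel_layout=dtensor.Layout([dtensor.UNSHARDED, dtensor.UNSHARDED], mesh),
    bias_layout=dtensor.Layout([dtensor.UNSHARDED], mesh),
)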
