
Commit 365ea20

Merge pull request #3817 from lamberta/fix-notebook

Remove code-formatted links that don't render correctly.

2 parents: 4e71c06 + 124c5d7
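The problem being fixed: in Markdown, wrapping a link in backticks turns it into a literal code span, so `[text](url)` renders as plain text instead of a hyperlink. A minimal, stdlib-only sketch of a check that flags such code-formatted links (the regex and helper name are illustrative, not part of any actual repo tooling):

```python
import re

# A Markdown link wrapped in backticks becomes a code span and never
# renders as a hyperlink. This pattern captures the inner [text](url).
CODE_LINK = re.compile(r'`(\[[^\]]+\]\([^)]+\))`')

def find_code_formatted_links(markdown: str) -> list:
    """Return the Markdown links that are wrapped in backticks."""
    return CODE_LINK.findall(markdown)

bad = ("Use the `[tf.keras.utils.get_file]"
       "(https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file)`"
       " function.")
good = ("Use the [tf.keras.utils.get_file]"
        "(https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file)"
        " function.")

print(find_code_formatted_links(bad))   # one offending link found
print(find_code_formatted_links(good))  # -> []
```

Running a check like this over the notebook's markdown cells surfaces exactly the eight lines this commit touches.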

File tree: 1 file changed (+8 −8 lines)

samples/core/get_started/eager.ipynb

Lines changed: 8 additions & 8 deletions
@@ -219,7 +219,7 @@
 "\n",
 "### Download the dataset\n",
 "\n",
-"Download the training dataset file using the `[tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file)` function. This returns the file path of the downloaded file."
+"Download the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file."
 ]
 },
 {
@@ -347,7 +347,7 @@
 "\n",
 "TensorFlow's [Dataset API](https://www.tensorflow.org/programmers_guide/datasets) handles many common cases for feeding data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.\n",
 "\n",
-"This program uses `[tf.data.TextLineDataset](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset)` to load a CSV-formatted text file and is parsed with our `parse_csv` function. A `[tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset)` represents an input pipeline as a collection of elements and a series of transformations that act on those elements. Transformation methods are chained together or called sequentially—just make sure to keep a reference to the returned `Dataset` object.\n",
+"This program uses [tf.data.TextLineDataset](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) to load a CSV-formatted text file and is parsed with our `parse_csv` function. A [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) represents an input pipeline as a collection of elements and a series of transformations that act on those elements. Transformation methods are chained together or called sequentially—just make sure to keep a reference to the returned `Dataset` object.\n",
 "\n",
 "Training works best if the examples are in random order. Use `tf.data.Dataset.shuffle` to randomize entries, setting `buffer_size` to a value larger than the number of examples (120 in this case). To train the model faster, the dataset's *[batch size](https://developers.google.com/machine-learning/glossary/#batch_size)* is set to `32` examples to train at once."
 ]
@@ -418,9 +418,9 @@
 "source": [
 "### Create a model using Keras\n",
 "\n",
-"The TensorFlow `[tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras)` API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together. See the [Keras documentation](https://keras.io/) for details.\n",
+"The TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together. See the [Keras documentation](https://keras.io/) for details.\n",
 "\n",
-"The `[tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential)` model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two `[Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)` layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the amount of features from the dataset, and is required."
+"The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the amount of features from the dataset, and is required."
 ]
 },
 {
@@ -482,7 +482,7 @@
 "\n",
 "Both training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossary#loss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.\n",
 "\n",
-"Our model will calculate its loss using the `[tf.losses.sparse_softmax_cross_entropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy)` function which takes the model's prediction and the desired label. The returned loss value is progressively larger as the prediction gets worse."
+"Our model will calculate its loss using the [tf.losses.sparse_softmax_cross_entropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's prediction and the desired label. The returned loss value is progressively larger as the prediction gets worse."
 ]
 },
 {
@@ -518,7 +518,7 @@
 },
 "cell_type": "markdown",
 "source": [
-"The `grad` function uses the `loss` function and the `[tfe.GradientTape](https://www.tensorflow.org/api_docs/python/tf/contrib/eager/GradientTape)` to record operations that compute the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)."
+"The `grad` function uses the `loss` function and the [tfe.GradientTape](https://www.tensorflow.org/api_docs/python/tf/contrib/eager/GradientTape) to record operations that compute the *[gradients](https://developers.google.com/machine-learning/crash-course/glossary#gradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager)."
 ]
 },
 {
@@ -539,7 +539,7 @@
 " </figcaption>\n",
 "</figure>\n",
 "\n",
-"TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the `[tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer)` that implements the *[standard gradient descent](https://developers.google.com/machine-learning/crash-course/glossary#gradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results."
+"TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[standard gradient descent](https://developers.google.com/machine-learning/crash-course/glossary#gradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results."
 ]
 },
 {
@@ -864,4 +864,4 @@
 ]
 }
 ]
-}
+}
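Each hunk above makes the same mechanical change: drop the backticks around a Markdown link so it renders as a hyperlink. A minimal sketch of applying that transformation automatically (the regex and helper name are illustrative, not tooling from this repository):

```python
import re

# Captures the inner [text](url) of a backtick-wrapped Markdown link.
CODE_LINK = re.compile(r'`(\[[^\]]+\]\([^)]+\))`')

def unwrap_code_links(markdown: str) -> str:
    """Replace each `[text](url)` code span with the bare [text](url) link."""
    return CODE_LINK.sub(r'\1', markdown)

cell = ("This program uses `[tf.data.TextLineDataset]"
        "(https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset)`"
        " to load a CSV-formatted text file.")
print(unwrap_code_links(cell))
```

Running this over the notebook's eight offending markdown lines reproduces the + side of every hunk in this diff; links that are already bare pass through unchanged.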