Commit 8e9f80a

A few typos were fixed in the documentation.
References the respective GitHub PRs: #10955 #10928 #10892 #11100 #11107 #11123 #11120 #11118 #11111 #11144
PiperOrigin-RevId: 604865440
1 parent e3dbeaf commit 8e9f80a

14 files changed (+44 -44 lines)

docs/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 # Public docs for TensorFlow Models
 
 This directory contains the top-level public documentation for
-[TensorFlow Models](https://github.com/tensorflow/models)
+[TensorFlow Models](https://github.com/tensorflow/models).
 
 This directory is mirrored to https://tensorflow.org/tfmodels, and is mainly
 concerned with documenting the tools provided in the `tensorflow_models` pip

docs/nlp/fine_tune_bert.ipynb

Lines changed: 8 additions & 8 deletions
@@ -884,7 +884,7 @@
 "id": "2oHOql35k3Dd"
 },
 "source": [
-"Note: The pretrained `TransformerEncoder` is also available on [TensorFlow Hub](https://tensorflow.org/hub). Go to the [TF Hub appendix](#hub_bert) for details."
+"Note: The pre-trained `TransformerEncoder` is also available on [TensorFlow Hub](https://tensorflow.org/hub). Go to the [TF Hub appendix](#hub_bert) for details."
 ]
 },
 {
@@ -1148,8 +1148,8 @@
 "\n",
 "First, build a wrapper class to export the model. This wrapper does two things:\n",
 "\n",
-"- First it packages `bert_inputs_processor` and `bert_classifier` together into a single `tf.Module`, so you can export all the functionalities.\n",
-"- Second it defines a `tf.function` that implements the end-to-end execution of the model.\n",
+"- First, it packages `bert_inputs_processor` and `bert_classifier` together into a single `tf.Module`, so you can export all the functionalities.\n",
+"- Second, it defines a `tf.function` that implements the end-to-end execution of the model.\n",
 "\n",
 "Setting the `input_signature` argument of `tf.function` lets you define a fixed signature for the `tf.function`. This can be less surprising than the default automatic retracing behavior."
 ]
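
For reference, the wrapper these cells describe is the standard `tf.Module` + `tf.function(input_signature=...)` export pattern. A minimal sketch of that pattern, assuming illustrative string inputs and stand-ins for the notebook's `bert_inputs_processor` and `bert_classifier` (not the notebook's exact code):

```python
import tensorflow as tf


class BertExportModule(tf.Module):
  """Packages a preprocessor and a classifier into one exportable module."""

  def __init__(self, preprocessor, classifier):
    super().__init__()
    self.preprocessor = preprocessor  # stand-in for bert_inputs_processor
    self.classifier = classifier      # stand-in for bert_classifier

  # A fixed input_signature gives the exported function one stable signature
  # instead of retracing for every new input shape or dtype.
  @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
  def __call__(self, sentences):
    inputs = self.preprocessor(sentences)           # raw strings -> model inputs
    return self.classifier(inputs, training=False)  # end-to-end logits


# Illustrative usage: wrap the two pieces and save them as a SavedModel.
# export_module = BertExportModule(bert_inputs_processor, bert_classifier)
# tf.saved_model.save(export_module, '/tmp/bert_classifier_export')
```
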
@@ -1189,7 +1189,7 @@
 "id": "qnxysGUfIgFQ"
 },
 "source": [
-"Create an instance of this export-model and save it:"
+"Create an instance of this exported model and save it:"
 ]
 },
 {
@@ -1280,7 +1280,7 @@
 "id": "CPsg7dZwfBM2"
 },
 "source": [
-"Congratulations! You've used `tensorflow_models` to build a BERT-classifier, train it, and export for later use."
+"Congratulations! You've used `tensorflow_models` to build a BERT-classifier, train it, and export it for later use."
 ]
 },
 {
@@ -1391,7 +1391,7 @@
 "id": "cjojn8SmLSRI"
 },
 "source": [
-"At this point it would be simple to add a classification head yourself.\n",
+"At this point, it would be simple to add a classification head yourself.\n",
 "\n",
 "The Model Garden `tfm.nlp.models.BertClassifier` class can also build a classifier onto the TF Hub encoder:"
 ]
@@ -1429,7 +1429,7 @@
 "id": "u_IqwXjRV1vd"
 },
 "source": [
-"For concrete examples of this approach, refer to [Solve Glue tasks using BERT](https://www.tensorflow.org/text/tutorials/bert_glue)."
+"For concrete examples of this approach, refer to [Solve Glue tasks using the BERT](https://www.tensorflow.org/text/tutorials/bert_glue)."
 ]
 },
 {
@@ -1494,7 +1494,7 @@
 "id": "ywn5miD_dnuh"
 },
 "source": [
-"The advantage to using `config` objects is that they don't contain any complicated TensorFlow objects, and can be easily serialized to JSON, and rebuilt. Here's the JSON for the above `tfm.optimization.OptimizationConfig`:"
+"The advantage of using `config` objects is that they don't contain any complicated TensorFlow objects, and can be easily serialized to JSON, and rebuilt. Here's the JSON for the above `tfm.optimization.OptimizationConfig`:"
 ]
 },
 {
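
The last hunk above concerns serializing `tfm.optimization.OptimizationConfig` to JSON. A short sketch of that round trip, assuming the config exposes `as_dict()` and accepts nested dicts on construction (typical of the Model Garden `hyperparams` config classes; treat those details as assumptions rather than the notebook's exact code):

```python
import json

import tensorflow_models as tfm

# Field values here are illustrative; the point is that the config holds only
# plain Python data, so it serializes to JSON and back without losing anything.
optimization_config = tfm.optimization.OptimizationConfig(
    optimizer=dict(type='adamw'),
    learning_rate=dict(type='polynomial'))

config_json = json.dumps(optimization_config.as_dict(), indent=2)

# Rebuild an equivalent config later, possibly in a different process.
restored_config = tfm.optimization.OptimizationConfig(json.loads(config_json))
```
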

official/README-TPU.md

Lines changed: 8 additions & 8 deletions
@@ -2,28 +2,28 @@
 
 ## Natural Language Processing
 
-* [bert](nlp/bert): A powerful pre-trained language representation model:
+* [bert](https://arxiv.org/abs/1810.04805): A powerful pre-trained language representation model:
 BERT, which stands for Bidirectional Encoder Representations from
 Transformers.
-[BERT FineTuning with Cloud TPU](https://cloud.google.com/tpu/docs/tutorials/bert-2.x) provides step by step instructions on Cloud TPU training. You can look [Bert MNLI Tensorboard.dev metrics](https://tensorboard.dev/experiment/LijZ1IrERxKALQfr76gndA) for MNLI fine tuning task.
+[BERT FineTuning with Cloud TPU](https://cloud.google.com/ai-platform/training/docs/algorithms/bert-start) provides step by step instructions on Cloud TPU training. You can look [Bert MNLI Tensorboard.dev metrics](https://tensorboard.dev/experiment/LijZ1IrERxKALQfr76gndA) for MNLI fine tuning task.
 * [transformer](nlp/transformer): A transformer model to translate the WMT
 English to German dataset.
 [Training transformer on Cloud TPU](https://cloud.google.com/tpu/docs/tutorials/transformer-2.x) for step by step instructions on Cloud TPU training.
 
 ## Computer Vision
 
-* [efficientnet](vision/image_classification): A family of convolutional
+* [efficientnet](https://github.com/tensorflow/models/blob/master/official/vision/modeling/backbones/efficientnet.py): A family of convolutional
 neural networks that scale by balancing network depth, width, and
 resolution and can be used to classify ImageNet's dataset of 1000 classes.
 See [Tensorboard.dev training metrics](https://tensorboard.dev/experiment/KnaWjrq5TXGfv0NW5m7rpg/#scalars).
-* [mnist](vision/image_classification): A basic model to classify digits
+* [mnist](https://www.tensorflow.org/datasets/catalog/mnist): A basic model to classify digits
 from the MNIST dataset. See [Running MNIST on Cloud TPU](https://cloud.google.com/tpu/docs/tutorials/mnist-2.x) tutorial and [Tensorboard.dev metrics](https://tensorboard.dev/experiment/mIah5lppTASvrHqWrdr6NA).
-* [mask-rcnn](vision/detection): An object detection and instance segmentation model. See [Tensorboard.dev training metrics](https://tensorboard.dev/experiment/LH7k0fMsRwqUAcE09o9kPA).
-* [resnet](vision/image_classification): A deep residual network that can
+* [mask-rcnn](https://www.tensorflow.org/api_docs/python/tfm/vision/configs/maskrcnn/MaskRCNN): An object detection and instance segmentation model. See [Tensorboard.dev training metrics](https://tensorboard.dev/experiment/LH7k0fMsRwqUAcE09o9kPA).
+* [resnet]((https://www.tensorflow.org/api_docs/python/tfm/vision/configs/image_classification/image_classification_imagenet)): A deep residual network that can
 be used to classify ImageNet's dataset of 1000 classes.
 See [Training ResNet on Cloud TPU](https://cloud.google.com/tpu/docs/tutorials/resnet-2.x) tutorial and [Tensorboard.dev metrics](https://tensorboard.dev/experiment/CxlDK8YMRrSpYEGtBRpOhg).
-* [retinanet](vision/detection): A fast and powerful object detector. See [Tensorboard.dev training metrics](https://tensorboard.dev/experiment/b8NRnWU3TqG6Rw0UxueU6Q).
-* [shapemask](vision/detection): An object detection and instance segmentation model using shape priors. See [Tensorboard.dev training metrics](https://tensorboard.dev/experiment/ZbXgVoc6Rf6mBRlPj0JpLA).
+* [retinanet](https://www.tensorflow.org/api_docs/python/tfm/vision/retinanet): A fast and powerful object detector. See [Tensorboard.dev training metrics](https://tensorboard.dev/experiment/b8NRnWU3TqG6Rw0UxueU6Q).
+* [shapemask](https://cloud.google.com/tpu/docs/tutorials/shapemask-2.x): An object detection and instance segmentation model using shape priors. See [Tensorboard.dev training metrics](https://tensorboard.dev/experiment/ZbXgVoc6Rf6mBRlPj0JpLA).
 
 ## Recommendation
 * [dlrm](recommendation/ranking): [Deep Learning Recommendation Model for

official/nlp/MODEL_GARDEN.md

Lines changed: 2 additions & 2 deletions
@@ -26,7 +26,7 @@ on how to train models with this codebase.
 By default, the experiment runs on GPUs. To run on TPUs, one should overwrite
 `runtime.distribution_strategy` and set the tpu address. See [RuntimeConfig](https://github.com/tensorflow/models/blob/master/official/core/config_definitions.py) for details.
 
-In general, the experiments can run with the folloing command by setting the
+In general, the experiments can run with the following command by setting the
 corresponding `${TASK}`, `${TASK_CONFIG}`, `${MODEL_CONFIG}`.
 ```
 EXPERIMENT=???
@@ -72,7 +72,7 @@ Note that
 
 [How to Train Models](https://github.com/tensorflow/models/blob/master/official/nlp/docs/train.md)
 
-[List of Pretrained Models for finetuning](https://github.com/tensorflow/models/blob/master/official/nlp/docs/pretrained_models.md)
+[List of Pre-trained Models for finetuning](https://github.com/tensorflow/models/blob/master/official/nlp/docs/pretrained_models.md)
 
 [How to Publish Models](https://github.com/tensorflow/models/blob/master/official/nlp/docs/tfhub.md)

official/nlp/data/classifier_data_lib.py

Lines changed: 1 addition & 1 deletion
@@ -668,7 +668,7 @@ def __init__(self,
 self._labels = list(range(info.features[self.label_key].num_classes))
 
 def _process_tfds_params_str(self, params_str):
-"""Extracts TFDS parameters from a comma-separated assignements string."""
+"""Extracts TFDS parameters from a comma-separated assignments string."""
 dtype_map = {"int": int, "float": float}
 cast_str_to_bool = lambda s: s.lower() not in ["false", "0"]

official/nlp/data/create_finetuning_data.py

Lines changed: 2 additions & 2 deletions
@@ -165,7 +165,7 @@
 "while ALBERT uses SentencePiece tokenizer.")
 
 flags.DEFINE_string(
-"tfds_params", "", "Comma-separated list of TFDS parameter assigments for "
+"tfds_params", "", "Comma-separated list of TFDS parameter assignments for "
 "generic classfication data import (for more details "
 "see the TfdsProcessor class documentation).")
 
@@ -270,7 +270,7 @@ def generate_classifier_dataset():
 }
 task_name = FLAGS.classification_task_name.lower()
 if task_name not in processors:
-raise ValueError("Task not found: %s" % (task_name))
+raise ValueError("Task not found: %s" % (task_name,))
 
 processor = processors[task_name](process_text_fn=processor_text_fn)
 return classifier_data_lib.generate_tf_record_from_data_file(
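
A side note on the `ValueError` change above: `(task_name)` is just a parenthesized string, while `(task_name,)` is a one-element tuple, the conventional (and safer, should the value ever itself be a tuple) argument form for `%`-formatting.

The `tfds_params` flag above carries the comma-separated assignments string that `TfdsProcessor._process_tfds_params_str` (touched earlier in this commit) parses. Below is a self-contained sketch of the general idea only, not the library's implementation; the keys in the usage comment are hypothetical:

```python
def parse_assignments(params_str):
  """Parses 'key=value,key=value' into a dict with simple type casting."""
  params = {}
  for assignment in filter(None, params_str.split(",")):
    key, _, value = assignment.partition("=")
    key, value = key.strip(), value.strip()
    if value.lower() in ("true", "false"):      # booleans
      params[key] = value.lower() == "true"
      continue
    for cast in (int, float):                   # numbers
      try:
        params[key] = cast(value)
        break
      except ValueError:
        continue
    else:
      params[key] = value                       # anything else stays a string
  return params


# Hypothetical example:
# parse_assignments("dataset=glue/mrpc,is_regression=false,train_split=train")
# -> {'dataset': 'glue/mrpc', 'is_regression': False, 'train_split': 'train'}
```
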

official/nlp/data/create_pretraining_data.py

Lines changed: 3 additions & 3 deletions
@@ -453,7 +453,7 @@ def _contiguous(sorted_grams):
 def _masking_ngrams(grams, max_ngram_size, max_masked_tokens, rng):
 """Create a list of masking {1, ..., n}-grams from a list of one-grams.
 
-This is an extention of 'whole word masking' to mask multiple, contiguous
+This is an extension of 'whole word masking' to mask multiple, contiguous
 words such as (e.g., "the red boat").
 
 Each input gram represents the token indices of a single word,
@@ -509,7 +509,7 @@ def _masking_ngrams(grams, max_ngram_size, max_masked_tokens, rng):
 rng.shuffle(v)
 
 # Create the weighting for n-gram length selection.
-# Stored cummulatively for `random.choices` below.
+# Stored cumulatively for `random.choices` below.
 cummulative_weights = list(
 itertools.accumulate([1./n for n in range(1, max_ngram_size+1)]))
 
@@ -519,7 +519,7 @@ def _masking_ngrams(grams, max_ngram_size, max_masked_tokens, rng):
 # Loop until we have enough masked tokens or there are no more candidate
 # n-grams of any length.
 # Each code path should ensure one or more elements from `ngrams` are removed
-# to guarentee this loop terminates.
+# to guarantee this loop terminates.
 while (sum(masked_tokens) < max_masked_tokens and
 sum(len(s) for s in ngrams.values())):
 # Pick an n-gram size based on our weights.
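
The comments fixed above describe weighting candidate n-gram lengths by 1/n and storing the weights cumulatively for Python's `random.choices`. A standalone illustration of just that selection step, detached from the surrounding masking loop:

```python
import itertools
import random

max_ngram_size = 3
rng = random.Random(12345)

# Weight each n-gram length by 1/n and accumulate, because random.choices
# consumes cumulative weights through its `cum_weights` argument.
cumulative_weights = list(
    itertools.accumulate([1. / n for n in range(1, max_ngram_size + 1)]))
# cumulative_weights == [1.0, 1.5, 1.8333...]

# Pick an n-gram size in 1..max_ngram_size; shorter n-grams are more likely.
ngram_size = rng.choices(
    range(1, max_ngram_size + 1), cum_weights=cumulative_weights)[0]
print(ngram_size)
```
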

official/nlp/data/create_xlnet_pretraining_data.py

Lines changed: 1 addition & 1 deletion
@@ -271,7 +271,7 @@ def _create_a_and_b_segments(
 Args:
 tokens: The 1D input token ids. This represents an individual entry within a
 batch.
-sentence_ids: The 1D input sentence ids. This represents an indivdual entry
+sentence_ids: The 1D input sentence ids. This represents an individual entry
 within a batch. This should be the same length as `tokens`.
 begin_index: The reference beginning index to split data.
 total_length: The target combined length of segments A and B.

official/nlp/data/pretrain_dataloader.py

Lines changed: 2 additions & 2 deletions
@@ -143,7 +143,7 @@ class XLNetPretrainDataConfig(cfg.DataConfig):
 
 Attributes:
 input_path: See base class.
-global_batch_size: See base calss.
+global_batch_size: See base class.
 is_training: See base class.
 seq_length: The length of each sequence.
 max_predictions_per_seq: The number of predictions per sequence.
@@ -259,7 +259,7 @@ def _parse(self, record: Mapping[str, tf.Tensor]):
 input_mask=input_mask[:self._reuse_length])
 
 # Creates permutation mask and target mask for the rest of tokens in
-# current example, which are concatentation of two new segments.
+# current example, which are concatenation of two new segments.
 perm_mask_1, target_mask_1, tokens_1, masked_1 = self._get_factorization(
 inputs[self._reuse_length:], input_mask[self._reuse_length:])

official/nlp/data/squad_lib.py

Lines changed: 3 additions & 3 deletions
@@ -492,7 +492,7 @@ def _improve_answer_span(doc_tokens, input_start, input_end, tokenizer,
 #
 # However, this is not always possible. Consider the following:
 #
-# Question: What country is the top exporter of electornics?
+# Question: What country is the top exporter of electronics?
 # Context: The Japanese electronics industry is the lagest in the world.
 # Answer: Japan
 #
@@ -720,7 +720,7 @@ def postprocess_output(all_examples,
 start_logit=pred.start_logit,
 end_logit=pred.end_logit))
 
-# if we didn't inlude the empty option in the n-best, inlcude it
+# if we didn't include the empty option in the n-best, include it
 if version_2_with_negative and not xlnet_format:
 if "" not in seen_predictions:
 nbest.append(
@@ -815,7 +815,7 @@ def get_final_text(pred_text, orig_text, do_lower_case, verbose=False):
 # What we really want to return is "Steve Smith".
 #
 # Therefore, we have to apply a semi-complicated alignment heruistic between
-# `pred_text` and `orig_text` to get a character-to-charcter alignment. This
+# `pred_text` and `orig_text` to get a character-to-character alignment. This
 # can fail in certain cases in which case we just return `orig_text`.
 
 def _strip_spaces(text):
