
Commit 6d08a08

fixing spelling errors, slight change to # of iterations to generate a better confusion matrix
1 parent b0202ec commit 6d08a08

File tree

2 files changed: 12 additions & 6 deletions


en-wordlist.txt

Lines changed: 6 additions & 0 deletions
@@ -147,6 +147,8 @@ Minifier
 MobileNet
 ModelABC
 Mypy
+NameData
+NamesDataset
 NAS
 NCCL
 NCHW
@@ -359,6 +361,7 @@ enum
 eq
 equalities
 et
+eval
 evaluateInput
 extensibility
 fastai
@@ -513,6 +516,7 @@ resnet
 restride
 rewinded
 rgb
+rnn
 rollout
 rollouts
 romanized
@@ -580,12 +584,14 @@ traceback
 tradeoff
 tradeoffs
 triton
+txt
 uint
 umap
 uncomment
 uncommented
 underflowing
 unfused
+unicode
 unimodal
 unnormalized
 unoptimized

intermediate_source/char_rnn_classification_tutorial.py

Lines changed: 6 additions & 6 deletions
@@ -66,8 +66,8 @@
 There are two key pieces of this that we will flesh out over the course of this tutorial. First is the basic data
 object which a label and some text. In this instance, label = the country of origin and text = the name.
 
-However, our data has some issues that we will need to clean up. First off, we need to convert unicode to plain ASCII to
-limit the RNN input layers. This is accomplished by converting unicode strings to ASCII and allowing a samll set of allowed characters (allowed_characters)
+However, our data has some issues that we will need to clean up. First off, we need to convert Unicode to plain ASCII to
+limit the RNN input layers. This is accomplished by converting Unicode strings to ASCII and allowing a small set of allowed characters (allowed_characters)
 """
 
 import torch
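
For context, the comment rewritten in this hunk describes the tutorial's Unicode-to-ASCII step. A minimal self-contained sketch of that conversion, assuming allowed_characters is the ASCII letters plus a little punctuation (the tutorial defines the exact set elsewhere in the file):

import string
import unicodedata

# Assumed character set; the tutorial's own allowed_characters may differ.
allowed_characters = string.ascii_letters + " .,;'"

def unicodeToAscii(s):
    # NFD-decompose so accents become separate combining marks, then keep
    # only characters in the allowed set (combining marks are excluded).
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if c in allowed_characters
    )

print(unicodeToAscii('Ślusàrski'))  # Slusarski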
@@ -218,7 +218,7 @@ def __getitem__(self, idx):
 print(f"example = {alldata[0]}")
 
 #########################
-#Using the dataset object allows us to easily split the data into train and test sets. Here we create na 80/20
+#Using the dataset object allows us to easily split the data into train and test sets. Here we create a 80/20
 #split but the torch.utils.data has more useful utilities.
 
 train_set, test_set = torch.utils.data.random_split(alldata, [.8, .2])
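
A small runnable sketch of the split the last line above performs; fractional lengths like [.8, .2] are resolved against the dataset length in recent PyTorch versions, and the seeded generator (an optional addition, not part of the commit) makes the split reproducible. A toy dataset stands in for the tutorial's alldata:

import torch
from torch.utils.data import TensorDataset, random_split

# Toy stand-in for the tutorial's alldata; any map-style Dataset works.
data = TensorDataset(torch.arange(100))

# The seeded generator makes the 80/20 split reproducible across runs.
gen = torch.Generator().manual_seed(42)
train_set, test_set = random_split(data, [.8, .2], generator=gen)
print(len(train_set), len(test_set))  # 80 20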
@@ -227,7 +227,7 @@ def __getitem__(self, idx):
 
 #########################
 #Now we have a basic dataset containing 20074 examples where each example is a pairing of label and name. We have also
-#split the datset into training and testing so we can validate the model that we build.
+#split the dataset into training and testing so we can validate the model that we build.
 
 
 ######################################################################
@@ -397,7 +397,7 @@ def label_from_output(self, output):
         label_i = top_i[0].item()
         return self.output_labels[label_i], label_i
 
-    def learn(self, training_data, n_epoch = 1000, n_batch_size = 64, report_every = 50, learning_rate = 0.005, criterion = nn.NLLLoss()):
+    def learn(self, training_data, n_epoch = 250, n_batch_size = 64, report_every = 50, learning_rate = 0.005, criterion = nn.NLLLoss()):
         """
         Learn on a batch of training_data for a specified number of iterations and reporting thresholds
         """
@@ -480,7 +480,7 @@ def evaluate(rnn, testing_data):
     confusion = torch.zeros(len(rnn.output_labels), len(rnn.output_labels))
 
     rnn.eval() #set to eval mode
-    with torch.no_grad(): # do not record the gradiants during eval phase
+    with torch.no_grad(): # do not record the gradients during eval phase
         for i in range(len(testing_data)):
             (label_tensor, text_tensor, label, text) = testing_data[i]
             output = rnn.forward(text_tensor)
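
A condensed sketch of an evaluation loop in the spirit of the function shown above, assuming each test sample unpacks to (label_tensor, text_tensor, label, text) and that label_from_output returns a (label, index) pair as in the earlier hunk:

import torch

def evaluate_sketch(rnn, testing_data):
    # Confusion matrix: rows are true labels, columns are predicted labels.
    confusion = torch.zeros(len(rnn.output_labels), len(rnn.output_labels))
    rnn.eval()              # switch off training-only behavior such as dropout
    with torch.no_grad():   # gradients are not needed during evaluation
        for label_tensor, text_tensor, label, text in testing_data:
            output = rnn(text_tensor)
            guess, guess_i = rnn.label_from_output(output)
            confusion[label_tensor.item()][guess_i] += 1
    # Normalize rows to per-class rates; clamp avoids 0/0 for classes
    # that happen to be absent from the test set.
    return confusion / confusion.sum(dim=1, keepdim=True).clamp(min=1)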
