
Commit b0a0a47

alok authored and williamFalcon committed
Rename variables (#124)
- data_batch → batch
- batch_i → batch_idx
- dataloader_i → dataloader_idx
- tng → training
- training_dataloader → train_dataloader
- add_log_row_interval → row_log_interval
- gradient_clip → gradient_clip_val
- prog → progress
- tqdm_dic → tqdm_dict
1 parent 3d16a68 commit b0a0a47
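
Taken together, the renames read as follows in a minimal LightningModule. This is an illustrative sketch assembled from the snippets in the diff below, not code added by the commit; the model, loss, and hyperparameter values are placeholders:

```python
import os
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl


class CoolSystem(pl.LightningModule):
    def __init__(self):
        super(CoolSystem, self).__init__()
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def training_step(self, batch, batch_nb):       # was: training_step(self, data_batch, batch_nb)
        x, y = batch                                 # was: data_batch
        loss = F.cross_entropy(self.forward(x), y)
        return {
            'loss': loss,                            # required
            'progress': {'training_loss': loss},     # was: 'prog' / 'tng_loss'
        }

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)

    @pl.data_loader
    def train_dataloader(self):                      # was: tng_dataloader
        return DataLoader(
            MNIST(os.getcwd(), train=True, download=True,
                  transform=transforms.ToTensor()),
            batch_size=32)
```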

File tree: 17 files changed (+198 / -198 lines)


README.md
Lines changed: 10 additions & 10 deletions

@@ -110,7 +110,7 @@ class CoolSystem(pl.LightningModule):
         return torch.optim.Adam(self.parameters(), lr=0.02)

     @pl.data_loader
-    def tng_dataloader(self):
+    def train_dataloader(self):
         # REQUIRED
         return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)

@@ -177,16 +177,16 @@ You define the blue parts using the LightningModule interface:

 ```python
 # what to do in the training loop
-def training_step(self, data_batch, batch_nb):
+def training_step(self, batch, batch_nb):

 # what to do in the validation loop
-def validation_step(self, data_batch, batch_nb):
+def validation_step(self, batch, batch_nb):

 # how to aggregate validation_step outputs
 def validation_end(self, outputs):

 # and your dataloaders
-def tng_dataloader():
+def train_dataloader():
 def val_dataloader():
 def test_dataloader():
 ```

@@ -195,8 +195,8 @@ def test_dataloader():

 ```python
 # define what happens for training here
-def training_step(self, data_batch, batch_nb):
-    x, y = data_batch
+def training_step(self, batch, batch_nb):
+    x, y = batch

     # define your own forward and loss calculation
     hidden_states = self.encoder(x)

@@ -222,8 +222,8 @@ def training_step(self, data_batch, batch_nb):

 ```python
 # define what happens for validation here
-def validation_step(self, data_batch, batch_nb):
-    x, y = data_batch
+def validation_step(self, batch, batch_nb):
+    x, y = batch

     # or as basic as a CNN classification
     out = self.forward(x)

@@ -248,8 +248,8 @@ def validation_end(self, outputs):

     val_loss_mean /= len(outputs)
     val_acc_mean /= len(outputs)
-    tqdm_dic = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
-    return tqdm_dic
+    tqdm_dict = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
+    return tqdm_dict
 ```

 ## Tensorboard

docs/LightningModule/RequiredTrainerInterface.md
Lines changed: 39 additions & 39 deletions

@@ -10,7 +10,7 @@ Otherwise, to Define a Lightning Module, implement the following methods:
 **Required**:

 - [training_step](RequiredTrainerInterface.md#training_step)
-- [tng_dataloader](RequiredTrainerInterface.md#tng_dataloader)
+- [train_dataloader](RequiredTrainerInterface.md#train_dataloader)
 - [configure_optimizers](RequiredTrainerInterface.md#configure_optimizers)

 **Optional**:

@@ -23,7 +23,7 @@ Otherwise, to Define a Lightning Module, implement the following methods:
 - [test_dataloader](RequiredTrainerInterface.md#test_dataloader)
 - [on_save_checkpoint](RequiredTrainerInterface.md#on_save_checkpoint)
 - [on_load_checkpoint](RequiredTrainerInterface.md#on_load_checkpoint)
-- [update_tng_log_metrics](RequiredTrainerInterface.md#update_tng_log_metrics)
+- [update_training_log_metrics](RequiredTrainerInterface.md#update_training_log_metrics)
 - [add_model_specific_args](RequiredTrainerInterface.md#add_model_specific_args)

 ---

@@ -81,7 +81,7 @@ class CoolModel(pl.LightningModule):
         return [torch.optim.Adam(self.parameters(), lr=0.02)]

     @pl.data_loader
-    def tng_dataloader(self):
+    def train_dataloader(self):
         return DataLoader(MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), batch_size=32)

     @pl.data_loader

@@ -111,7 +111,7 @@ The LightningModule interface is on the right. Each method corresponds to a part
 ### training_step

 ``` {.python}
-def training_step(self, data_batch, batch_nb)
+def training_step(self, batch, batch_nb)
 ```

 In this step you'd normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something specific to your model.

@@ -120,7 +120,7 @@ In this step you'd normally do the forward pass and calculate the loss for a batch.

 | Param | description |
 |---|---|
-| data_batch | The output of your dataloader. A tensor, tuple or list |
+| batch | The output of your dataloader. A tensor, tuple or list |
 | batch_nb | Integer displaying which batch this is |

 **Return**

@@ -130,22 +130,22 @@ Dictionary or OrderedDict
 | key | value | is required |
 |---|---|---|
 | loss | tensor scalar | Y |
-| prog | Dict for progress bar display. Must have only tensors | N |
+| progress | Dict for progress bar display. Must have only tensors | N |


 **Example**

 ``` {.python}
-def training_step(self, data_batch, batch_nb):
-    x, y, z = data_batch
+def training_step(self, batch, batch_nb):
+    x, y, z = batch

     # implement your own
     out = self.forward(x)
     loss = self.loss(out, x)

     output = {
         'loss': loss, # required
-        'prog': {'tng_loss': loss, 'batch_nb': batch_nb} # optional
+        'progress': {'training_loss': loss, 'batch_nb': batch_nb} # optional
     }

     # return a dict

@@ -155,19 +155,19 @@ def training_step(self, data_batch, batch_nb):
 If you define multiple optimizers, this step will also be called with an additional ```optimizer_idx``` param.
 ``` {.python}
 # Multiple optimizers (ie: GANs)
-def training_step(self, data_batch, batch_nb, optimizer_idx):
+def training_step(self, batch, batch_nb, optimizer_idx):
     if optimizer_idx == 0:
         # do training_step with encoder
     if optimizer_idx == 1:
         # do training_step with decoder
 ```

 ---
-### tng_dataloader
+### train_dataloader

 ``` {.python}
 @pl.data_loader
-def tng_dataloader(self)
+def train_dataloader(self)
 ```
 Called by lightning during training loop. Make sure to use the @pl.data_loader decorator, this ensures not calling this function until the data are needed.

@@ -178,7 +178,7 @@ PyTorch DataLoader

 ``` {.python}
 @pl.data_loader
-def tng_dataloader(self):
+def train_dataloader(self):
     transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
     dataset = MNIST(root='/path/to/mnist/', train=True, transform=transform, download=True)
     loader = torch.utils.data.DataLoader(

@@ -240,10 +240,10 @@ the [optimizer_step](https://williamfalcon.github.io/pytorch-lightning/Trainer/h

 ``` {.python}
 # if you have one val dataloader:
-def validation_step(self, data_batch, batch_nb)
+def validation_step(self, batch, batch_nb)

 # if you have multiple val dataloaders:
-def validation_step(self, data_batch, batch_nb, dataloader_idx)
+def validation_step(self, batch, batch_nb, dataloader_idx)
 ```
 **OPTIONAL**
 If you don't need to validate you don't need to implement this method. In this step you'd normally generate examples or calculate anything of interest such as accuracy.

@@ -256,9 +256,9 @@ The dict you return here will be available in the `validation_end` method.

 | Param | description |
 |---|---|
-| data_batch | The output of your dataloader. A tensor, tuple or list |
+| batch | The output of your dataloader. A tensor, tuple or list |
 | batch_nb | Integer displaying which batch this is |
-| dataloader_i | Integer displaying which dataloader this is (only if multiple val datasets used) |
+| dataloader_idx | Integer displaying which dataloader this is (only if multiple val datasets used) |

 **Return**

@@ -270,8 +270,8 @@ The dict you return here will be available in the `validation_end` method.

 ``` {.python}
 # CASE 1: A single validation dataset
-def validation_step(self, data_batch, batch_nb):
-    x, y = data_batch
+def validation_step(self, batch, batch_nb):
+    x, y = batch

     # implement your own
     out = self.forward(x)

@@ -302,7 +302,7 @@ If you pass in multiple validation datasets, validation_step will have an additi

 ```python
 # CASE 2: multiple validation datasets
-def validation_step(self, data_batch, batch_nb, dataset_idx):
+def validation_step(self, batch, batch_nb, dataset_idx):
     # dataset_idx tells you which dataset this is.
 ```

@@ -351,8 +351,8 @@ def validation_end(self, outputs):

     val_loss_mean /= len(outputs)
     val_acc_mean /= len(outputs)
-    tqdm_dic = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
-    return tqdm_dic
+    tqdm_dict = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
+    return tqdm_dict
 ```

 With multiple dataloaders, `outputs` will be a list of lists. The outer list contains

@@ -377,18 +377,18 @@ def validation_end(self, outputs):

     val_loss_mean /= i
     val_acc_mean /= i
-    tqdm_dic = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
-    return tqdm_dic
+    tqdm_dict = {'val_loss': val_loss_mean.item(), 'val_acc': val_acc_mean.item()}
+    return tqdm_dict
 ```

 ### test_step

 ``` {.python}
 # if you have one test dataloader:
-def test_step(self, data_batch, batch_nb)
+def test_step(self, batch, batch_nb)

 # if you have multiple test dataloaders:
-def test_step(self, data_batch, batch_nb, dataloader_idx)
+def test_step(self, batch, batch_nb, dataloader_idx)
 ```
 **OPTIONAL**
 If you don't need to test you don't need to implement this method. In this step you'd normally generate examples or calculate anything of interest such as accuracy.

@@ -403,9 +403,9 @@ This function is used when you execute `trainer.test()`.

 | Param | description |
 |---|---|
-| data_batch | The output of your dataloader. A tensor, tuple or list |
+| batch | The output of your dataloader. A tensor, tuple or list |
 | batch_nb | Integer displaying which batch this is |
-| dataloader_i | Integer displaying which dataloader this is (only if multiple test datasets used) |
+| dataloader_idx | Integer displaying which dataloader this is (only if multiple test datasets used) |

 **Return**

@@ -417,8 +417,8 @@ This function is used when you execute `trainer.test()`.

 ``` {.python}
 # CASE 1: A single test dataset
-def test_step(self, data_batch, batch_nb):
-    x, y = data_batch
+def test_step(self, batch, batch_nb):
+    x, y = batch

     # implement your own
     out = self.forward(x)

@@ -443,7 +443,7 @@ If you pass in multiple test datasets, test_step will have an additional argumen

 ```python
 # CASE 2: multiple test datasets
-def test_step(self, data_batch, batch_nb, dataset_idx):
+def test_step(self, batch, batch_nb, dataset_idx):
     # dataset_idx tells you which dataset this is.
 ```

@@ -490,8 +490,8 @@ def test_end(self, outputs):

     test_loss_mean /= len(outputs)
     test_acc_mean /= len(outputs)
-    tqdm_dic = {'test_loss': test_loss_mean.item(), 'test_acc': test_acc_mean.item()}
-    return tqdm_dic
+    tqdm_dict = {'test_loss': test_loss_mean.item(), 'test_acc': test_acc_mean.item()}
+    return tqdm_dict
 ```

 With multiple dataloaders, `outputs` will be a list of lists. The outer list contains

@@ -516,8 +516,8 @@ def test_end(self, outputs):

     test_loss_mean /= i
     test_acc_mean /= i
-    tqdm_dic = {'test_loss': test_loss_mean.item(), 'test_acc': test_acc_mean.item()}
-    return tqdm_dic
+    tqdm_dict = {'test_loss': test_loss_mean.item(), 'test_acc': test_acc_mean.item()}
+    return tqdm_dict
 ```

 ---

@@ -633,10 +633,10 @@ def test_dataloader(self):
 ```

 ---
-### update_tng_log_metrics
+### update_training_log_metrics

 ``` {.python}
-def update_tng_log_metrics(self, logs)
+def update_training_log_metrics(self, logs)
 ```
 Called by lightning right before it logs metrics for this batch.
 This is a chance to amend or add to the metrics about to be logged.

@@ -647,7 +647,7 @@ Dict
 **Example**

 ``` {.python}
-def update_tng_log_metrics(self, logs):
+def update_training_log_metrics(self, logs):
     # modify or add to logs
     return logs
 ```

@@ -674,7 +674,7 @@ def add_model_specific_args(parent_parser, root_dir):
     parser = HyperOptArgumentParser(strategy=parent_parser.strategy, parents=[parent_parser])

     # param overwrites
-    # parser.set_defaults(gradient_clip=5.0)
+    # parser.set_defaults(gradient_clip_val=5.0)

     # network params
     parser.opt_list('--drop_prob', default=0.2, options=[0.2, 0.5], type=float, tunable=False)

docs/LightningModule/properties.md
Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ self.experiment.add_scalars(...)
 Total training batches seen across all epochs

 ---
-#### gradient_clip
+#### gradient_clip_val
 The current gradient clip value

 ---

docs/Trainer/Logging.md
Lines changed: 1 addition & 1 deletion

@@ -13,7 +13,7 @@ trainer = Trainer(show_progress_bar=True)
 Every k batches lightning will make an entry in the metrics log
 ``` {.python}
 # DEFAULT (ie: save a .csv log file every 10 batches)
-trainer = Trainer(add_log_row_interval=10)
+trainer = Trainer(row_log_interval=10)
 ```

 ---

docs/Trainer/Training Loop.md
Lines changed: 2 additions & 2 deletions

@@ -52,10 +52,10 @@ Specifically, this will [clip the gradient norm computed over all model paramete

 ``` {.python}
 # DEFAULT (ie: don't clip)
-trainer = Trainer(gradient_clip=0)
+trainer = Trainer(gradient_clip_val=0)

 # clip gradients with norm above 0.5
-trainer = Trainer(gradient_clip=0.5)
+trainer = Trainer(gradient_clip_val=0.5)
 ```

 ---
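
Both renamed Trainer flags can of course be passed together. A hedged one-liner for migrating an existing script, using the example values from the docs above rather than the defaults:

```python
from pytorch_lightning import Trainer

# was: Trainer(gradient_clip=0.5, add_log_row_interval=10)
trainer = Trainer(gradient_clip_val=0.5, row_log_interval=10)
```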

docs/Trainer/hooks.md
Lines changed: 2 additions & 2 deletions

@@ -58,12 +58,12 @@ def on_post_performance_check(self):
 ```

 ---
-#### on_tng_metrics
+#### on_training_metrics
 Called in the training loop, right before metrics are logged.
 Although you can log at any time by using self.experiment, you can use
 this callback to modify what will be logged.
 ```python
-def on_tng_metrics(self, metrics):
+def on_training_metrics(self, metrics):
     # do something before validation end
 ```

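
For completeness, a sketch of overriding the renamed hook on a LightningModule; it assumes `metrics` is the dict of values Lightning is about to log for this batch, and the added key is purely illustrative:

```python
def on_training_metrics(self, metrics):   # was: on_tng_metrics
    # amend the metrics dict in place right before it is logged
    metrics['stage'] = 'train'
```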
