
Commit df6a17f: Release v2.0.0 (#545)

Authored by ashleve, tesfaldet, and johnnynunez

* Support for logging with Aim (#534)
* Update template to Lightning 2.0 (#548)
* Update pre-commit hooks (#549)
* Refactor utils (#541)
* Add option for pytorch 2.0 model compilation (#550)
* Update `README.md` (#551)

Co-authored-by: Mattie Tesfaldet <mattie@meta.com>
Co-authored-by: Johnny <johnnynuca14@gmail.com>

1 parent adc6afe, commit df6a17f

30 files changed: +287 additions, -244 deletions

.gitignore

Lines changed: 3 additions & 0 deletions
```diff
@@ -149,3 +149,6 @@ configs/local/default.yaml
 /data/
 /logs/
 .env
+
+# Aim logging
+.aim
```

.pre-commit-config.yaml

Lines changed: 12 additions & 12 deletions
```diff
@@ -3,7 +3,7 @@ default_language_version:

 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.3.0
+    rev: v4.4.0
     hooks:
       # list of supported hooks: https://pre-commit.com/hooks.html
       - id: trailing-whitespace
@@ -19,7 +19,7 @@ repos:

   # python code formatting
   - repo: https://github.com/psf/black
-    rev: 22.6.0
+    rev: 23.1.0
     hooks:
       - id: black
         args: [--line-length, "99"]
@@ -33,21 +33,21 @@ repos:

   # python upgrading syntax to newer version
   - repo: https://github.com/asottile/pyupgrade
-    rev: v2.32.1
+    rev: v3.3.1
     hooks:
       - id: pyupgrade
         args: [--py38-plus]

   # python docstring formatting
   - repo: https://github.com/myint/docformatter
-    rev: v1.4
+    rev: v1.5.1
     hooks:
       - id: docformatter
         args: [--in-place, --wrap-summaries=99, --wrap-descriptions=99]

   # python check (PEP8), programming errors and code complexity
   - repo: https://github.com/PyCQA/flake8
-    rev: 4.0.1
+    rev: 6.0.0
     hooks:
       - id: flake8
         args:
@@ -60,28 +60,28 @@ repos:

   # python security linter
   - repo: https://github.com/PyCQA/bandit
-    rev: "1.7.1"
+    rev: "1.7.5"
     hooks:
       - id: bandit
         args: ["-s", "B101"]

   # yaml formatting
   - repo: https://github.com/pre-commit/mirrors-prettier
-    rev: v2.7.1
+    rev: v3.0.0-alpha.6
     hooks:
       - id: prettier
         types: [yaml]
         exclude: "environment.yaml"

   # shell scripts linter
   - repo: https://github.com/shellcheck-py/shellcheck-py
-    rev: v0.8.0.4
+    rev: v0.9.0.2
     hooks:
       - id: shellcheck

   # md formatting
   - repo: https://github.com/executablebooks/mdformat
-    rev: 0.7.14
+    rev: 0.7.16
     hooks:
       - id: mdformat
         args: ["--number"]
@@ -94,7 +94,7 @@ repos:

   # word spelling linter
   - repo: https://github.com/codespell-project/codespell
-    rev: v2.1.0
+    rev: v2.2.4
     hooks:
       - id: codespell
         args:
@@ -103,13 +103,13 @@ repos:

   # jupyter notebook cell output clearing
   - repo: https://github.com/kynan/nbstripout
-    rev: 0.5.0
+    rev: 0.6.1
     hooks:
       - id: nbstripout

   # jupyter notebook linting
   - repo: https://github.com/nbQA-dev/nbQA
-    rev: 1.4.0
+    rev: 1.6.3
     hooks:
       - id: nbqa-black
         args: ["--line-length=99"]
```
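Every change in this file is a `rev` bump, the kind of diff `pre-commit autoupdate` produces. As a rough illustration of auditing such bumps, here is a stdlib-only Python sketch that extracts `(repo, rev)` pairs with a regex; the `hook_revs` helper and the inline snippets are hypothetical stand-ins (a real tool should use a YAML parser):

```python
import re

def hook_revs(config_text: str) -> dict:
    """Extract {repo_url: rev} pairs from a .pre-commit-config.yaml snippet.

    Regex-based sketch only: it assumes each `- repo:` entry is followed
    by its `rev:` line, as in the config above.
    """
    pairs = {}
    repo = None
    for line in config_text.splitlines():
        m = re.match(r"\s*-\s*repo:\s*(\S+)", line)
        if m:
            repo = m.group(1)
            continue
        m = re.match(r"\s*rev:\s*\"?([^\"\s]+)\"?", line)
        if m and repo:
            pairs[repo] = m.group(1)
            repo = None
    return pairs

# Hypothetical before/after snippets mirroring two of the bumps above:
old = """repos:
  - repo: https://github.com/psf/black
    rev: 22.6.0
  - repo: https://github.com/PyCQA/flake8
    rev: 4.0.1
"""
new = """repos:
  - repo: https://github.com/psf/black
    rev: 23.1.0
  - repo: https://github.com/PyCQA/flake8
    rev: 6.0.0
"""
bumped = {r: (hook_revs(old)[r], v) for r, v in hook_revs(new).items()}
print(bumped["https://github.com/psf/black"])  # ('22.6.0', '23.1.0')
```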

README.md

Lines changed: 35 additions & 14 deletions
````diff
@@ -28,8 +28,8 @@ _Suggestions are always welcome!_

 **Why you might want to use it:**

-✅ Speed <br>
-Rapidly iterate over models, datasets, tasks and experiments on different accelerators like multi-GPUs or TPUs.
+✅ Save on boilerplate <br>
+Easily add new models, datasets, tasks, experiments, and train on different accelerators, like multi-GPU, TPU or SLURM clusters.

 ✅ Education <br>
 Thoroughly commented. You can use this repo as a learning resource.
@@ -46,7 +46,10 @@ Lightning and Hydra are still evolving and integrate many libraries, which means
 Template is not really adjusted for building data pipelines that depend on each other. It's more efficient to use it for model prototyping on ready-to-use data.

 ❌ Overfitted to simple use case <br>
-The configuration setup is built with simple lightning training in mind. You might need to put some effort to adjust it for different use cases, e.g. lightning lite.
+The configuration setup is built with simple lightning training in mind. You might need to put some effort to adjust it for different use cases, e.g. lightning fabric.
+
+❌ Might not support your workflow <br>
+For example, you can't resume hydra-based multirun or hyperparameter search.

 > **Note**: _Keep in mind this is unofficial community project._
@@ -319,9 +322,6 @@ python train.py debug=overfit
 # raise exception if there are any numerical anomalies in tensors, like NaN or +/-inf
 python train.py +trainer.detect_anomaly=true

-# log second gradient norm of the model
-python train.py +trainer.track_grad_norm=2
-
 # use only 20% of the data
 python train.py +trainer.limit_train_batches=0.2 \
 +trainer.limit_val_batches=0.2 +trainer.limit_test_batches=0.2
@@ -435,6 +435,12 @@ pre-commit run -a

 > **Note**: Apply pre-commit hooks to do things like auto-formatting code and configs, performing code analysis or removing output from jupyter notebooks. See [# Best Practices](#best-practices) for more.

+Update pre-commit hook versions in `.pre-commit-config.yaml` with:
+
+```bash
+pre-commit autoupdate
+```
+
 </details>
@@ -818,7 +824,7 @@ You can use different optimization frameworks integrated with Hydra, like Optuna

 The `optimization_results.yaml` will be available under `logs/task_name/multirun` folder.

-This approach doesn't support advanced techniques like pruning - for more sophisticated search, you should probably write a dedicated optimization task (without multirun feature).
+This approach doesn't support resuming interrupted search and advanced techniques like pruning - for more sophisticated search and workflows, you should probably write a dedicated optimization task (without multirun feature).

@@ -889,10 +895,13 @@ def on_train_start(self):
 ## Best Practices

 <details>
-<summary><b>Use Miniconda for GPU environments</b></summary>
+<summary><b>Use Miniconda</b></summary>
+
+It's usually unnecessary to install the full anaconda environment; miniconda should be enough (it weighs around 80MB).
+
+A big advantage of conda is that it allows installing packages without requiring certain compilers or libraries to be available on the system (since it installs precompiled binaries), so it often makes it easier to install some dependencies, e.g. cudatoolkit for GPU support.

-It's usually unnecessary to install full anaconda environment, miniconda should be enough.
-It often makes it easier to install some dependencies, like cudatoolkit for GPU support. It also allows you to access your environments globally.
+It also allows you to access your environments globally, which might be more convenient than creating a new local environment for every project.

 Example installation:

@@ -901,6 +910,12 @@ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
 bash Miniconda3-latest-Linux-x86_64.sh
 ```

+Update conda:
+
+```bash
+conda update -n base -c defaults conda
+```
+
 Create new conda environment:

 ```bash
@@ -934,6 +949,12 @@ To reformat all files in the project use command:
 pre-commit run -a
 ```

+To update hook versions in [.pre-commit-config.yaml](.pre-commit-config.yaml) use:
+
+```bash
+pre-commit autoupdate
+```
+
 </details>
@@ -1035,7 +1056,7 @@ The style guide is available [here](https://pytorch-lightning.readthedocs.io/en/
 def training_step_end():
     ...

-def training_epoch_end():
+def on_train_epoch_end():
     ...

 def validation_step():
@@ -1044,7 +1065,7 @@ The style guide is available [here](https://pytorch-lightning.readthedocs.io/en/
 def validation_step_end():
     ...

-def validation_epoch_end():
+def on_validation_epoch_end():
     ...

 def test_step():
@@ -1053,7 +1074,7 @@ The style guide is available [here](https://pytorch-lightning.readthedocs.io/en/
 def test_step_end():
     ...

-def test_epoch_end():
+def on_test_epoch_end():
     ...

 def configure_optimizers():
@@ -1245,7 +1266,7 @@ git clone https://github.com/YourGithubName/your-repo-name
 cd your-repo-name

 # create conda environment and install dependencies
-conda env create -f environment.yaml
+conda env create -f environment.yaml -n myenv

 # activate conda environment
 conda activate myenv
````

configs/callbacks/early_stopping.yaml

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,9 +1,9 @@
-# https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.EarlyStopping.html
+# https://pytorch-lightning.readthedocs.io/en/latest/api/lightning.callbacks.EarlyStopping.html

 # Monitor a metric and stop training when it stops improving.
 # Look at the above link for more detailed information.
 early_stopping:
-  _target_: pytorch_lightning.callbacks.EarlyStopping
+  _target_: lightning.pytorch.callbacks.EarlyStopping
   monitor: ??? # quantity to be monitored, must be specified !!!
   min_delta: 0. # minimum change in the monitored quantity to qualify as an improvement
   patience: 3 # number of checks with no improvement after which training will be stopped
```
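The `_target_` values changing from `pytorch_lightning.*` to `lightning.pytorch.*` is why every callback and logger config is touched: Hydra resolves these dotted strings to classes at instantiation time. A simplified stdlib sketch of that resolution (the `resolve_target` helper is illustrative, not Hydra's actual implementation, and `collections.OrderedDict` stands in for a Lightning class so the example runs without Lightning installed):

```python
import importlib

def resolve_target(target: str):
    """Resolve a Hydra-style dotted `_target_` path to the object it names.

    Simplified version of what `hydra.utils.instantiate` does internally:
    split off the final attribute, import the module, fetch the attribute.
    """
    module_path, _, attr = target.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# Stand-in for e.g. "lightning.pytorch.callbacks.EarlyStopping":
cls = resolve_target("collections.OrderedDict")
print(cls)  # <class 'collections.OrderedDict'>
```

This is also why a stale `_target_` fails only at runtime: the string is just data until Hydra tries to import it.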

configs/callbacks/model_checkpoint.yaml

Lines changed: 2 additions & 2 deletions
```diff
@@ -1,9 +1,9 @@
-# https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.ModelCheckpoint.html
+# https://pytorch-lightning.readthedocs.io/en/latest/api/lightning.callbacks.ModelCheckpoint.html

 # Save the model periodically by monitoring a quantity.
 # Look at the above link for more detailed information.
 model_checkpoint:
-  _target_: pytorch_lightning.callbacks.ModelCheckpoint
+  _target_: lightning.pytorch.callbacks.ModelCheckpoint
   dirpath: null # directory to save the model file
   filename: null # checkpoint filename
   monitor: null # name of the logged metric which determines when model is improving
```
configs/callbacks/model_summary.yaml

Lines changed: 2 additions & 2 deletions

```diff
@@ -1,7 +1,7 @@
-# https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.RichModelSummary.html
+# https://pytorch-lightning.readthedocs.io/en/latest/api/lightning.callbacks.RichModelSummary.html

 # Generates a summary of all layers in a LightningModule with rich text formatting.
 # Look at the above link for more detailed information.
 model_summary:
-  _target_: pytorch_lightning.callbacks.RichModelSummary
+  _target_: lightning.pytorch.callbacks.RichModelSummary
   max_depth: 1 # the maximum depth of layer nesting that the summary will include
```
configs/callbacks/rich_progress_bar.yaml

Lines changed: 2 additions & 2 deletions

```diff
@@ -1,6 +1,6 @@
-# https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.callbacks.RichProgressBar.html
+# https://pytorch-lightning.readthedocs.io/en/latest/api/lightning.callbacks.RichProgressBar.html

 # Create a progress bar with rich text formatting.
 # Look at the above link for more detailed information.
 rich_progress_bar:
-  _target_: pytorch_lightning.callbacks.RichProgressBar
+  _target_: lightning.pytorch.callbacks.RichProgressBar
```

configs/experiment/example.yaml

Lines changed: 2 additions & 0 deletions
```diff
@@ -36,3 +36,5 @@ logger:
   wandb:
     tags: ${tags}
     group: "mnist"
+  aim:
+    experiment: "mnist"
```

configs/logger/aim.yaml

Lines changed: 28 additions & 0 deletions
```diff
@@ -0,0 +1,28 @@
+# https://aimstack.io/
+
+# example usage in lightning module:
+# https://github.com/aimhubio/aim/blob/main/examples/pytorch_lightning_track.py
+
+# open the Aim UI with the following command (run in the folder containing the `.aim` folder):
+# `aim up`
+
+aim:
+  _target_: aim.pytorch_lightning.AimLogger
+  repo: ${paths.root_dir} # .aim folder will be created here
+  # repo: "aim://ip_address:port" # can instead provide IP address pointing to Aim remote tracking server which manages the repo, see https://aimstack.readthedocs.io/en/latest/using/remote_tracking.html#
+
+  # aim allows to group runs under experiment name
+  experiment: null # any string, set to "default" if not specified
+
+  train_metric_prefix: "train/"
+  val_metric_prefix: "val/"
+  test_metric_prefix: "test/"
+
+  # sets the tracking interval in seconds for system usage metrics (CPU, GPU, memory, etc.)
+  system_tracking_interval: 10 # set to null to disable system metrics tracking
+
+  # enable/disable logging of system params such as installed packages, git info, env vars, etc.
+  log_system_params: true
+
+  # enable/disable tracking console logs (default value is true)
+  capture_terminal_logs: false # set to false to avoid infinite console log loop issue https://github.com/aimhubio/aim/issues/2550
```
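The three `*_metric_prefix` options match the `train/`, `val/`, `test/` metric-naming convention this template uses, letting the logger group metrics by subset. A rough pure-Python sketch of that idea (the `split_metric_context` helper is an illustration of prefix-based grouping, not Aim's actual implementation):

```python
# Prefixes mirroring the train/val/test_metric_prefix options above.
PREFIXES = {"train/": "train", "val/": "val", "test/": "test"}

def split_metric_context(name: str):
    """Split a logged metric name like 'val/acc' into (context, bare_name).

    Sketch of how prefix options let a logger group metrics by subset;
    unprefixed metrics get no subset context.
    """
    for prefix, context in PREFIXES.items():
        if name.startswith(prefix):
            return context, name[len(prefix):]
    return None, name

print(split_metric_context("train/loss"))  # ('train', 'loss')
print(split_metric_context("val/acc_best"))  # ('val', 'acc_best')
```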

configs/logger/comet.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,7 +1,7 @@
 # https://www.comet.ml

 comet:
-  _target_: pytorch_lightning.loggers.comet.CometLogger
+  _target_: lightning.pytorch.loggers.comet.CometLogger
   api_key: ${oc.env:COMET_API_TOKEN} # api key is loaded from environment variable
   save_dir: "${paths.output_dir}"
   project_name: "lightning-hydra-template"
```
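The `${oc.env:COMET_API_TOKEN}` entry uses OmegaConf's `oc.env` resolver, so the secret lives in the environment rather than in the config file. A stdlib-only sketch of the same idea (the `resolve_env_refs` helper and its regex are a simplified stand-in for OmegaConf's actual resolver, and `dummy-token` is set here only for demonstration):

```python
import os
import re

def resolve_env_refs(value: str) -> str:
    """Expand ${oc.env:VAR} references from the environment.

    Simplified stand-in for OmegaConf's `oc.env` resolver: raises
    KeyError if the referenced variable is not set.
    """
    def repl(match: re.Match) -> str:
        var = match.group(1)
        if var not in os.environ:
            raise KeyError(f"environment variable {var!r} is not set")
        return os.environ[var]
    return re.sub(r"\$\{oc\.env:([A-Za-z_][A-Za-z0-9_]*)\}", repl, value)

os.environ["COMET_API_TOKEN"] = "dummy-token"  # demo value, never hardcode real keys
print(resolve_env_refs("api_key: ${oc.env:COMET_API_TOKEN}"))
# api_key: dummy-token
```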
