
Commit e8bc932

Igor Shilov authored and facebook-github-bot committed
Clean-up unnecessary warnings (including update to PyTorch 2.0) (#581)
Summary: This PR is a collection of smaller fixes that will save us some deprecation issues in the future.

## 1. Updating to PyTorch 2.0

**Key files: grad_sample/functorch.py, requirements.txt**

`functorch` has been part of core PyTorch since 1.13. PyTorch is now going a step further and changing the API while deprecating the old one. There's a [guide](https://pytorch.org/docs/master/func.migrating.html) on how to migrate. TL;DR: `make_functional` will no longer be part of the API, with `torch.func.functional_call()` as its (non-drop-in) replacement.

The key difference for us is that `make_functional()` creates a fresh copy of the module, while `functional_call()` uses the existing module. As a matter of fact, we need the fresh copy (otherwise all the hooks start firing and you enter nested madness), so I've copy-pasted a [gist](https://gist.github.com/zou3519/7769506acc899d83ef1464e28f22e6cf) from the official guide on how to get a full replacement for `make_functional`.

## 2. New mechanism for gradient accumulation detection

**Key files: privacy_engine.py, grad_sample_module.py**

As [reported](https://discuss.pytorch.org/t/gan-raises-userwarning-using-a-non-full-backward-hook-when-the-forward-contains-multiple/175638/2) on the forum, clients still get the "non-full backward hook" warning even when using `grad_sample_mode="ew"`. Naturally, the `functorch` and `hooks` modes rely on backward hooks and can't be migrated to full hooks because of [reasons](#328 (comment)). However, `ew` doesn't rely on hooks, and it was unclear why the message should appear.

The reason is simple: if the client is using Poisson sampling, we add an extra check to prohibit gradient accumulation (two Poisson batches combined is not a Poisson batch), and we do that by means of backward hooks.

~~In this case the backward hook serves a simple purpose and there shouldn't be any problems with migrating to the new method; however, that involved changing the checking method, because `register_backward_hook` is called *after* hooks on submodules, while `register_full_backward_hook` is called before.~~

The strikethrough solution didn't work, because hook execution order is weird for complex graphs, e.g. for GANs. For example, if your forward call looks like this:

```
Discriminator(Generator(x))
```

then the top-level module hook will precede the submodule hooks for `Generator`, but not for `Discriminator`.

As such, I've realised that gradient accumulation is not even supported in `ExpandedWeights`, so we don't have to worry about that. The other two modes are both hooks-based, so we can just check for accumulation in the existing backward hook, with no need for an extra hook. Deleted some code, profit.

## 3. Refactoring `wrap_collate_with_empty` to please pickle

Here are two facts I didn't know before:

1. You can't pickle a nested function, e.g. you can't do the following:
```python
def foo():
    def bar():
        <...>
    return bar

pickle.dump(foo(), ...)
```
2. Whether or not `multiprocessing` uses pickle is Python- and platform-dependent.

This affects our tests when we test `DataLoader` with multiple workers. As such, our data loader tests:
* Pass on CircleCI with python3.9
* Fail on my local machine with python3.9
* Pass on my local machine with python3.7

I'm not sure how common the issue is, but it's safer to just refactor `wrap_collate_with_empty` to avoid nested functions.

## 4. Fix benchmark tests

We don't really run `benchmarks/tests` on a regular basis, and some of them have been broken since we upgraded to PyTorch 1.13 (`API_CUTOFF_VERSION` doesn't exist anymore).

## 5. Fix flake8 config

The flake8 config no [longer supports](https://flake8.pycqa.org/en/latest/user/configuration.html) inline comments; fix is due.

Pull Request resolved: #581

Reviewed By: alexandresablayrolles

Differential Revision: D44749760

Pulled By: ffuuugor

fbshipit-source-id: cf225f4134c049da4ee2eef53e1af3ef54d090bf
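For readers migrating their own code, the `make_functional` replacement described in section 1 can be sketched roughly as follows. This is a minimal illustration of the idea behind the linked gist, not the exact Opacus implementation; in particular, the `copy.deepcopy` here stands in for whatever fresh-copy mechanism the gist uses, and requires `torch >= 2.0` for `torch.func.functional_call`:

```python
import copy

import torch
import torch.nn as nn


def make_functional(mod: nn.Module):
    # Work on a fresh copy so that calling the functional version does not
    # trigger any hooks registered on the original module.
    stateless_mod = copy.deepcopy(mod)
    param_names = tuple(n for n, _ in stateless_mod.named_parameters())

    def fmodel(new_param_values, *args, **kwargs):
        # Re-attach the (possibly transformed) parameters by name and run
        # the module statelessly via torch.func.functional_call.
        new_params = dict(zip(param_names, new_param_values))
        return torch.func.functional_call(stateless_mod, new_params, args, kwargs)

    params = tuple(stateless_mod.parameters())
    return fmodel, params


model = nn.Linear(3, 2)
fmodel, params = make_functional(model)
out = fmodel(params, torch.randn(4, 3))
print(out.shape)  # torch.Size([4, 2])
```

Because the copy's parameters start with the same values, `fmodel(params, x)` matches `model(x)` until you substitute transformed parameters.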
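To illustrate the Poisson-sampling constraint from section 2, here is a hypothetical stand-alone sketch (generic names, not Opacus code) of how a backward hook can reject a second backward pass before the optimizer step:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
model._seen_backward = False  # illustrative flag, not a real Opacus attribute


def forbid_accumulation(module, grad_input, grad_output):
    # Two Poisson-sampled batches combined are no longer a valid Poisson
    # sample, so a second backward() before step() must be rejected.
    if module._seen_backward:
        raise ValueError("Poisson sampling is not compatible with grad accumulation")
    module._seen_backward = True


model.register_full_backward_hook(forbid_accumulation)

x = torch.randn(8, 4)
model(x).sum().backward()  # first backward: allowed
try:
    model(x).sum().backward()  # second backward without a step: rejected
    rejected = False
except (ValueError, RuntimeError):  # hook errors may surface wrapped
    rejected = True
print(rejected)
```

After a real `optimizer.step()` one would reset the flag, which is the moment accumulation becomes legal again.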
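The pickling limitation from section 3 is easy to reproduce. This is an illustrative sketch (generic names, not the actual `wrap_collate_with_empty` code) showing why a closure fails to pickle and why a `functools.partial` over a module-level function works when the script is run directly:

```python
import pickle
from functools import partial


def make_collate_nested(fill_value):
    # Nested function: pickle stores it by qualified name, which points
    # inside make_collate_nested and cannot be resolved at load time.
    def collate(batch):
        return batch or [fill_value]
    return collate


def _collate(batch, fill_value):
    # Module-level function: importable by name, hence picklable.
    return batch or [fill_value]


def make_collate_flat(fill_value):
    return partial(_collate, fill_value=fill_value)


try:
    pickle.dumps(make_collate_nested(0))
    nested_picklable = True
except (pickle.PicklingError, AttributeError):
    nested_picklable = False

flat = pickle.loads(pickle.dumps(make_collate_flat(0)))
print(nested_picklable, flat([]))  # False [0]
```

Multi-worker `DataLoader` hits exactly this path on platforms where `multiprocessing` spawns workers via pickle, which is why the refactor matters.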
1 parent fc3fd6b commit e8bc932

File tree

13 files changed: +277 −177 lines changed


.circleci/config.yml

Lines changed: 25 additions & 35 deletions
```diff
@@ -6,16 +6,16 @@ version: 2.1
 
 commands:
 
-  py_3_7_setup:
-    description: "Install and switch to Python 3.7.5; also install pip and pytest."
+  py_3_9_setup:
+    description: "Install and switch to Python 3.9; also install pip and pytest."
     steps:
       - run:
-          name: "Setup Python v3.7.5 environment"
+          name: "Setup Python v3.9 environment"
           command: |
             cd /opt/circleci/.pyenv && git pull && cd -
-            pyenv install -s 3.7.5
-            pyenv global 3.7.5
-            pyenv local 3.7.5
+            pyenv install -s 3.9.4
+            pyenv global 3.9.4
+            pyenv local 3.9.4
             pyenv versions
             echo "In venv: $(pyenv local) - $(python -V), $(pip -V)"
             sudo "$(which python)" -m pip install --upgrade pip
@@ -283,24 +283,16 @@ commands:
 
 jobs:
 
-  lint_py37_torch_release:
+  lint_py39_torch_release:
     docker:
-      - image: cimg/python:3.7.5
+      - image: cimg/python:3.9
     steps:
       - checkout
       - pip_dev_install
       - lint_flake8
       - lint_black
       - isort
 
-  unittest_py37_torch_release:
-    docker:
-      - image: cimg/python:3.7.5
-    steps:
-      - checkout
-      - pip_dev_install
-      - unit_tests
-
   unittest_py38_torch_release:
     docker:
       - image: cimg/python:3.8
@@ -328,34 +320,34 @@ jobs:
 
   prv_accountant_values:
     docker:
-      - image: cimg/python:3.7.5
+      - image: cimg/python:3.9
     steps:
       - checkout
-      - py_3_7_setup
+      - py_3_9_setup
       - pip_dev_install
       - run:
           name: "Unit test prv accountant"
           no_output_timeout: 1h
           command: |
             python -m unittest opacus.tests.prv_accountant
 
-  integrationtest_py37_torch_release_cpu:
+  integrationtest_py39_torch_release_cpu:
     docker:
-      - image: cimg/python:3.7.5
+      - image: cimg/python:3.9
     steps:
       - checkout
-      - py_3_7_setup
+      - py_3_9_setup
       - pip_dev_install
       - mnist_integration_test:
           device: "cpu"
 
-  integrationtest_py37_torch_release_cuda:
+  integrationtest_py39_torch_release_cuda:
     machine:
       resource_class: gpu.nvidia.small.multi
       image: ubuntu-2004-cuda-11.4:202110-01
     steps:
       - checkout
-      - py_3_7_setup
+      - py_3_9_setup
       - pip_dev_install
       - run_nvidia_smi
       - mnist_integration_test:
@@ -369,13 +361,13 @@ jobs:
       - dcgan_integration_test:
           device: "cuda"
 
-  micro_benchmarks_py37_torch_release_cuda:
+  micro_benchmarks_py39_torch_release_cuda:
     machine:
       resource_class: gpu.nvidia.small.multi
       image: ubuntu-2004-cuda-11.4:202110-01
     steps:
       - checkout
-      - py_3_7_setup
+      - py_3_9_setup
       - pip_dev_install
       - run_nvidia_smi
       - benchmark_layers_integration_test:
@@ -459,7 +451,7 @@ jobs:
       image: ubuntu-2004-cuda-11.4:202110-01
     steps:
       - checkout
-      - py_3_7_setup
+      - py_3_9_setup
       - pip_dev_install
       - run_nvidia_smi
       - command_unit_tests_multi_gpu
@@ -493,9 +485,7 @@ workflows:
         not:
           equal: [ scheduled_pipeline, << pipeline.trigger_source >> ]
     jobs:
-      - lint_py37_torch_release:
-          filters: *exclude_ghpages
-      - unittest_py37_torch_release:
+      - lint_py39_torch_release:
           filters: *exclude_ghpages
       - unittest_py38_torch_release:
           filters: *exclude_ghpages
@@ -505,9 +495,9 @@ workflows:
           filters: *exclude_ghpages
       - unittest_multi_gpu:
           filters: *exclude_ghpages
-      - integrationtest_py37_torch_release_cpu:
+      - integrationtest_py39_torch_release_cpu:
           filters: *exclude_ghpages
-      - integrationtest_py37_torch_release_cuda:
+      - integrationtest_py39_torch_release_cuda:
           filters: *exclude_ghpages
      - prv_accountant_values:
          filters: *exclude_ghpages
@@ -518,12 +508,12 @@ workflows:
     jobs:
       - unittest_py39_torch_nightly:
           filters: *exclude_ghpages
-      - integrationtest_py37_torch_release_cpu:
+      - integrationtest_py39_torch_release_cpu:
           filters: *exclude_ghpages
-      - integrationtest_py37_torch_release_cuda:
+      - integrationtest_py39_torch_release_cuda:
           filters: *exclude_ghpages
-      - lint_py37_torch_release:
+      - lint_py39_torch_release:
           filters: *exclude_ghpages
-      - micro_benchmarks_py37_torch_release_cuda:
+      - micro_benchmarks_py39_torch_release_cuda:
           filters: *exclude_ghpages
```

.circleci/flake8_config.ini

Lines changed: 105 additions & 52 deletions
```diff
@@ -5,60 +5,113 @@ max-line-length = 80
 ignore =
     # Black conflicts and overlaps.
     # Found in https://github.com/psf/black/issues/429
-    B950, # Line too long. (Use `arc lint`'s LINEWRAP instead)
-    E111, # Indentation is not a multiple of four.
-    E115, # Expected an indented block (comment).
-    E117, # Over-indented.
-    E121, # Continuation line under-indented for hanging indent.
-    E122, # Continuation line missing indentation or outdented.
-    E123, # Closing bracket does not match indentation of opening bracket's line.
-    E124, # Closing bracket does not match visual indentation.
-    E125, # Continuation line with same indent as next logical line.
-    E126, # Continuation line over-indented for hanging indent.
-    E127, # Continuation line over-indented for visual indent.
-    E128, # Continuation line under-indented for visual indent.
-    E129, # Visually indented line with same indent as next logical line.
-    E201, # Whitespace after '('.
-    E202, # Whitespace before ')'.
-    E203, # Whitespace before ':'.
-    E221, # Multiple spaces before operator.
-    E222, # Multiple spaces after operator.
-    E225, # Missing whitespace around operator.
-    E226, # Missing whitespace around arithmetic operator.
-    E227, # Missing whitespace around bitwise or shift operator.
-    E231, # Missing whitespace after ',', ';', or ':'.
-    E241, # Multiple spaces after ','.
-    E251, # Unexpected spaces around keyword / parameter equals.
-    E261, # At least two spaces before inline comment.
-    E262, # Inline comment should start with '# '.
-    E265, # Block comment should start with '# '.
-    E271, # Multiple spaces after keyword.
-    E272, # Multiple spaces before keyword.
-    E301, # Expected 1 blank line, found 0.
-    E302, # Expected 2 blank lines, found 0.
-    E303, # Too many blank lines (3).
-    E305, # Expected 2 blank lines after end of function or class.
-    E306, # Expected 1 blank line before a nested definition.
-    E501, # Line too long (82 > 79 characters).
-    E502, # The backslash is redundant between brackets.
-    E701, # Multiple statements on one line (colon).
-    E702, # Multiple statements on one line (semicolon).
-    E703, # Statement ends with a semicolon.
-    E704, # Multiple statements on one line (def).
-    W291, # Trailing whitespace.
-    W292, # No newline at end of file.
-    W293, # Blank line contains whitespace.
-    W391, # Blank line at end of file.
+    # B950: Line too long. (Use `arc lint`'s LINEWRAP instead)
+    B950,
+    # E111: Indentation is not a multiple of four.
+    E111,
+    # E115: Expected an indented block (comment).
+    E115,
+    # E117: Over-indented.
+    E117,
+    # E121: Continuation line under-indented for hanging indent.
+    E121,
+    # E122: Continuation line missing indentation or outdented.
+    E122,
+    # E123: Closing bracket does not match indentation of opening bracket's line.
+    E123,
+    # E124: Closing bracket does not match visual indentation.
+    E124,
+    # E125: Continuation line with same indent as next logical line.
+    E125,
+    # E126: Continuation line over-indented for hanging indent.
+    E126,
+    # E127: Continuation line over-indented for visual indent.
+    E127,
+    # E128: Continuation line under-indented for visual indent.
+    E128,
+    # E129: Visually indented line with same indent as next logical line.
+    E129,
+    # E201: Whitespace after '('.
+    E201,
+    # E202: Whitespace before ')'.
+    E202,
+    # E203: Whitespace before ':'.
+    E203,
+    # E221: Multiple spaces before operator.
+    E221,
+    # E222: Multiple spaces after operator.
+    E222,
+    # E225: Missing whitespace around operator.
+    E225,
+    # E226: Missing whitespace around arithmetic operator.
+    E226,
+    # E227: Missing whitespace around bitwise or shift operator.
+    E227,
+    # E231: Missing whitespace after ',', ';', or ':'.
+    E231,
+    # E241: Multiple spaces after ','.
+    E241,
+    # E251: Unexpected spaces around keyword / parameter equals.
+    E251,
+    # E261: At least two spaces before inline comment.
+    E261,
+    # E262: Inline comment should start with '# '.
+    E262,
+    # E265: Block comment should start with '# '.
+    E265,
+    # E271: Multiple spaces after keyword.
+    E271,
+    # E272: Multiple spaces before keyword.
+    E272,
+    # E301: Expected 1 blank line, found 0.
+    E301,
+    # E302: Expected 2 blank lines, found 0.
+    E302,
+    # E303: Too many blank lines (3).
+    E303,
+    # E305: Expected 2 blank lines after end of function or class.
+    E305,
+    # E306: Expected 1 blank line before a nested definition.
+    E306,
+    # E501: Line too long (82 > 79 characters).
+    E501,
+    # E502: The backslash is redundant between brackets.
+    E502,
+    # E701: Multiple statements on one line (colon).
+    E701,
+    # E702: Multiple statements on one line (semicolon).
+    E702,
+    # E703: Statement ends with a semicolon.
+    E703,
+    # E704: Multiple statements on one line (def).
+    E704,
+    # W291: Trailing whitespace.
+    W291,
+    # W292: No newline at end of file.
+    W292,
+    # W293: Blank line contains whitespace.
+    W293,
+    # W391: Blank line at end of file.
+    W391,
 
     # Too opinionated.
-    E265, # Block comment should start with '# '.
-    E266, # Too many leading '#' for block comment.
-    E402, # Module level import not at top of file.
-    E722, # Do not use bare except, specify exception instead. (Duplicate of B001)
-    F811, # Redefinition of unused name from line n.
-    P207, # (Duplicate of B003)
-    P208, # (Duplicate of C403)
-    W503 # Line break occurred before a binary operator.
+    # E265: Block comment should start with '# '.
+    E265,
+    # E266: Too many leading '#' for block comment.
+    E266,
+    # E402: Module level import not at top of file.
+    E402,
+    # E722: Do not use bare except, specify exception instead. (Duplicate of B001)
+    E722,
+    # F811: Redefinition of unused name from line n.
+    F811,
+    # P207: (Duplicate of B003)
+    P207,
+    # P208: (Duplicate of C403)
+    P208,
+    # W503: Line break occurred before a binary operator.
+    W503
 
 exclude =
     .hg,
     __pycache__,
```

benchmarks/tests/test_layers.py

Lines changed: 7 additions & 22 deletions
```diff
@@ -15,36 +15,21 @@
 import copy
 import logging
 import random
-from typing import Any, Dict, List, Set, Tuple, Type
+from typing import Any, Dict, List, Tuple, Type
 
 import pytest
 import torch
 import torch.nn as nn
 from helpers import skipifnocuda
 from opacus.grad_sample import GradSampleModule
-from opacus.grad_sample.gsm_exp_weights import (
-    API_CUTOFF_VERSION,
-    GradSampleModuleExpandedWeights,
-)
+from opacus.grad_sample.gsm_exp_weights import GradSampleModuleExpandedWeights
 from opacus.layers import DPGRU, DPLSTM, DPRNN, DPMultiheadAttention
 
 from benchmarks.layers import LayerFactory
 from benchmarks.utils import reset_peak_memory_stats
 
 
-def _gsm_modes() -> Set[str]:
-    ret = ["baseline", "hooks"]
-    try:
-        import functorch  # noqa: F401, Checking for import errors
-
-        ret += ["functorch"]
-    except ImportError:
-        pass
-
-    if torch.__version__ >= API_CUTOFF_VERSION:
-        ret += ["ew"]
-    return set(ret)
-
+GSM_MODES = {"baseline", "hooks", "ew", "functorch"}
 
 PARAMETERS = [
     (
@@ -121,7 +106,7 @@ def test_layer_modules(
     """
 
     for layer_name, module, gsm_mode_blocklist in layer_list:
-        for gsm_mode in _gsm_modes() - set(gsm_mode_blocklist):
+        for gsm_mode in GSM_MODES - set(gsm_mode_blocklist):
             if gsm_mode in gsm_mode_blocklist:
                 continue
 
@@ -161,7 +146,7 @@ def test_to_device(
     assert reset_peak_memory_stats(cuda).cur_mem == 0
 
     for layer_name, module, gsm_mode_blocklist in layer_list:
-        for gsm_mode in _gsm_modes() - set(gsm_mode_blocklist):
+        for gsm_mode in GSM_MODES - set(gsm_mode_blocklist):
             layer = LayerFactory.create(
                 layer_name=layer_name,
                 batch_size=64,
@@ -207,7 +192,7 @@ def test_layer_outputs(
     }
 
     for layer_name, module, gsm_mode_blocklist in layer_list:
-        for gsm_mode in _gsm_modes() - set(gsm_mode_blocklist):
+        for gsm_mode in GSM_MODES - set(gsm_mode_blocklist):
             for random_seed in (random_seed_a, random_seed_b):
                 logging.error(f"{gsm_mode}, {layer_name}")
                 layer = LayerFactory.create(
@@ -246,7 +231,7 @@ def test_forward_backward(
         layer_config: config for instantiating the layers in layer_list
     """
     for layer_name, module, gsm_mode_blocklist in layer_list:
-        for gsm_mode in _gsm_modes() - set(gsm_mode_blocklist):
+        for gsm_mode in GSM_MODES - set(gsm_mode_blocklist):
             layer = LayerFactory.create(
                 layer_name=layer_name,
                 batch_size=64,
```
