[ENH] add feature scaling support for EncoderDecoderDataModule #1983
Open
PranavBhatP wants to merge 48 commits into sktime:main from PranavBhatP:feature-scaling
+715 −18
Commits (48, all by PranavBhatP)
49c3ab4  add feature scaling to d2
1942383  Merge branch 'main' into feature-scaling
403e0f2  fix incorrect orig_idx index
5661be4  fix incorrect attribute
38cefe4  handle unfitted scalers
f242290  change accelerator to cpu in v2 notebook cell 10
54da1c4  use torch.from_numpy instead of torch.tensor for numpy to torch conve…
c145e9b  revert accelerator mode to auto from cpu for example notebook trainin…
ec4cf03  potential fix for issue in training of v2
18f2b2a  replace MAE() with nn.L1Loss() to fix notebook test failures
5c99959  Merge branch 'main' into feature-scaling
85ba7cb  Merge branch 'main' into feature-scaling
d96aed5  Merge branch 'main' into feature-scaling
fd8411a  revert notebook state
0830090  Merge branch 'main' into feature-scaling
ff42a1b  some changes to data module - incomplete
f86f9a5  fix scaling and target norm - working
728cfad  remove target_scale and add target_normalizer instead
091d0f8  restore original notebook
6d38331  revert breaking change on target scale
4ff3444  Merge branch 'main' into feature-scaling
ca5cb97  separate concerns for feature scaling and target normalizers inside _…
77dc43f  fix multi target handling during normalization
757fe83  fix data module output format
bceb0e3  add tests for feature scaling and norm
c07a343  remove unnecessary dataset param from internal D2 dataset class
44e0545  add encoder normalizer support for data module
96669f1  Merge branch 'main' into feature-scaling
e927ec2  remove contentious line for testing normalizer behavior
b45279a  add validation for preprocessing in test and predict dataset
67aba1b  skip check for fitting before preprocessing
f10fbaf  Merge branch 'main' into feature-scaling
94ca96a  Merge branch 'main' into feature-scaling
93159fb  add loading and saving of normalizer and scaler metadata for use acro…
3665577  improve test suite for normalizer and scalers
3cb479b  save and load scaler in base_pkg class
f0969d7  fix saving state of scalers when save_ckpt is false in model_pkg fit …
9355c01  avoid call to save and fit scalers in data modules which do not suppo…
67b1f59  Merge branch 'main' into feature-scaling
adc46c2  Merge branch 'main' into feature-scaling
ce5f904  Merge branch 'main' of https://www.github.com/PranavBhatP/pytorch-for…
5c344f8  Merge branch 'feature-scaling' of https://www.github.com/PranavBhatP/…
8324d14  revert change in logic for handling datamodules reused for the test/p…
32256d1  change base pkg code to handle saving and loading, while dm handles v…
5028cee  Merge branch 'main' into feature-scaling
f934b28  move persistence logic completely into base package
19dd617  Merge branch 'main' into feature-scaling
d7f1af5  raise warning for data modules not supporting feature scaling
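One detail in the commit log, the switch from torch.tensor to torch.from_numpy for NumPy-to-torch conversion, is worth a quick illustration. The snippet below is mine, not code from the PR: torch.from_numpy creates a tensor that shares memory with the source array and so avoids a copy, whereas torch.tensor always copies the data.

```python
import numpy as np
import torch

arr = np.zeros(3, dtype=np.float32)

shared = torch.from_numpy(arr)  # no copy: the tensor views the numpy buffer
copied = torch.tensor(arr)      # always copies the data

arr[0] = 1.0  # mutate the numpy array in place

print(shared[0].item())  # 1.0 — the shared tensor sees the change
print(copied[0].item())  # 0.0 — the copy does not
```

For large windows of time-series features, avoiding that copy on every batch is the usual motivation for such a change.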
Conversations
Review comment:
Should we expose this param? If the user is checkpointing the model, they will always save the scalers too, no? Are there any cases where the user uses only the model and not the scalers?
Reply (PranavBhatP):
I am not 100% sure whether this param should be exposed. My thinking was that a user might not want to save their scalers after training, either due to memory constraints or simply because they know they will not be testing or reusing the scalers in a new session, and therefore would not need any persistence.
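To make the trade-off in this thread concrete, here is a hypothetical sketch; the save_scalers flag and the fit_and_maybe_persist helper are illustrative names I made up, not the PR's actual API. It fits a feature scaler and pickles it next to the checkpoint only when asked to:

```python
import pickle
from pathlib import Path

from sklearn.preprocessing import StandardScaler


def fit_and_maybe_persist(X, ckpt_dir, save_scalers=True):
    """Fit a feature scaler and optionally pickle it alongside the checkpoint.

    save_scalers is the kind of flag discussed above: skipping persistence
    avoids extra storage when the scaler will never be reloaded in a new
    session, at the cost of making the checkpoint unusable for inference on
    unscaled data later.
    """
    scaler = StandardScaler().fit(X)
    if save_scalers:
        path = Path(ckpt_dir) / "scalers.pkl"
        with open(path, "wb") as f:
            pickle.dump(scaler, f)
    return scaler
```

With save_scalers=False the fitted model remains usable in-process, but a fresh session cannot reconstruct the exact transformation, which is the risk the reviewer is pointing at.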