Inquiry Regarding Downsampling Strategy Used in Tutorial-1 of Magnet Challenge 2 #10
Replies: 1 comment
Hi Piyush,

You are correct: the training data is heavily downsampled from the original full-length training sequences. You can find the code that performs this preprocessing in Step_1_SaveTrain.m in the Challenge 2 GitHub (or the MagNetX GitHub). For each training sample, our model requires the past 80-step memory of B(t) and H(t), together with the next-step value B(t+1), and it outputs H(t+1). If we trained on the full sequence length, which is on the order of 10,000 steps for some sequences, we would have to store that windowed data roughly 10,000 times per sequence, which would take up a large amount of space. We therefore select far fewer points per sequence to train on, downsampling to 50 samples per sequence in this case.

Best regards,
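To make the storage argument concrete, here is a minimal Python sketch of the kind of windowed downsampling described above. It is not the actual Step_1_SaveTrain.m code; the function name `downsample_sequence` and the evenly-spaced anchor selection are illustrative assumptions, with only the 80-step memory and 50-points-per-sequence figures taken from the reply:

```python
import numpy as np

def downsample_sequence(B, H, memory=80, points_per_seq=50):
    """Build a reduced training set from one measured B/H sequence.

    Each sample holds the past `memory` steps of B and H plus the
    next-step flux value B(t+1); the target is H(t+1).  Instead of
    emitting one sample per time step (~10,000 for long sequences),
    only `points_per_seq` anchor points are kept, shrinking storage
    by roughly N / points_per_seq.
    """
    N = len(B)
    # Valid anchors t must leave room for the 80-step history and t+1.
    valid = np.arange(memory, N - 1)
    idx = np.linspace(0, len(valid) - 1, points_per_seq).astype(int)
    anchors = valid[idx]  # evenly spaced anchor points (assumption)

    # History window: (points_per_seq, memory, 2) with B and H channels.
    X_hist = np.stack([
        np.column_stack((B[t - memory:t], H[t - memory:t]))
        for t in anchors
    ])
    B_next = B[anchors + 1]  # next-step input feature
    H_next = H[anchors + 1]  # next-step target
    return X_hist, B_next, H_next
```

With this scheme, a 10,000-step sequence yields 50 samples of shape (80, 2) plus two scalars each, rather than ~10,000 such windows, which is the storage saving the reply refers to.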
Dear Princeton Team,
We have been analyzing the data provided in Tutorial-1 (specifically the 3C90_Training_Data.h5 files) for MagNet Challenge 2 and observed that the dataset appears to be both preprocessed and downsampled.
To better correlate the downsampled data with the original test data, and to understand how the essential features are preserved during training of the neural network, it would be helpful if you could provide us with more information about the downsampling technique used in Tutorial-1. This would give us a better starting point for interpreting the training data and comparing it against the raw measurements.
Best regards,
Piyush Chauhan
Georgia Institute of Technology