Replies: 1 comment
-
I've just watched video 162 (https://www.learnpytorch.io/04_pytorch_custom_datasets/#93-construct-and-train-model-1), where Model 1 is trained. It uses the same TinyVGG architecture, but with a different set of transforms that includes data augmentation. I think a nice experiment to include (or at least suggest) in the course would be training Model 1 on the original images and the transformed images combined into one augmented set.
-
I've been experimenting with transforms, and it seems that when we pass a picture through a transform like RandomPerspective or Resize, it outputs only one picture; the original is lost.
In data augmentation, we want to train our model on the original image plus the extra transformed ones. Right now, the transform pipeline only outputs the transformed data, i.e. we never train the model on the original.
Are we losing important information this way?
Shouldn't we find a way to adapt the transform pipeline so that it outputs all the data?