Hi, I would like to know where the normalization parameters in this transform come from. I don't get the same values when I compute the mean and std over the images in the 'train+unlabeled' split.
I tried the method used to derive these values for ImageNet in torchvision, described here, on various subsets of train+unlabeled; the std gets closer, but the mean is still off.
I am resizing to 128 first and then center-cropping to 96, as you appear to do for the validation images.
I will note that I am using TensorFlow, so the resizing method might be having an effect. However, I also tried PyTorch and still could not reproduce these values.
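For concreteness, the statistics I am computing have roughly this shape (a minimal NumPy sketch; the resize to 128 is assumed to happen upstream, and the batch below is a random stand-in, not the actual dataset):

```python
import numpy as np

def center_crop(img, size):
    """Crop the central size x size region from an HWC image."""
    h, w = img.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

def channel_stats(images, crop=96):
    """Per-channel mean and std over a batch of HWC uint8 images,
    after center-cropping and scaling pixel values to [0, 1]."""
    crops = np.stack([center_crop(im, crop) for im in images])
    x = crops.astype(np.float64) / 255.0
    mean = x.mean(axis=(0, 1, 2))  # one value per channel
    std = x.std(axis=(0, 1, 2))
    return mean, std

# Hypothetical stand-in for images already resized to 128x128
rng = np.random.default_rng(0)
batch = rng.integers(0, 256, size=(8, 128, 128, 3), dtype=np.uint8)
mean, std = channel_stats(batch)
print(mean, std)
```

If you computed the std differently (e.g. averaging per-image stds instead of taking the std over all pixels at once), that alone could explain a discrepancy.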
Thanks