For certain datasets, e.g. the Yearly/Quarterly/Monthly M4 datasets, the quantity history_size is set to 1.5, which makes window_sampling_limit equal to 1.5 times the horizon length. Yet the input size can be up to 7 times the horizon length, meaning that during the training phase the model mostly observes padding. Is this an issue, possibly leading to a degradation in performance on these datasets?
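For concreteness, a minimal sketch of the arithmetic behind this concern is given below. It assumes a sampler that caps the usable history at window_sampling_limit = history_size * horizon and left-pads each training window to input_size; the variable names follow the question, but the sampling logic is a simplification for illustration, not the exact code in the repository.

```python
import numpy as np

horizon = 6                      # e.g. the M4 Yearly forecast horizon
history_size = 1.5               # value reported for Yearly/Quarterly/Monthly
input_size = 7 * horizon         # input window up to 7x the horizon

# only this many trailing observations are eligible for sampling
window_sampling_limit = int(history_size * horizon)   # = 9

def sample_training_window(series: np.ndarray) -> np.ndarray:
    """Draw one training input window, left-padded with zeros to input_size."""
    usable = series[-window_sampling_limit:]
    window = np.zeros(input_size)
    window[-len(usable):] = usable
    return window

series = np.random.randn(100)
window = sample_training_window(series)
observed = np.count_nonzero(window)
print(f"observed points per window: {observed}/{input_size} "
      f"({observed / input_size:.0%} of the input is real data)")
```

Under these assumptions, at most window_sampling_limit / input_size = 1.5 / 7 ≈ 21% of each training window carries real observations, and the remaining ~79% is padding.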