Thanks for the nice hands-on material on diffusion models (DM).
Actually, I'm really curious why the self-attention layers don't use a positional encoding (as used in ViT, the vanilla Transformer, etc.).
Is there a reason for this, and can we be sure that self-attention in the DDPM U-Net still preserves pixel-wise positional information?
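To make sure I'm pointing at the right thing, here is a minimal sketch of the kind of attention block I mean (my own paraphrase of the usual DDPM-style spatial attention, not the repo's exact code): the feature map is flattened so each pixel becomes a token, and attention is computed with no positional encoding added anywhere.

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Sketch of a DDPM-style attention block over a (B, C, H, W) feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.norm = nn.GroupNorm(32, channels)
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv(self.norm(x)).chunk(3, dim=1)
        # Flatten spatial dims: each pixel becomes one token of size c.
        q = q.reshape(b, c, h * w).permute(0, 2, 1)   # (b, hw, c)
        k = k.reshape(b, c, h * w)                    # (b, c, hw)
        v = v.reshape(b, c, h * w).permute(0, 2, 1)   # (b, hw, c)
        # Note: no positional encoding is added to q/k/v here,
        # unlike in ViT or the vanilla Transformer.
        attn = torch.softmax(q @ k / (c ** 0.5), dim=-1)  # (b, hw, hw)
        out = (attn @ v).permute(0, 2, 1).reshape(b, c, h, w)
        return x + self.proj(out)
```

In a block like this, the attention itself seems permutation-invariant over the hw tokens, which is what makes me wonder how the pixel-wise position information is kept.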