 
 ## What's New
 
+### Jan 25, 2021
+* Add ResNetV2 Big Transfer (BiT) models w/ ImageNet-1k and 21k weights from https://github.com/google-research/big_transfer
+* Add official R50+ViT-B/16 hybrid models + weights from https://github.com/google-research/vision_transformer
+* ImageNet-21k ViT weights are added w/ model defs and representation layer (pre-logits) support
+  * NOTE: the ImageNet-21k classifier heads were zeroed in the original weights; they are only useful for transfer learning
+* Add model defs and weights for DeiT Vision Transformer models from https://github.com/facebookresearch/deit
+* Refactor dataset classes into ImageDataset/IterableImageDataset + dataset-specific parser classes
+* Add Tensorflow-Datasets (TFDS) wrapper to allow use of TFDS image classification sets with the train script
+  * Ex: `train.py /data/tfds --dataset tfds/oxford_iiit_pet --val-split test --model resnet50 -b 256 --amp --num-classes 37 --opt adamw --lr 3e-4 --weight-decay .001 --pretrained -j 2`
+* Add improved .tar dataset parser that reads images from a .tar, a folder of .tar files, or a .tar within a .tar
+  * Run validation on full ImageNet-21k directly from tar w/ BiT model: `validate.py /data/fall11_whole.tar --model resnetv2_50x1_bitm_in21k --amp`
+* Models in this update should be stable, w/ the possible exception of ViT/BiT; there is a possibility of some regressions in the train/val scripts and dataset handling (a Python usage sketch for the new models and dataset classes follows below)
+
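The new weights and the refactored dataset classes can also be exercised directly from Python. Below is a minimal sketch, not a definitive recipe: it assumes the model names shown (e.g. `vit_base_patch16_224_in21k`) are registered in the installed timm version and that `ImageDataset` accepts a folder or .tar path as its root; confirm with `timm.list_models()` and the `timm.data` source for your install.

```python
# Minimal sketch of using the new weights and dataset classes from Python.
# Model names and the ImageDataset root argument are assumptions -- check
# timm.list_models() and timm.data for the version you have installed.
import torch
import timm
from timm.data import ImageDataset

# BiT ResNetV2 w/ ImageNet-21k weights (same model as the tar validation example above)
bit = timm.create_model('resnetv2_50x1_bitm_in21k', pretrained=True)

# ViT w/ ImageNet-21k weights; num_classes=0 drops the (zeroed) 21k classifier
# head and returns the pre-logits representation instead
vit = timm.create_model('vit_base_patch16_224_in21k', pretrained=True, num_classes=0)
with torch.no_grad():
    features = vit(torch.randn(1, 3, 224, 224))
print(features.shape)  # representation vector, not 21k logits

# List the newly added DeiT model defs
print(timm.list_models('deit*'))

# ImageDataset resolves a parser from its root: a class folder, a .tar file,
# or a folder of .tar files (as used by the ImageNet-21k validation example above)
ds = ImageDataset('/data/fall11_whole.tar')
print(len(ds))
```

The same model names plug directly into `train.py` / `validate.py` via `--model`, as in the command-line examples above.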
 ### Jan 3, 2021
 * Add SE-ResNet-152D weights
   * 256x256 val, 0.94 crop top-1 - 83.75
@@ -130,7 +143,9 @@ All model architecture families include variants with pretrained weights. The ar
 
 A full version of the list below with source links can be found in the [documentation](https://rwightman.github.io/pytorch-image-models/models/).
 
+* Big Transfer ResNetV2 (BiT) - https://arxiv.org/abs/1912.11370
 * CspNet (Cross-Stage Partial Networks) - https://arxiv.org/abs/1911.11929
+* DeiT (Vision Transformer) - https://arxiv.org/abs/2012.12877
 * DenseNet - https://arxiv.org/abs/1608.06993
 * DLA - https://arxiv.org/abs/1707.06484
 * DPN (Dual-Path Network) - https://arxiv.org/abs/1707.01629
@@ -242,6 +257,10 @@ One of the greatest assets of PyTorch is the community and their contributions.
 * Albumentations - https://github.com/albumentations-team/albumentations
 * Kornia - https://github.com/kornia/kornia
 
+### Knowledge Distillation
+* RepDistiller - https://github.com/HobbitLong/RepDistiller
+* torchdistill - https://github.com/yoshitomo-matsubara/torchdistill
+
 ### Metric Learning
 * PyTorch Metric Learning - https://github.com/KevinMusgrave/pytorch-metric-learning
 