- Added support for loading PyTorch models from the HF hub via the URL prefix `[hf-pytorch]`.
- Added support for LoRA training.
- Added VGG models.
- Fixed bugs in SAM model inference.
- Fixed small bugs in SAM.
- Added Segment Anything (SAM) models.
- Added post-pooling features for ConvNeXt in feature dictionary.
- Exposed attention map in ViT models.
- `tfimm` now supports Python 3.10.
- Added EfficientNet and MobileNet-V2 models.
- Added tiny and small ConvNeXt models.
- Preprocessing works for an arbitrary number of `in_channels`.
- Removed temporary version restriction on `libclang`.
- Added PiT models.
- Simplified `pretrained` parameter in `create_function`.
- Added model-specific cache.
- Added adaptability of `in_channels`.
- Added ConvNeXt models
- Added PoolFormer models
- Improved LR schedulers for training framework
- Improvements to the training framework.
- Small changes to the training framework.
- Added hybrid Vision Transformers (`vit_hybrid`).
- Added `resnetv2` module, which includes Big Transfer (BiT) resnets.
- Added Pyramid Vision Transformer models.
- Added first version of the training framework (`tfimm/train`). Still work in progress; possibly buggy.
- Added option for models to return intermediate features via the `return_features` parameter.
- Added `DropPath` regularization (stochastic depth) to the `vit` module.
- Added ability to load saved models from a local cache.
- Fixed bug with dropout in Classifier layer
- Added CaiT models
- Added MLP-Mixer, ResMLP and gMLP models
- Added ResNet models
- Fixed bug with Swin Transformer and mixed precision
- Reduced TF version requirement from 2.5 to 2.4.
- Added ConvMixer models
- Added Swin Transformer models
- Refactored code in `resnet.py`.
- Added `create_preprocessing` function to generate model-specific preprocessing.
- Added profiling results (max batch size and throughput for inference and backpropagation) for K80 and V100 GPUs (`float32` and mixed precision).
- Fixed bug with ViT models and mixed precision.
- First release.