- add `at_end` feature to `SaveModelCallback` (#3296), thanks to @tmabraham
- fix fp16 test (#3284), thanks to @tmabraham
- Import `download_url` from fastdownload
- `config.yml` has been renamed to `config.ini`, and is now in `ConfigParser` format instead of YAML
- The `_path` suffixes in `config.ini` have been removed
- Training with `learn.to_fp16()` fails with PyTorch 1.9 / CUDA 11.4 (#3438)
- pandas 1.3.0 breaks `add_elapsed_times` (#3431)
- Latest Pillow v8.3.0 breaks conversion of Image to Tensor (#3416)
- QRNN module removed, due to incompatibility with PyTorch 1.9 and the limited adoption of QRNN in the deep learning community. QRNN was our only module that wasn't pure Python, so with this change fastai is now a pure Python package.
- Support for PyTorch 1.9
- Improved LR Suggestions (#3377), thanks to @muellerzr
- SaveModelCallback every nth epoch (#3375), thanks to @KeremTurgutlu (see the sketch after this list)
- Send `self.loss_func` to device if it is an instance of `nn.Module` (#3395), thanks to @arampacha
- Batch support for more than one image (#3339)
- Changeable `tfmdlists` for `TransformBlock`, `Datasets`, `DataBlock` (#3327)
- convert `TensorBBox` to `TensorBase` during compare (#3388), thanks to @kevinbird15
- Check if normalize exists on `_add_norm` (#3371), thanks to @renato145
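The two `SaveModelCallback` changes noted above and below (`every_epoch` saving from #3375, and the `at_end` flag from #3296) might be used as in this minimal sketch; the argument names are assumptions drawn from the PR titles, so check the callback's signature before relying on them:

```python
from fastai.callback.tracker import SaveModelCallback

# Sketch (parameter names assumed from the PR titles):
# save a checkpoint every 5th epoch...
cb_every = SaveModelCallback(fname='model', every_epoch=5)
# ...or save once when training finishes, instead of on metric improvement.
cb_end = SaveModelCallback(fname='model', at_end=True)
# learn.fit_one_cycle(20, cbs=cb_every)
```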
- Add support for pytorch 1.8 (#3349)
- Add support for spacy3 (#3348)
- Add support for Windows. Big thanks to Microsoft for many contributions to get this working
- Timedistributed layer and Image Sequence Tutorial (#3124), thanks to @tcapelle
- Add interactive run logging to AzureMLCallback (#3341), thanks to @yijinlee
- Batch support for more than one image (#3339)
- Have interp use ds_idx, add tests (#3332), thanks to @muellerzr
- Automatically have fastai determine the right device, even with torch DataLoaders (#3330), thanks to @muellerzr
- Add `at_end` feature to `SaveModelCallback` (#3296), thanks to @tmabraham
- Improve inplace params in Tabular's `new` and allow for `new` and `test_dl` to be in place (#3292), thanks to @muellerzr
- Update VSCode & Codespaces dev container (#3280), thanks to @bamurtaugh
- Add `max_scale` param to `RandomResizedCrop(GPU)` (#3252), thanks to @kai-tub
- Increase testing granularity for speedup (#3242), thanks to @ddobrinskiy
- Make TTA turn shuffle and drop_last off when using ds_idx (#3347), thanks to @muellerzr
- Add order to TrackerCallback derived classes (#3346), thanks to @muellerzr
- Prevent schedule from crashing close to the end of training (#3335), thanks to @Lewington-pitsos
- Fix ability to use raw pytorch DataLoaders (#3328), thanks to @hamelsmu
- Fix PixelShuffle_icnr weight (#3322), thanks to @pratX
- Creation of new DataLoader in Learner.get_preds has wrong keyword (#3316), thanks to @tcapelle
- Correct layers order in tabular learner (#3314), thanks to @gradientsky
- Fix vmin parameter default (#3305), thanks to @tcapelle
- Ensure call to `one_batch` places data on the right device (#3298), thanks to @tcapelle
- Fix CutMix augmentation (#3259), thanks to @MrRobot2211
- Fix custom tokenizers for DataLoaders (#3256), thanks to @iskode
  - fix error setting `tok_tfm` parameter in `TextDataloaders.from_folder`
- Fix lighting augmentation (#3255), thanks to @kai-tub
- Fix CUDA variable serialization (#3253), thanks to @mszhanyi
- change batch tfms to have the correct dimensionality (#3251), thanks to @trdvangraft
- Ensure add_datepart adds elapsed as numeric column (#3230), thanks to @aberres
- fix `OptimWrapper` to work with `param_groups` (#3241), thanks to @tmabraham
  - `OptimWrapper` now has a different constructor signature, which makes it easier to wrap PyTorch optimizers (see the sketch below)
- Support discriminative learning with OptimWrapper (#2829)
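A minimal sketch of the pattern the new constructor is meant to enable, assuming the post-#3241 style where a raw PyTorch optimizer class is supplied via `partial`:

```python
from functools import partial
import torch
from fastai.optimizer import OptimWrapper

# Sketch: build an opt_func that wraps a plain PyTorch optimizer class
# for use with a fastai Learner.
opt_func = partial(OptimWrapper, opt=torch.optim.AdamW)
# learn = Learner(dls, model, opt_func=opt_func)
```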
- Updated to support adding transforms to multiple dataloaders (#3268), thanks to @marii-moe
  - This fixes an issue in 2.2.7 which resulted in incorrect validation metrics when using Normalization
- 2.2.5 was not released correctly - it was actually 2.2.3
- Enhancement: Let TextDataLoaders take in a custom `tok_text_col` (#3208), thanks to @muellerzr
- Changed dataloaders arguments to have consistent overrides (#3178), thanks to @marii-moe
- Better support for iterable datasets (#3173), thanks to @jcaw
- `BrokenProcessPool` in `download_images()` on Windows (#3196)
- error on `predict()` or using `interp` with resnet and MixUp (#3180)
- Fix 'cat' attribute with pandas dataframe: `AttributeError: Can only use .cat accessor with a 'category' dtype` (#3165), thanks to @dreamflasher
- `cont_cat_split` does not support pandas types (#3156)
- `DataBlock.dataloaders` does not support the advertised "shuffle" argument (#3133)
- Calculate correct `nf` in `create_head` based on `concat_pool` (#3115), thanks to @muellerzr
- wandb integration failing with latest wandb library (#3066)
- `Learner.load` and `LRFinder` not functioning properly for the optimizer states (#2892)
- tensorboard and wandb cannot access `smooth_loss` (#3131)
- Promote `NativeMixedPrecision` to default `MixedPrecision` (and similar for `Learner.to_fp16`); old `MixedPrecision` is now called `NonNativeMixedPrecision` (#3127)
  - Use the new `GradientClip` callback instead of the `clip` parameter to use gradient clipping (see the sketch below)
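A minimal sketch of the new-style setup, assuming `GradientClip` takes a `max_norm` argument (value illustrative):

```python
from fastai.callback.training import GradientClip

# Sketch: the clipping callback replaces the old `clip=` parameter.
cb = GradientClip(max_norm=0.1)
# Combine with the (now native) mixed-precision path:
# learn = learn.to_fp16()
# learn.fit_one_cycle(1, cbs=cb)
```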
- Adding a `Callback` which has the same name as an attribute no longer raises an exception (#3109)
- RNN training now requires `RNNCallback`, but does not require `RNNRegularizer`; `out` and `raw_out` have moved to `RNNRegularizer` (#3108)
  - Call `rnn_cbs` to get all callbacks needed for RNN training, optionally with regularization (see the sketch below)
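A minimal sketch, assuming `rnn_cbs` lives in `fastai.callback.rnn` and takes `alpha`/`beta` regularization weights:

```python
from fastai.callback.rnn import rnn_cbs

# Sketch: collect the callbacks RNN training needs, with activation and
# temporal regularization enabled via nonzero alpha/beta (values illustrative).
cbs = rnn_cbs(alpha=2., beta=1.)
# learn = Learner(dls, model, cbs=cbs)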
- replace callback `run_after` with `order`; do not run `after` cbs on exception (#3101)
- Add `GradientClip` callback (#3107)
- Make `Flatten` cast to `TensorBase` to simplify type compatibility (#3106)
- make flattened metrics compatible with all tensor subclasses (#3105)
- New class method `TensorBase.register_func` to register types for `__torch_function__` (#3097)
- new `dynamic` flag for controlling dynamic loss scaling in `NativeMixedPrecision` (#3096)
- remove need to call `to_native_fp32` before `predict`; set `skipped` in `NativeMixedPrecision` after NaN from dynamic loss scaling (#3095)
- make native fp16 extensible with callbacks (#3094)
- Calculate correct `nf` in `create_head` based on `concat_pool` (#3115), thanks to @muellerzr
- Small DICOM segmentation dataset (#3034), thanks to @moritzschwyzer
- `NoneType` object has no attribute `append` in fastbook chapter 6 BIWI example (#3091)
- Refactor MixUp and CutMix into MixHandler (#3037), thanks to @muellerzr
  - Refactors into a general `MixHandler` class, with `MixUp` and `CutMix` simply implementing a `before_batch` to perform the data augmentation. See `fastai.callback.mixup` (and the sketch below).
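A minimal sketch of the unchanged user-facing usage after the refactor (alpha values illustrative):

```python
from fastai.callback.mixup import MixUp, CutMix

# Sketch: both augmentations are thin MixHandler subclasses that implement
# `before_batch`, so they are still passed as ordinary callbacks.
# learn = Learner(dls, model, cbs=MixUp(0.4))   # or cbs=CutMix(1.)
```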
- Gradient Accumulation + Mixed Precision shows artificially high training loss (#3048)
- Update for fastcore `negate_func` -> `not_`
- LR too high for gradient accumulation (#3040), thanks to @marii-moe
- Torchscript transforms incompatibility with nn.Sequential (#2920)
- Pytorch 1.7 subclassing support (#2769)
- unsupported operand type(s) for +=: 'TensorCategory' and 'TensorText' when using AWD_LSTM for text classification (#3027)
- UserWarning when using SaveModelCallback() on after_epoch (#3025)
- Segmentation error: no implementation found for 'torch.nn.functional.cross_entropy' on types that implement torch_function (#3022)
- `TextDataLoaders.from_df()` returns `TypeError: 'float' object is not iterable` (#2978)
- Internal assert error in `awd_qrnn` (#2967)
- Option to preserve filenames in `download_images` (#2983), thanks to @mess-lelouch (see the sketch after this list)
- Deprecate `config` in `create_cnn` and instead pass kwargs directly (#2966), thanks to @borisdayma
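A minimal sketch, assuming the #2983 option is exposed as a `preserve_filename` flag on `download_images`:

```python
from fastai.vision.utils import download_images

# Sketch: keep original file names rather than renaming downloads
# sequentially (url_list is a placeholder for your list of URLs).
# download_images('images', urls=url_list, preserve_filename=True)
```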
- Progress and Recorder callbacks serialize their data, resulting in large Learner export file sizes (#2981)
- `TextDataLoaders.from_df()` returns `TypeError: 'float' object is not iterable` (#2978)
- "only one element tensors can be converted to Python scalars" exception in Siamese Tutorial (#2973)
- `Learn.load` and `LRFinder` not functioning properly for the optimizer states (#2892)
- remove `log_args` (#2954)
- Improve performance of `RandomSplitter` (h/t @muellerzr) (#2957)
- Exporting TabularLearner via `learn.export()` leads to huge file size (#2945)
- `TensorPoint` object has no attribute `img_size` (#2950)
- moved `has_children` from `nn.Module` to free function (#2931)
- Support persistent workers (#2768)
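A minimal sketch, assuming fastai's `DataLoader` forwards the PyTorch `persistent_workers` flag:

```python
from fastai.data.load import DataLoader

# Sketch: keep worker processes alive between epochs instead of
# respawning them (dataset is a placeholder for your dataset object).
# dl = DataLoader(dataset, bs=64, num_workers=4, persistent_workers=True)
```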
- `unet_learner` segmentation fails (#2939)
- In "Transfer learning in text" tutorial, `dls.show_batch()` shows wrong outputs (#2910)
- `Learn.load` and `LRFinder` not functioning properly for the optimizer states (#2892)
- Documentation for `Show_Images` broken (#2876)
- URL link for documentation for `torch_core` library from the `doc()` method gives incorrect url (#2872)
- Work around broken PyTorch subclassing of some `new_*` methods (#2769)
- PyTorch 1.7 compatibility (#2917)
PyTorch 1.7 includes support for tensor subclassing, so we have replaced much of our custom subclassing code with PyTorch's. We have seen a few bugs in PyTorch's subclassing feature, however, so please file an issue if you see any code failing now which was working before.
There is one breaking change in this version of fastai: custom metadata is now stored directly in tensors as standard Python attributes, instead of in the special `_meta` attribute. Only advanced customization of fastai's tensor subclasses would have used this functionality, so if you do not know what this means, you did not use it.
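For example, a minimal sketch of the new attribute-based metadata (the attribute name is illustrative):

```python
import torch
from fastai.torch_core import TensorBase

# Sketch: metadata now lives in ordinary Python attributes on the tensor
# subclass, rather than in the old special `_meta` attribute.
t = TensorBase(torch.randn(3, 224, 224))
t.img_size = 224   # plain attribute access, no _meta involved
print(t.img_size)
```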
This version was released after 2.1.0, and adds fastcore 1.3 compatibility, whilst maintaining PyTorch 1.6 compatibility. It has no new features or bug fixes.
The next version of fastai will be 2.1. It will require PyTorch 1.7, which has significant foundational changes. It should not require any code changes except for people doing sophisticated tensor subclassing work, but nonetheless we recommend testing carefully. Therefore, we recommend pinning your fastai version to <2.1 if you are not able to fully test your fastai code when the new version comes out.
- pin pytorch (`<1.7`) and torchvision (`<0.8`) requirements (#2915)
- Add version pin for fastcore
- Remove version pin for sentencepiece
- added support for TensorBoard projector word embeddings (#2853), thanks to @floleuerer
- Added ability to have variable length draw (#2845), thanks to @marii-moe
- add pip upgrade cell to all notebooks, to ensure colab has current fastai version (#2843)
- loss functions were moved to `loss.py` (#2843)
- new callback event: `after_create` (#2842)
  - This event runs after a `Learner` is constructed. It's useful for initial setup which isn't needed for every `fit`, but just once for each `Learner` (such as setting initial defaults). A sketch follows this item.
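A minimal sketch of a callback using the new event; the class and attribute names are hypothetical:

```python
from fastai.callback.core import Callback

class InitDefaults(Callback):
    "Hypothetical callback illustrating the new `after_create` event."
    def after_create(self):
        # Runs once, right after the Learner is constructed: a good
        # place for one-time setup rather than per-fit work.
        self.learn.my_default = 42
```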
- Modified XResNet to support Conv1d / Conv3d (#2744), thanks to @floleuerer
  - Supports different input dimensions, kernel sizes and strides (added parameters `ndim`, `ks`, `stride`). Tested with fastai_audio and fastai time series with promising results. A sketch follows this item.
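A minimal sketch, assuming the new parameters are forwarded through the `xresnet` constructors (values illustrative):

```python
from fastai.vision.models.xresnet import xresnet18

# Sketch: a 1D XResNet, e.g. for audio or time-series inputs with a
# single channel and 10 output classes.
model = xresnet18(ndim=1, ks=9, c_in=1, n_out=10)
```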
- Undo breaking `num_workers` fix (#2804)
  - Some users found the recent addition of `num_workers` to inference functions was causing problems, particularly on Windows. This PR reverts that change until we find a more reliable way to handle `num_workers` for inference.
- `learn.tta()` fails on a learner imported with `load_learner()` (#2764)
- `learn.summary()` crashes out on 2nd transfer learning (#2735)
- Undo breaking `num_workers` fix (#2804)
- Fix `cont_cat_split` for multi-label classification (#2759)
- fastbook error: "index 3 is out of bounds for dimension 0 with size 3" (#2792)
- update for fastcore 1.0.5 (#2775)
- "Remove pandas min version requirement" (#2765)
- Modify XResNet to support Conv1d / Conv3d (#2744)
  - Also supports different input dimensions, kernel sizes and strides (added parameters `ndim`, `ks`, `stride`).
- Add support for multidimensional arrays for RNNDropout (#2737)
- MCDropoutCallback to enable Monte Carlo Dropout in fastai (#2733)
  - A new callback to enable Monte Carlo Dropout in fastai in the `get_preds` method. Monte Carlo Dropout is simply enabling dropout during inference. Calling `get_preds` multiple times and stacking the results yields a distribution of predictions that you can use to evaluate your prediction uncertainty. A sketch follows this item.
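A minimal sketch of the stacking recipe described above, assuming `MCDropoutCallback` lives in `fastai.callback.preds`:

```python
import torch
from fastai.callback.preds import MCDropoutCallback

# Sketch: run inference several times with dropout left enabled, then
# stack the runs into a distribution of predictions.
# dist_preds = [learn.get_preds(cbs=[MCDropoutCallback()])[0] for _ in range(10)]
# preds = torch.stack(dist_preds)
# preds.mean(0), preds.std(0)   # point estimate and uncertainty
```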
- adjustable workers in `get_preds` (#2721)
- Initial release of v2