Releases: pytorch/ignite
State parameter scheduler, logger improvements and bug fixes
PyTorch-Ignite 0.4.7 - Release Notes
New Features
- Enabled `LRFinder` to run multiple epochs (#2200)
- `save_handler` automatically detects `DiskSaver` when a path is passed (#2198)
- Improved `Checkpoint` to use `score_name` as the metric's key (#2146)
- Added `State` parameter scheduler (#2090)
- Added state attributes for loggers (tqdm, Polyaxon, MLFlow, WandB, Neptune, Tensorboard, Visdom, ClearML) (#2162, #2161, #2160, #2154, #2153, #2152, #2151, #2148, #2140, #2137)
- Added gradient accumulation to supervised training step functions (#2223); see the sketch after this list
- Automatic Jupyter environment detection (#2188)
- Added an additional argument to `auto_optim` to allow gradient accumulation (#2169)
- Added micro averaging for BLEU score (#2179)
- Expanded BLEU and ROUGE to be calculated on batch input (#2259, #2180)
- Moved `BasicTimeProfiler`, `HandlersTimeProfiler`, `ParamScheduler`, `LRFinder` to core (#2136, #2135, #2132)
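For illustration, a minimal sketch of gradient accumulation, assuming the `gradient_accumulation_steps` keyword from #2223 is exposed through `create_supervised_trainer`:

```python
# Sketch: accumulate gradients over 4 batches before each optimizer step.
# Assumes `gradient_accumulation_steps` is exposed by create_supervised_trainer (#2223).
import torch
import torch.nn as nn
from ignite.engine import create_supervised_trainer

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

trainer = create_supervised_trainer(
    model, optimizer, criterion, gradient_accumulation_steps=4
)

data = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(16)]
trainer.run(data, max_epochs=1)
```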
Bug fixes
- Fixed docstring examples with huge bottom padding (#2225)
 - Fixed NCCL warning caused by barrier if using idist (#2257, #2254)
 - Fixed hostname list expansion (#2208, #2204)
 - Fixed tcp error with PyTorch v1.9.1 (#2211)
 
Housekeeping (docs, CI, examples, tests, etc)
- #2243, #2242, #2228, #2164, #2222, #2221, #2220, #2219, #2218, #2217, #2216, #2173, #2164, #2207, #2236, #2190, #2256, #2196, #2177, #2166, #2155, #2149, #2234, #2206, #2186, #2176, #2246, #2231, #2182, #2192, #2165, #2227, #2253, #2247, #2250, #2226, #2201, #2184, #2142, #2232, #2238, #2174
 
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@Chandan-h-509, @Ishan-Kumar2, @KickItLikeShika, @Priyansi, @fco-dv, @gucifer, @kennethleungty, @logankilpatrick, @mfoglio, @sandylaker, @sdesrozis, @theory-in-progress, @toxa23, @trsvchn, @vfdev-5, @ydcjeff
FID/IS metrics for GANs, EMA handler and bug fixes
PyTorch-Ignite 0.4.6 - Release Notes
New Features
- Added `start_lr` option to `FastaiLRFinder` (#2111)
- Added model EMA handler (#2098, #2102); see the sketch after this list
- Improved SLURM support: added hostlist expansion without using `scontrol` (#2092)
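A minimal sketch of the EMA handler, assuming the `EMAHandler(model, momentum=...)` constructor, `attach(...)` signature and `ema_model` attribute introduced by #2098:

```python
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import EMAHandler

model = nn.Linear(4, 2)
trainer = Engine(lambda engine, batch: None)  # placeholder training step

# Update the EMA weights after every iteration; the current momentum is
# assumed to be exposed as engine.state.ema_momentum via the `name` argument.
ema_handler = EMAHandler(model, momentum=0.0002)
ema_handler.attach(trainer, name="ema_momentum", event=Events.ITERATION_COMPLETED)

# ema_handler.ema_model holds the exponentially averaged copy of `model`,
# which can be passed to an evaluator in place of `model`.
```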
Metrics
- Added Inception Score (#2053)
- Added FID metric (#2049, #2061, #2085, #2094, #2103); see the sketch after this list
  - Blog post "GAN Evaluation: the Frechet Inception Distance and Inception Score metrics" (https://pytorch-ignite.ai/posts/gan-evaluation-with-fid-and-is/)
- Improved DDP support for metrics (#2096, #2083)
- Improved `MetricsLambda` to work with the `reset`/`update`/`compute` API (#2091)
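A minimal sketch of attaching the two GAN metrics to an evaluation engine; the default feature extractor is assumed to be a torchvision InceptionV3 (which also requires `scipy`), so (N, 3, 299, 299) image batches are assumed:

```python
from ignite.engine import Engine
from ignite.metrics import FID, InceptionScore

def eval_step(engine, batch):
    fake, real = batch  # generated and real image batches
    return fake, real

evaluator = Engine(eval_step)

# FID consumes (y_pred, y) pairs; InceptionScore only needs generated images,
# hence the output_transform picking the first element.
FID(device="cpu").attach(evaluator, "fid")
InceptionScore(device="cpu", output_transform=lambda out: out[0]).attach(evaluator, "is")

# e.g. evaluator.run([(fake_batch, real_batch), ...]) and then read
# evaluator.state.metrics["fid"] / evaluator.state.metrics["is"]
```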
Bug fixes
- Modified `auto_dataloader` to not wrap a user-provided `DistributedSampler` (#2119)
- Raise an error in `DistributedProxySampler` when the sampler is already a `DistributedSampler` (#2120)
- Improved LRFinder error message (#2127)
- Added `py.typed` for type checkers (#2095)
Housekeeping
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@01-vyom, @KickItLikeShika, @gucifer, @sandylaker, @schuhschuh, @sdesrozis, @trsvchn, @vfdev-5, @ydcjeff
New metrics, extended DDP support and bug fixes
PyTorch-Ignite 0.4.5 - Release Notes
New Features
Metrics
- Added BLEU metric (#1834)
 - Added ROUGE metric (#1772)
- Added MultiLabelConfusionMatrix metric (#1613); see the sketch after this list
 - Added Cohen Kappa metric (#1690)
- Extended `sync_all_reduce` API (#1823)
- Made `EpochMetric` more generic by extending the list of valid types (#1748)
- Fixed issue with metric's output device (#2062)
 - Added support for list of tensors as metric input (#2055)
 - Implemented Jaccard Index shortcut for metrics (#1682)
- Updated Loss metric to use `required_output_keys` (#2027)
- Added classification report metric (#1887)
 - Added output detach for Canberra metric (#1820)
 - Improved ROC AUC (#1762)
 - Improved AveragePrecision metric and tests (#1756)
- Uniform handling of metric types for all loggers (#2021)
 - More DDP support for multiple contrib metrics (#1891, #1869, #1865, #1850, #1830, #1829, #1806, #1805, #1803)
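A minimal sketch of the `reset`/`update`/`compute` API with the new MultiLabelConfusionMatrix; the binarized (N, num_classes) input shape and per-label 2x2 output are assumptions based on #1613:

```python
import torch
from ignite.metrics import MultiLabelConfusionMatrix

cm = MultiLabelConfusionMatrix(num_classes=3)

y_pred = torch.randint(0, 2, (8, 3))  # binarized predictions, one column per label
y_true = torch.randint(0, 2, (8, 3))

cm.update((y_pred, y_true))
print(cm.compute())  # assumed: one 2x2 confusion matrix per label
```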
 
Engine
- Added native `torch.cuda.amp` and `apex` automatic mixed precision for `create_supervised_trainer` and `create_supervised_evaluator` (#1714, #1589); see the sketch after this list
- Updated `state.batch`/`state.output` lifespan in Engine (#1919)
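A minimal sketch, assuming the `amp_mode` and `scaler` arguments from #1714 (a CUDA device is required for native amp):

```python
import torch
import torch.nn as nn
from ignite.engine import create_supervised_trainer, create_supervised_evaluator

model = nn.Linear(10, 2).to("cuda")
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

trainer = create_supervised_trainer(
    model, optimizer, criterion, device="cuda",
    amp_mode="amp",  # or "apex" when NVIDIA/Apex is installed
    scaler=True,     # build a torch.cuda.amp.GradScaler internally
)
evaluator = create_supervised_evaluator(model, device="cuda", amp_mode="amp")
```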
Distributed module
- Handled `IterableDataset` with `auto_dataloader` (#2028)
- Updated Loss metric to use `required_output_keys` (#2027)
- Enabled GPU support for gloo backend (#2016)
- Added `safe_mode` for `idist` broadcast (#1839); see the sketch after this list
- Improved `idist` to support different `init_method`s (#1767)
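A minimal sketch of `safe_mode` (#1839), which is assumed to let non-source ranks pass `None` as the payload:

```python
import ignite.distributed as idist

def training(local_rank):
    # Only rank 0 provides the payload; other ranks receive it.
    payload = "config_v1" if idist.get_rank() == 0 else None
    payload = idist.broadcast(payload, src=0, safe_mode=True)
    print(idist.get_rank(), payload)

# backend=None degrades to a single-process run outside any launcher.
with idist.Parallel(backend=None) as parallel:
    parallel.run(training)
```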
Other improvements
- Added LR finder improvements, moved to core (#2045, #1998, #1996, #1987, #1981, #1961, #1951, #1930)
 - Moved param handler to core (#1988)
- Added an option to store `EpochOutputStore` data on `engine.state`, moved to core (#1982, #1974); see the sketch after this list
- Set seed for XLA in `ignite.utils.manual_seed` (#1970)
- Fixed case for Precision/Recall in `multi_label`, not averaged configuration for DDP (#1646)
- Updated `PolyaxonLogger` to handle v1 and v0 (#1625)
- Added arguments `*args`, `**kwargs` to `BaseLogger.attach` method (#2034)
- Enabled metric ordering on `ProgressBar` (#1937)
- Updated wandb logger (#1896)
- Fixed type hint for `ProgressBar` (#2079)
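A minimal sketch of storing epoch outputs on `engine.state`, assuming the `name` argument to `EpochOutputStore.attach` from #1982:

```python
import torch
from ignite.engine import Engine
from ignite.handlers import EpochOutputStore

evaluator = Engine(lambda engine, batch: batch * 2)  # toy step, output = 2 * batch

eos = EpochOutputStore()
eos.attach(evaluator, name="eval_outputs")  # collect outputs on engine.state

evaluator.run([torch.tensor(i) for i in range(3)])
print(evaluator.state.eval_outputs)  # assumed: [tensor(0), tensor(2), tensor(4)]
```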
Bug fixes
- BC-breaking: Improved loggers to keep configuration (#1945)
 - Fixed warnings in CI (#2023)
 - Fixed Precision for all zero predictions (#2017)
 - Renamed the default logger (#2006)
 - Fixed Accumulation metric with Nvidia/Apex (#1978)
 - Updated code to raise an error if SLURM is used with torch dist launcher (#1976)
- Updated `nltk-smooth2` for BLEU metric (#1911)
- Added full read permissions to saved file (#1876) (#1880)
- Fixed a bug with horovod `_do_manual_all_reduce` (#1848)
- Fixed small bug in "Finetuning EfficientNet-B0 on CIFAR100" tutorial (#2073)
- Fixed f-string in `mnist_save_resume_engine.py` example (#2077)
- Fixed an issue where RNG states were accidentally on CUDA for `DeterministicEngine` (#2081)
Housekeeping
A lot of PRs
- Test improvements (#2061, #2057, #2047, #1962, #1957, #1946, #1943, #1928, #1927, #1915, #1914, #1908, #1906, #1905, #1903, #1902, #1899, #1899, #1882, #1870, #1866, #1860, #1846, #1832, #1828, #1821, #1816, #1815, #1814, #1812, #1811, #1809, #1808, #1807, #1804, #1802, #1801, #1799, #1798, #1797, #1796, #1795, #1793, #1791, #1785, #1784, #1783, #1781, #1776, #1774, #1769, #1768, #1760, #1755, #1746, #1741, #1718, #1717, #1713, #1631)
 - Documentation improvements and updates (#2058, #2024, #2005, #2003, #2001, #1993, #1990, #1933, #1893, #1849, #1780, #1770, #1727, #1726, #1722, #1686, #1685, #1672, #1671, #1661)
 - Example improvements (#1924, #1918, #1890, #1827, #1771, #1669, #1658, #1656, #1652, #1642, #1633, #1632)
 - CI updates (#2075, #2070, #2069, #2068, #2067, #2064, #2044, #2039, #2037, #2023, #1985, #1979, #1940, #1907, #1892, #1888, #1878, #1877, #1873, #1867, #1861, #1847, #1841, #1838, #1837, #1835, #1831, #1818, #1773, #1764, #1761, #1759, #1752, #1745, #1743, #1742, #1739, #1738, #1736, #1724, #1706, #1705, #1667, #1664, #1647)
 - Code style improvements (#2050, #2014, #1817, #1749, #1747, #1740, #1734, #1732, #1731, #1707, #1703)
 - Added docker image test script (#1733)
 
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@01-vyom, @Devanshu24, @Juddd, @KickItLikeShika, @Moh-Yakoub, @Muktan, @OBITORASU, @Priyansi, @afzal442, @ahmedo42, @aksg87, @aniezurawski, @cozek, @devrimcavusoglu, @fco-dv, @gucifer, @log-layer, @mouradmourafiq, @radekosmulski, @sahilg06, @sdesrozis, @sparkingdark, @thomasjpfan, @touqir14, @trsvchn, @vfdev-5, @ydcjeff
Bug fixes and docs improvements
PyTorch-Ignite 0.4.4 - Release Notes
Bug fixes:
- BC-breaking: Moved `detach` outside of loss function computation (#1675, #1692)
- Added `eps` to avoid NaNs in Canberra error (#1699)
 - Removed size limitation for str on collective ops (#1702)
 - Fixed imports in docker images and now install Pillow-SIMD (#1638, #1639, #1628, #1711)
 
Doc improvements
Other improvements
- Fixed artifacts urls for pypi (#1629)
 
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@Devanshu24, @KickItLikeShika, @Moh-Yakoub, @OBITORASU, @ahmedo42, @fco-dv, @sparkingdark, @touqir14, @trsvchn, @vfdev-5, @y0ast, @ydcjeff
New features, better docs, dropped python 3.5
PyTorch-Ignite 0.4.3 - Release Notes
🎉 Since September we have a new logo (#1324) 🎉
Core
Metrics
- [BC-breaking] Made Metrics accumulate values on device specified by user (#1238)
- Fixed BC if a custom metric returns a dict (#1478)
 - Added PSNR metric (#1570, #1595)
 
Handlers
- `Checkpoint` can save model with same filename (#1423)
- Added `greater_or_equal` option to `Checkpoint` handler (#1597); see the sketch after this list
- Updated handlers to use `setup_logger` (#1617)
- Added `TimeLimit` handler (#1611)
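A minimal sketch of the new option; `greater_or_equal=True` is assumed to keep a checkpoint whose score ties the current best:

```python
import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import Checkpoint, DiskSaver

model = nn.Linear(4, 2)
evaluator = Engine(lambda engine, batch: None)

checkpointer = Checkpoint(
    {"model": model},
    DiskSaver("/tmp/ckpts", create_dir=True, require_empty=False),
    n_saved=2,
    score_name="accuracy",
    score_function=lambda engine: engine.state.metrics.get("accuracy", 0.0),
    greater_or_equal=True,  # ties with the current best score are still saved
)
evaluator.add_event_handler(Events.COMPLETED, checkpointer)
```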
 
Distributed helper module
- Distributed CPU tests on Windows (#1429)
 - Added kwargs to idist.auto_model (#1552)
 - Improved horovod initializer (#1559)
 
Others
- Dropped Python 3.5 support (#1500)
- Added `torch.cuda.manual_seed_all` to `ignite.utils.manual_seed` (#1444)
- Fixed `to_onehot` function to be torch scriptable (#1592)
- Introduced standard stream for logger setup helper (#1601)
 
Docker images
- Removed Entrypoint from Dockerfile and images (#1475)
 
Examples
- Added [Cifar10 QAT example](https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10_qat) (#1556)
 
Contrib
Metrics
- Improved Canberra metric for DDP (#1314)
- Improved ManhattanDistance metric for DDP (#1320)
- Improved R2Score metric for DDP (#1318)
 
Handlers
- Added new time profiler `HandlersTimeProfiler` which allows per-handler time profiling (#1398, #1474); see the sketch after this list
- Fixed `attach_opt_params_handler` to return `RemovableEventHandle` (#1502)
- Renamed `TrainsLogger` to `ClearMLLogger` keeping BC (#1557, #1560)
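A minimal sketch of per-handler profiling; in 0.4.3 the profiler is assumed importable from `ignite.contrib.handlers` (it moved to core in 0.4.7), and the `print_results(get_results())` pattern is assumed from the profiler docs:

```python
from ignite.contrib.handlers import HandlersTimeProfiler
from ignite.engine import Engine, Events

trainer = Engine(lambda engine, batch: None)

profiler = HandlersTimeProfiler()
profiler.attach(trainer)  # attach before registering handlers to profile

@trainer.on(Events.EPOCH_COMPLETED)
def dummy_handler():
    pass

trainer.run(range(8), max_epochs=2)
profiler.print_results(profiler.get_results())  # per-handler timing table
```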
Documentation improvements
- #1330, #1337, #1338, #1353, #1360, #1374, #1373, #1394, #1393, #1401, #1435, #1460, #1461, #1465, #1536, #1542 ...
- Updated Sphinx to v3.2.1 (#1356, #1372)
 
Codebase is MyPy checked
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@1nF0rmed, @Amab, @BanzaiTokyo, @Devanshu24, @Nic-Ma, @RaviTezu, @SamuelMarks, @abdulelahsm, @afzal442, @ahmedo42, @dgarth, @fco-dv, @gruebel, @harsh8398, @ibotdotout, @isabela-pf, @jkhenning, @josselineperdomo, @jrieke, @n2cholas, @ramesht007, @rzats, @sdesrozis, @shngt, @sroy8091, @theodumont, @thescripted, @timgates42, @trsvchn, @uribgp, @vcarpani, @vfdev-5, @ydcjeff, @zhxxn
Improved distributed support (horovod framework, epoch-wise metrics, etc), new metrics/handlers, bug fixes and pre-built docker images.
PyTorch-Ignite 0.4.2 - Release Notes
Core
New Features and bug fixes
- Added SSIM metric (#1217)
- Added prebuilt Docker images (#1218)
- Added distributed support for `EpochMetric` and related metrics (#1229)
- Added `required_output_keys` public attribute (#1291)
- Pre-built Docker images for computer vision and NLP tasks, powered with Nvidia/Apex, Horovod, MS DeepSpeed (#1304, #1248, #1218)
Handlers and utils
- Allow passing keyword arguments to the save function on `Checkpoint` (#1245)
Distributed helper module
- Added support of Horovod (#1195); see the sketch after this list
- Added `idist.broadcast` (#1237)
- Added `sync_bn` option to `idist.auto_model` (#1265)
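A minimal sketch of launching a function through `idist` with the new backend; Horovod must be installed, and the new `sync_bn` flag from #1265 would be passed to `auto_model` in the same way:

```python
import torch.nn as nn
import ignite.distributed as idist

def training(local_rank, config):
    # auto_model adapts the model to the active distributed backend
    model = idist.auto_model(nn.Linear(8, 2))
    print(f"rank {idist.get_rank()}/{idist.get_world_size()}:", type(model).__name__)

with idist.Parallel(backend="horovod", nproc_per_node=2) as parallel:
    parallel.run(training, {})
```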
Contrib
New Features and bug fixes
- Added `EpochOutputStore` handler (#1226)
- Improved displayed tag for tqdm progress bar (#1279)
- Fixed bug with `ParamGroupScheduler` with schedulers based on different optimizers (#1274)
And a lot of housekeeping: pre-September Hacktoberfest contributions
- Added initial Mypy check at CI step (#1296)
 - Fixed typo in docs (concepts) (#1295)
 - Fixed link to pytorch documents (#1294)
 - Removed prints from tests (#1292)
 - Downgraded tqdm version to stabilize the CI (#1293)
 
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@M3L6H, @Tawishi, @WrRan, @ZhiliangWu, @benji011, @fco-dv, @kamahori, @kenjihiraoka, @kilsenp, @n2cholas, @nzare, @sdesrozis, @theodumont, @vfdev-5, @ydcjeff
Bugfixes and updates
PyTorch-Ignite 0.4.1 - Release Notes
Core
New Features and bug fixes
- Improved docs for custom events (#1179)
 
Handlers and utils
- Added custom filename pattern for saving checkpoints (#1127)
 
Distributed helper module
- Improved namings in `_XlaDistModel` (#1173)
- Minor optimization for `idist.get_*` methods (#1196)
- Fixed distributed proxy sampler runtime error (#1192)
- Fixed bug using `idist` with "nccl" backend when torch cuda is not available (#1166)
- Fixed issue with logging XLA tensors (#1207)
 
Contrib
New Features and bug fixes
- Fixed warning "TrainsLogger output_handler can not log metrics value" (#1170)
 - Improved usage of contrib common methods with other save handlers (#1171)
 
Examples
- Improved Pascal Voc example (#1193)
 
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@Joel-hanson, @WrRan, @jspisak, @marload, @ryanwongsa, @sdesrozis, @vfdev-5
Simplified Engine. Enhanced support for distributed configuration on GPUs, XLA devices
PyTorch-Ignite 0.4.0 - Release Notes
Core
BC breaking changes
- Simplified engine - BC breaking change (#940, #939, #938)
  - no more internal patching of torch DataLoader.
  - `seed` argument of `Engine.run` is deprecated.
  - previous behaviour can be achieved with `DeterministicEngine`, introduced in #939; see the sketch below.
- Make all `Events` be `CallableEventsWithFilter` (#788).
- Make ignite compatible only with pytorch >= 1.3 (#1016, #1150).
  - ignite is tested on the latest and nightly versions of pytorch.
  - exact compatibility with previous versions can be checked here.
- Remove deprecated arguments from `BaseLogger` (#1051).
- Deprecated `CustomPeriodicEvent` (#984).
- `RunningAverage` now computes output quantity average instead of a sum in DDP (#991).
- `Checkpoint` now stores files with `.pt` extension instead of `.pth` (#873).
- Arguments `archived` of `Checkpoint` and `ModelCheckpoint` are deprecated (#873).
- Now `create_supervised_trainer` and `create_supervised_evaluator` do not move the model to device (#910).
See also the migration note for details on how to update your code.
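A minimal sketch of the replacement for `Engine.run(..., seed=...)`: seed manually, then use `DeterministicEngine` for reproducible runs:

```python
import torch
from ignite.engine import DeterministicEngine
from ignite.utils import manual_seed

def train_step(engine, batch):
    return batch

manual_seed(12)  # replaces the deprecated seed argument
trainer = DeterministicEngine(train_step)  # reproducible/resumable dataflow
trainer.run([torch.tensor(i) for i in range(4)], max_epochs=2)
```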
New Features and bug fixes
Ignite Distributed [Experimental]
- Introduction of `ignite.distributed as idist` module (#1045)
  - common interface for distributed applications and helper methods, e.g. `get_world_size()`, `get_rank()`, ...
  - supports native torch distributed configuration, XLA devices.
  - metrics computation works in all supported distributed configurations: GPUs and TPUs.
- `Parallel` utility and `auto` module (#1014).
Engine & Events
- Add flexibility on event handlers by packing triggering events (#868); see the sketch after this list.
- `Engine` argument is now optional in event handlers (#889, #919).
- We initialize `engine.state` before calling `engine.run` (#1028).
- `Engine` can run on a dataloader based on `IterableDataset` and without specifying `epoch_length` (#1077).
- Added user keys into Engine's state dict (#914).
- Bug fixes in `Engine` class (#1048, #994).
- Now `epoch_length` argument is optional (#985)
  - suitable to work with finite-unknown-length iterators.
- Added times in `engine.state` (#958).
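A minimal sketch of packed trigger events (#868) together with the now-optional engine argument (#889):

```python
from ignite.engine import Engine, Events

trainer = Engine(lambda engine, batch: batch)

# One handler bound to a union of events:
@trainer.on(Events.STARTED | Events.EPOCH_COMPLETED(every=2))
def log_progress():  # the engine argument can now be omitted
    print("epoch:", trainer.state.epoch)

trainer.run(range(4), max_epochs=4)
```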
Metrics
- Add `Frequency` metric for ops/s calculations (#760, #783, #976).
- Metrics computation can be customized with introduced `MetricUsage` (#979, #1054); see the sketch after this list
  - batch-wise/epoch-wise or custom-programmed metric update and compute methods.
- `Metric` can be detached (#827).
- Fixed bug in `RunningAverage` when output is torch tensor (#943).
- Improved computation performance of `EpochMetric` (#967).
- Fixed average recall value of `ConfusionMatrix` (#846).
- Now metrics can be serialized using `dill` (#930).
- Added support for nested metric values (#968).
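A minimal sketch of customizing when a metric updates and computes; `EpochWise`/`BatchWise` are assumed to be the built-in `MetricUsage` implementations importable from `ignite.metrics.metric`:

```python
import torch
from ignite.engine import Engine
from ignite.metrics import Accuracy
from ignite.metrics.metric import BatchWise, EpochWise

evaluator = Engine(lambda engine, batch: batch)  # step returns (y_pred, y)

Accuracy().attach(evaluator, "acc_epoch", usage=EpochWise())  # default behaviour
Accuracy().attach(evaluator, "acc_batch", usage=BatchWise())  # recomputed every batch

data = [(torch.tensor([[0.1, 0.9], [0.9, 0.1]]), torch.tensor([1, 0]))]
print(evaluator.run(data).metrics)
```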
 
Handlers and utils
- Checkpoint: improved filename when score value is Integer (#758).
- Checkpoint: fixed returning the worst model of the saved models (#745).
- Checkpoint: `load_objects` can load single object checkpoints (#772); see the sketch after this list.
- Checkpoint: we now save only one checkpoint per priority (#847).
- Checkpoint: added kwargs to `Checkpoint.load_objects` (#861).
- Checkpoint: now saves `model.module.state_dict()` for DDP and DP (#1086).
- Checkpoint and related: other improvements (#937).
- Checkpoint and EarlyStopping become stateful (#1156)
- Support namedtuple for `convert_tensor` (#740).
- Added decorator `one_rank_only` (#882).
- Updated `common.py` (#904).
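A minimal sketch of single-object loading (#772); passing a raw `state_dict` as the checkpoint is assumed to be the "single object" case:

```python
import torch
import torch.nn as nn
from ignite.handlers import Checkpoint

model = nn.Linear(4, 2)
torch.save(model.state_dict(), "/tmp/model_ckpt.pt")

# A plain state_dict (not keyed by "model") is loaded directly into the object:
ckpt = torch.load("/tmp/model_ckpt.pt")
Checkpoint.load_objects(to_load={"model": model}, checkpoint=ckpt)
```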
Contrib
- Added `FastaiLRFinder` (#596); see the sketch below.
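A minimal sketch, assuming the context-manager `attach(...)` usage and the `lr_suggestion()` accessor:

```python
import torch
import torch.nn as nn
from ignite.contrib.handlers import FastaiLRFinder
from ignite.engine import create_supervised_trainer

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
trainer = create_supervised_trainer(model, optimizer, nn.CrossEntropyLoss())

data = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(32)]

lr_finder = FastaiLRFinder()
to_save = {"model": model, "optimizer": optimizer}
with lr_finder.attach(trainer, to_save=to_save) as finder_trainer:
    finder_trainer.run(data)  # runs the LR sweep, then restores model/optimizer

print(lr_finder.lr_suggestion())
```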
Metrics
- Added Roc Curve and Precision/Recall Curve to the metrics (#875).
 
Parameters scheduling
- Enabled multi params group for `LRScheduler` (#1027).
- Parameters scheduling improvements (#1072, #859); see the sketch after this list.
- Parameters scheduler can work on torch optimizer and any object with attribute `param_groups` (#1163).
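A minimal sketch with one of the built-in schedulers, `PiecewiseLinear` from `ignite.contrib.handlers`:

```python
import torch
import torch.nn as nn
from ignite.contrib.handlers import PiecewiseLinear
from ignite.engine import Engine, Events

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.0)
trainer = Engine(lambda engine, batch: None)

# lr ramps 0 -> 0.1 over the first 100 iterations, then back to 0 by iteration 200.
scheduler = PiecewiseLinear(
    optimizer, "lr", milestones_values=[(0, 0.0), (100, 0.1), (200, 0.0)]
)
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)

trainer.run(range(50), max_epochs=4)
print(optimizer.param_groups[0]["lr"])
```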
Support of experiment tracking systems
- Add `NeptuneLogger` (#730, #821, #951, #954).
- Add `TrainsLogger` (#1020, #1036, #1043).
- Add `WandbLogger` (#926).
- Added `visdom_logger` to common module (#796).
- TensorboardX is no longer mandatory if pytorch>=1.2 (#858).
- Simplified `BaseLogger` attach APIs (#1006).
- Added kwargs to loggers' constructors and respective setup functions (#1015).
 
Time profiling
- Added basic time profiler to `contrib.handlers` (#729).
Bug fixes (some of PRs)
- `ProgressBar` output not in sync with epoch counts (#773).
- Fixed `ProgressBar.log_message` (#768).
- `ProgressBar` now accounts for `epoch_length` argument (#785).
- Fixed broken `ProgressBar` if data is iterator without epoch length (#995).
- Improved `setup_logger` for multiple calls (#962).
- Fixed incorrect log position (#1099).
- Added missing colon to logging message (#1101).
- Fixed order of checkpoint saving and candidate removal (#1117)
 
Examples
- Basic example of `FastaiLRFinder` on MNIST (#838).
- CycleGAN auto-mixed precision training example with NVidia/Apex or native `torch.cuda.amp` (#888).
- Added `setup_logger` to mnist examples (#953).
- Added MNIST example on TPU (#956).
- Benchmark amp on Cifar100 (#917).
- Updated ImageNet and Pascal VOC12 examples (#1125, #1138)
 
Housekeeping
- Documentation updates (#711, #727, #734, #736, #742, #743, #759, #798, #780, #808, #817, #826, #867, #877, #908, #909, #911, #928, #942, #986, #989, #1002, #1031, #1035, #1083, #1092, ...).
 - Offerings to the CI gods (#713, #761, #762, #776, #791, #801, #803, #879, #885, #890, #894, #933, #981, #982, #1010, #1026, #1046, #1084, #1093, #1113, ...).
 - Test improvements (#779, #807, #854, #891, #975, #1021, #1033, #1041, #1058, ...).
- Added `Serializable` in mixins (#1000).
- Merge of `EpochMetric` in `_BaseRegressionEpoch` (#970).
- Adding typing to ignite (#716, #751, #800, #844, #944, #1037).
 - Drop Python 2 support finalized (#806).
 - Splits engine into multiple parts (#724).
 - Add Python 3.8 to Conda builds (#781).
 - Black formatted codebase with pre-commit files (#792).
 - Activate dpl v2 for Travis CI (#804).
 - AutoPEP8 (#805).
 - Fixed device conversion method (#887).
 - Refactored deps installation (#931).
 - Return handler in helpers (#997).
 - Fixes #833 (#1001).
- Disable propagation of loggers to ancestors (#1013).
 - Consistent PEP8-compliant imports layout (#901).
 
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @itamarwilf, @joxis, @Muhamob, @Yevgnen, @amatsukawa @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @Isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards
Simplified Engine. Enhanced support for distributed configuration on GPUs, XLA devices
PyTorch-Ignite 0.4.0 RC - Release Notes
Core
BC breaking changes
- Simplified engine - BC breaking change (#940, #939, #938)
  - no more internal patching of torch DataLoader.
  - `seed` argument of `Engine.run` is deprecated.
  - previous behaviour can be achieved with `DeterministicEngine`, introduced in #939.
- Make all `Events` be `CallableEventsWithFilter` (#788).
- Make ignite compatible only with pytorch > 1.0 (#1016).
  - ignite is tested on the latest and nightly versions of pytorch.
  - exact compatibility with previous versions can be checked here.
- Remove deprecated arguments from `BaseLogger` (#1051).
- Deprecated `CustomPeriodicEvent` (#984).
- `RunningAverage` now computes output quantity average instead of a sum in DDP (#991).
- `Checkpoint` now stores files with `.pt` extension instead of `.pth` (#873).
- Arguments `archived` of `Checkpoint` and `ModelCheckpoint` are deprecated (#873).
- Now `create_supervised_trainer` and `create_supervised_evaluator` do not move the model to device (#910).
New Features and bug fixes
Ignite Distributed [Experimental]
- Introduction of `ignite.distributed as idist` module (#1045)
  - common interface for distributed applications and helper methods, e.g. `get_world_size()`, `get_rank()`, ...
  - supports native torch distributed configuration, XLA devices.
  - metrics computation works in all supported distributed configurations: GPUs and TPUs.
Engine & Events
- Add flexibility on event handlers by packing triggering events (#868).
- `Engine` argument is now optional in event handlers (#889, #919).
- We initialize `engine.state` before calling `engine.run` (#1028).
- `Engine` can run on a dataloader based on `IterableDataset` and without specifying `epoch_length` (#1077).
- Added user keys into Engine's state dict (#914).
- Bug fixes in `Engine` class (#1048, #994).
- Now `epoch_length` argument is optional (#985)
  - suitable to work with finite-unknown-length iterators.
- Added times in `engine.state` (#958).
Metrics
- Add `Frequency` metric for ops/s calculations (#760, #783, #976).
- Metrics computation can be customized with introduced `MetricUsage` (#979, #1054)
  - batch-wise/epoch-wise or custom-programmed metric update and compute methods.
- `Metric` can be detached (#827).
- Fixed bug in `RunningAverage` when output is torch tensor (#943).
- Improved computation performance of `EpochMetric` (#967).
- Fixed average recall value of `ConfusionMatrix` (#846).
- Now metrics can be serialized using `dill` (#930).
- Added support for nested metric values (#968).
 
Handlers and utils
- Checkpoint: improved filename when score value is Integer (#758).
- Checkpoint: fixed returning the worst model of the saved models (#745).
- Checkpoint: `load_objects` can load single object checkpoints (#772).
- Checkpoint: we now save only one checkpoint per priority (#847).
- Checkpoint: added kwargs to `Checkpoint.load_objects` (#861).
- Checkpoint: now saves `model.module.state_dict()` for DDP and DP (#1086).
- Checkpoint and related: other improvements (#937).
- Support namedtuple for `convert_tensor` (#740).
- Added decorator `one_rank_only` (#882).
- Updated `common.py` (#904).
Contrib
- Added `FastaiLRFinder` (#596).
Metrics
- Added Roc Curve and Precision/Recall Curve to the metrics (#875).
 
Parameters scheduling
- Enabled multi params group for `LRScheduler` (#1027).
- Parameters scheduling improvements (#1072, #859).
 
Support of experiment tracking systems
- Add `NeptuneLogger` (#730, #821, #951, #954).
- Add `TrainsLogger` (#1020, #1036, #1043).
- Add `WandbLogger` (#926).
- Added `visdom_logger` to common module (#796).
- TensorboardX is no longer mandatory if pytorch>=1.2 (#858).
- Simplified `BaseLogger` attach APIs (#1006).
- Added kwargs to loggers' constructors and respective setup functions (#1015).
 
Time profiling
- Added basic time profiler to `contrib.handlers` (#729).
Bug fixes (some of PRs)
- `ProgressBar` output not in sync with epoch counts (#773).
- Fixed `ProgressBar.log_message` (#768).
- `ProgressBar` now accounts for `epoch_length` argument (#785).
- Fixed broken `ProgressBar` if data is iterator without epoch length (#995).
- Improved `setup_logger` for multiple calls (#962).
- Fixed incorrect log position (#1099).
- Added missing colon to logging message (#1101).
 
Examples
- Basic example of `FastaiLRFinder` on MNIST (#838).
- CycleGAN auto-mixed precision training example with NVidia/Apex or native `torch.cuda.amp` (#888).
- Added `setup_logger` to mnist examples (#953).
- Added MNIST example on TPU (#956).
- Benchmark amp on Cifar100 (#917).
- `TrainsLogger` semantic segmentation example (#1095).
Housekeeping (some of PRs)
- Documentation updates (#711, #727, #734, #736, #742, #743, #759, #798, #780, #808, #817, #826, #867, #877, #908, #909, #911, #928, #942, #986, #989, #1002, #1031, #1035, #1083, #1092).
 - Offerings to the CI gods (#713, #761, #762, #776, #791, #801, #803, #879, #885, #890, #894, #933, #981, #982, #1010, #1026, #1046, #1084, #1093).
 - Test improvements (#779, #807, #854, #891, #975, #1021, #1033, #1041, #1058).
- Added `Serializable` in mixins (#1000).
- Merge of `EpochMetric` in `_BaseRegressionEpoch` (#970).
- Adding typing to ignite (#716, #751, #800, #844, #944, #1037).
 - Drop Python 2 support finalized (#806).
 - Dynamic typing (#723).
 - Splits engine into multiple parts (#724).
 - Add Python 3.8 to Conda builds (#781).
 - Black formatted codebase with pre-commit files (#792).
 - Activate dpl v2 for Travis CI (#804).
 - AutoPEP8 (#805).
 - Fixes nightly version bug (#809).
 - Fixed device conversion method (#887).
 - Refactored deps installation (#931).
 - Return handler in helpers (#997).
 - Fixes #833 (#1001).
- Disable propagation of loggers to ancestors (#1013).
 - Consistent PEP8-compliant imports layout (#901).
 
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
@Crissman, @DhDeepLIT, @GabrielePicco, @InCogNiTo124, @itamarwilf, @joxis, @Muhamob, @Yevgnen, @anmolsjoshi, @bendboaz, @bmartinn, @cajanond, @chm90, @cqql, @czotti, @erip, @fdlm, @hoangmit, @Isolet, @jakubczakon, @jkhenning, @kai-tub, @maxfrei750, @michiboo, @mkartik, @sdesrozis, @sisp, @vfdev-5, @willfrey, @xen0f0n, @y0ast, @ykumards
Bye-Bye Python 2.7, Welcome 3.8
Core
- Added State repr and input batch as engine.state.batch (#641)
- Adapted core metrics only to be used in distributed configuration (#635)
- Added fbeta metric as core metric (#653)
- Added event filtering feature (e.g. every/once/event filter logic) (#656); see the sketch after this list
- BC breaking change: Refactored ModelCheckpoint into Checkpoint + DiskSaver / ModelCheckpoint (#673)
  - Added option `n_saved=None` to store all checkpoints (#703)
- Improved accumulation metrics (#681)
- Early stopping min delta (#685)
- Dropped Python 2.7 support (#699)
- Added feature: Metric can accept a dictionary (#689)
- Added Dice Coefficient metric (#680)
- Added helper method to simplify the setup of class loggers (#712)
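A minimal sketch of the every/once/filter variants:

```python
from ignite.engine import Engine, Events

trainer = Engine(lambda engine, batch: batch)

@trainer.on(Events.ITERATION_COMPLETED(every=3))
def every_third_iteration(engine):
    print("iteration:", engine.state.iteration)

@trainer.on(Events.EPOCH_COMPLETED(once=2))
def once_at_epoch_two(engine):
    print("epoch 2 done")

# Custom filter: fire on iterations 1 and 5 only.
@trainer.on(Events.ITERATION_COMPLETED(event_filter=lambda engine, event: event in (1, 5)))
def custom_fire(engine):
    print("custom fire at iteration", engine.state.iteration)

trainer.run(range(3), max_epochs=2)
```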
Engine refactoring (BC breaking change)
Finally solved issue #62: resuming training from an epoch or iteration
- Engine refactoring + features (#640)
  - engine checkpointing
  - variable epoch length defined by `epoch_length`
  - two additional events: `GET_BATCH_STARTED` and `GET_BATCH_COMPLETED`
  - cifar10 example with save/resume in distributed conf
 
Contrib
- Improved `create_lr_scheduler_with_warmup` (#646)
- Added helper method to plot param scheduler values with matplotlib (#650)
- BC breaking change: with multiple optimizer's param groups (#690)
  - Added state_dict/load_state_dict (#690)
- BC breaking change: Let the user specify tqdm parameters for log_message (#695)
 
Examples
- Added an example of hyperparameters tuning with Ax on CIFAR10 (#652)
 - Added CIFAR10 distributed example
 
Reproducible trainings as "References"
Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:
Features:
- Distributed training with mixed precision by nvidia/apex
 - Experiments tracking with MLflow or Polyaxon
 
Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):