PyTorch Lightning - William Falcon
In this talk, William Falcon goes through the implementation details of 10 of the most useful training techniques, including DataLoaders, 16-bit precision, accumulated gradients, and four different ways of distributing model training across hundreds of GPUs. He also shows how to use these techniques as they come built into PyTorch Lightning, a Keras-like framework for ML researchers.
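One of the techniques the talk covers, gradient accumulation, can be sketched in plain PyTorch. This is a minimal illustration, not the talk's exact code; the model, data, and the name `accumulation_steps` are stand-ins chosen here (Lightning exposes the same idea through the Trainer's `accumulate_grad_batches` argument).

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

accumulation_steps = 4  # effective batch = 4 x the per-step batch size

optimizer.zero_grad()
for step in range(8):
    x = torch.randn(16, 8)                 # stand-in mini-batch
    y = torch.randint(0, 2, (16,))
    loss = loss_fn(model(x), y) / accumulation_steps  # scale so grads average
    loss.backward()                        # gradients accumulate in .grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                   # apply the accumulated gradient
        optimizer.zero_grad()              # reset for the next window
```

Scaling the loss by `accumulation_steps` keeps the accumulated gradient equal (in expectation) to the gradient of one large batch.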
Converting from PyTorch to PyTorch Lightning -
In this video, William Falcon refactors a PyTorch VAE into PyTorch Lightning. As is evident in the video, this was an honest attempt at refactoring an unfamiliar repository without prior knowledge of it. Even so, the full conversion took under 45 minutes.
Lightning Data Modules
In this video, Nate Raw walks you through making data splits and transforms easier to share and reuse across projects with LightningDataModules.
Video lectures
From PyTorch to PyTorch Lightning
This video covers the magic of PyTorch Lightning! We convert the pure PyTorch classification model we created in the previous episode to PyTorch Lightning, which makes all the latest AI best practices trivial. We go over training on single and multiple GPUs, logging and saving models, and much more!
Training a classification model on MNIST with PyTorch
This video covers how to create a PyTorch classification model from scratch! It introduces all the fundamental components: architecture definition, optimizer, loss function, data loader, and Alfredo's infamous 5-step training loop! It also shows you how to train on a GPU, how to add residual connections, and how to use dropout to fight overfitting.
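The fundamentals listed above can be sketched in a few lines of plain PyTorch. This is an illustrative stand-in, not the lecture's code: the tiny model, the random batch, and the mapping of "5 steps" onto the standard zero_grad / forward / loss / backward / step pattern are assumptions made here.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.dropout = nn.Dropout(p=0.5)  # dropout to fight overfitting

    def forward(self, x):
        # Residual connection: add the input back to the block's output
        return x + self.dropout(torch.relu(self.fc(x)))

model = nn.Sequential(ResidualBlock(8), nn.Linear(8, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 8)           # stand-in for a batch of MNIST features
y = torch.randint(0, 10, (32,))  # stand-in labels, 10 classes

optimizer.zero_grad()            # 1. clear old gradients
logits = model(x)                # 2. forward pass
loss = loss_fn(logits, y)        # 3. compute the loss
loss.backward()                  # 4. backpropagate
optimizer.step()                 # 5. update the weights
```

Moving training to a GPU amounts to calling `.to("cuda")` on the model and on each batch before step 2.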