1 | | -.. lightning-pose documentation master file, created by |
2 | | - sphinx-quickstart on Thu Nov 9 13:15:31 2023. |
3 | | - You can adapt this file completely to your liking, but it should at least |
4 | | - contain the root `toctree` directive. |
| 1 | +.. rst-class:: home-display-none |
| 2 | + |
| 3 | +Lightning Pose Homepage |
| 4 | +======================== |
5 | 5 |
6 | 6 | .. image:: images/LightningPose_horizontal_light.png |
7 | 7 |
8 | | -Welcome to the Lightning Pose documentation |
9 | | -=========================================== |
10 | 8 |
11 | | -Lightning Pose is an open source deep learning package for animal pose estimation |
12 | | -(`Biderman, Whiteway et al. 2024, Nature Methods <https://rdcu.be/dLP3z>`_). |
13 | | -The framework is based on Pytorch Lightning and supports accelerated training on unlabeled videos |
14 | | -using NVIDIA DALI. Models can be evaluated with TensorBoard and Streamlit. |
15 | | -We also offer a suite of tools for multi-camera pose estimation. |
| 9 | +.. meta:: |
| 10 | +   :description: Documentation for Lightning Pose, a deep learning toolkit for animal pose estimation. |
| 11 | + |
| 12 | +.. raw:: html |
| 13 | + |
| 14 | + <div style="text-align: center; margin-top: 1em; margin-bottom: 2em;"> |
| 15 | + <p style="font-size: 1.2em;">An end-to-end toolkit for robust multi-view animal pose estimation.</p> |
| 16 | + </div> |
| 17 | + |
| 18 | +.. grid:: 1 1 3 3 |
| 19 | + :gutter: 3 |
| 20 | + :class-container: feature-card-container |
| 21 | + |
| 22 | + .. grid-item-card:: 🛰️ Multi-View |
| 23 | + :class-card: feature-toggle-card |
| 24 | + |
| 25 | + Multiview transformers and patch masking for robust 3D tracking. |
| 26 | + |
| 27 | + .. grid-item-card:: 🎬 Single-View |
| 28 | + :class-card: feature-toggle-card |
| 29 | + |
| 30 | + Temporal context networks that learn from unlabeled video. |
| 31 | + |
| 32 | + .. grid-item-card:: ☁️ Cloud Ready |
| 33 | + :class-card: feature-toggle-card |
| 34 | + |
| 35 | + Browser-based labeling and training on headless GPU servers. |
| 36 | + |
| 37 | +.. rst-class:: multi-view-section |
| 38 | + |
| 39 | +Multi-view Capabilities |
| 40 | +------------------------ |
| 41 | + |
| 42 | +* **Multi-View Transformer (MVT):** A unified architecture that enables simultaneous processing of information across all camera views through early-feature fusion. |
| 43 | +* **Patch Masking:** A novel training scheme that masks random image patches to force the model to learn robust cross-view correspondences. |
| 44 | +* **Geometric Consistency:** For calibrated setups, the framework incorporates 3D triangulation losses and geometrically aware 3D data augmentation. |
| 45 | +* **Variance Inflation:** An advanced technique for outlier detection that identifies geometrically inconsistent predictions. |
| 46 | + |
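The patch-masking scheme above can be sketched in a few lines of NumPy. This is an illustrative example only, not the library's implementation; the function name and parameters are hypothetical, and the real training scheme operates on transformer input patches.

```python
import numpy as np

def mask_random_patches(image, patch_size=16, mask_ratio=0.3, rng=None):
    """Zero out a random subset of non-overlapping square patches.

    Hypothetical illustration of patch masking: hiding regions of one
    view forces the model to recover them from the other camera views.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    grid_h, grid_w = h // patch_size, w // patch_size   # patch grid size
    n_masked = int(grid_h * grid_w * mask_ratio)        # patches to hide
    masked = image.copy()
    # pick distinct patch indices, then zero each corresponding block
    for idx in rng.choice(grid_h * grid_w, size=n_masked, replace=False):
        row, col = divmod(int(idx), grid_w)
        y, x = row * patch_size, col * patch_size
        masked[y:y + patch_size, x:x + patch_size] = 0
    return masked
```

During training, a masking like this would be applied independently per view, so that at least one camera is likely to retain each hidden region.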
| 47 | +.. rst-class:: single-view-section |
| 48 | + |
| 49 | +Single-view Capabilities |
| 50 | +------------------------- |
| 51 | + |
| 52 | +* **Temporal Context Networks:** Utilizes information from surrounding frames to resolve anatomical ambiguities and maintain tracking through brief occlusions. |
| 53 | +* **Unsupervised Learning:** Employs training objectives on unlabeled video that penalize physically implausible predictions, such as temporal discontinuities and improbable poses. |
| 54 | +* **Pretrained Backbone Support:** Optimized to work with generic, off-the-shelf Vision Transformer (ViT) backbones. |
| 55 | + |
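One unsupervised objective of the kind described above is a temporal-smoothness penalty on predicted keypoints. The sketch below is illustrative only; the function name and threshold are hypothetical and this is not the library's loss implementation.

```python
import numpy as np

def temporal_difference_penalty(keypoints, threshold=20.0):
    """Penalize implausibly large frame-to-frame keypoint jumps.

    keypoints: array of shape (n_frames, n_keypoints, 2), in pixels.
    Displacements below `threshold` pixels are treated as plausible
    motion; only the excess above the threshold is penalized.
    """
    # per-keypoint displacement between consecutive frames
    jumps = np.linalg.norm(np.diff(keypoints, axis=0), axis=-1)
    # hinge: only displacements beyond the plausibility threshold count
    excess = np.maximum(jumps - threshold, 0.0)
    return float(excess.mean())
```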
| 56 | +.. rst-class:: cloud-section |
| 57 | + |
| 58 | +Cloud Application & Workflow |
| 59 | +----------------------------- |
| 60 | + |
| 61 | +* **Cloud & Headless Compatibility:** A browser-based interface designed for local or cloud deployment. |
| 62 | +* **Multi-view Labeling:** A specialized annotation tool that streamlines the labeling process by using camera calibration. |
| 63 | +* **Unified Multi-view Viewer:** Integrated visualization tools to inspect and compare predictions across all camera views simultaneously. |
| 64 | + |
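Calibration-aided multi-view labeling works because a point annotated in two or more views determines a 3D location that can be reprojected into the remaining views. The sketch below shows the standard direct linear transform (DLT) triangulation behind this idea; it is a generic textbook method, not the app's implementation.

```python
import numpy as np

def triangulate_dlt(proj_mats, points_2d):
    """Triangulate one 3D point from its 2D locations in multiple views.

    proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of (x, y) coordinates, one per view.
    Each view contributes two linear constraints; the 3D point is the
    homogeneous least-squares solution of the stacked system.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.asarray(rows)
    # right singular vector of the smallest singular value spans the solution
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With the 3D point in hand, reprojecting through each remaining view's projection matrix yields candidate 2D labels for the unannotated views.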
| 65 | +.. raw:: html |
| 66 | + |
| 67 | + <script> |
| 68 | + document.addEventListener("DOMContentLoaded", function() { |
| 69 | + const cards = document.querySelectorAll('.feature-toggle-card'); |
| 70 | + const detailSections = [ |
| 71 | + document.querySelector('.multi-view-section'), |
| 72 | + document.querySelector('.single-view-section'), |
| 73 | + document.querySelector('.cloud-section') |
| 74 | + ]; |
| 75 | +
| 76 | + function showSection(index) { |
| 77 | + // Hide all sections and remove active styling from cards |
| 78 | + detailSections.forEach((sec, i) => { |
| 79 | + if (sec) sec.style.display = 'none'; |
| 80 | + cards[i].style.border = '1px solid var(--sd-color-card-border)'; |
| 81 | + cards[i].style.backgroundColor = 'transparent'; |
| 82 | + }); |
| 83 | +
| 84 | + // Show the selected section |
| 85 | + if (detailSections[index]) { |
| 86 | + detailSections[index].style.display = 'block'; |
| 87 | + // Add active styling to card |
| 88 | + cards[index].style.border = '2px solid #3498db'; |
| 89 | + cards[index].style.backgroundColor = 'rgba(52, 152, 219, 0.05)'; |
| 90 | + } |
| 91 | + } |
| 92 | +
| 93 | + // Event listeners for clicks |
| 94 | + cards.forEach((card, index) => { |
| 95 | + card.style.cursor = 'pointer'; |
| 96 | + card.addEventListener('click', () => showSection(index)); |
| 97 | + card.addEventListener('mouseenter', () => showSection(index)); |
| 98 | + card.addEventListener('touchstart', () => showSection(index), { passive: true }); |
| 99 | +
| 100 | + }); |
| 101 | +
| 102 | + // Select the first card by default |
| 103 | + showSection(0); |
| 104 | + }); |
| 105 | + </script> |
| 106 | + |
| 107 | + <style> |
| 108 | + /* Enhance hover effect */ |
| 109 | + .feature-toggle-card:hover { |
| 110 | + transform: translateY(-2px); |
| 111 | + transition: all 0.2s ease; |
| 112 | + box-shadow: 0 4px 12px rgba(0,0,0,0.1); |
| 113 | + } |
| 114 | + </style> |
| 115 | + |
| 116 | + |
| 117 | +-------- |
| 118 | + |
| 119 | +Read the papers |
| 120 | +---------------- |
16 | 121 |
17 | | -If you would like to try out Lightning Pose, we provide a |
18 | | -`Google Colab notebook <https://colab.research.google.com/github/paninski-lab/lightning-pose/blob/main/scripts/litpose_training_demo.ipynb>`_ |
19 | | -that steps through the process of training and evaluating a model on an example dataset |
20 | | -- no data labeling or software installation required! |
| 122 | +The original 2024 *Nature Methods* paper that introduced Lightning Pose for single-view pose estimation using semi-supervised learning and Ensemble Kalman Smoothing (EKS): |
21 | 123 |
22 | | -We also provide a |
23 | | -`browser-based GUI <https://github.com/Lightning-Universe/Pose-app>`_ |
24 | | -that supports the full life cycle of a pose estimation project, from data annotation to model |
25 | | -training to diagnostic visualizations. |
| 124 | +| **Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling and cloud-native open-source tools** |
| 125 | +| Biderman, D., Whiteway, M. R., Hurwitz, C., et al. |
| 126 | +
| 127 | +.. grid:: |
| 128 | + :padding: 0 |
| 129 | + :margin: 2 0 0 0 |
| 130 | + |
| 131 | + .. grid-item:: |
| 132 | + .. button-link:: https://pmc.ncbi.nlm.nih.gov/articles/PMC12087009/ |
| 133 | + :color: primary |
| 134 | + :outline: |
| 135 | + |
| 136 | + *Nature Methods* 21, 1316–1328 (2024) |
| 137 | + |
| 138 | +The 2025 preprint that added robust multi-view support using the Multi-View Transformer (MVT), |
| 139 | +patch masking, 3D image augmentations and losses, and multi-view EKS: |
| 140 | + |
| 141 | +| **An Uncertainty-Aware Framework for Data-Efficient Multi-View Animal Pose Estimation** |
| 142 | +| Aharon, L., Lee, K., et al. |
| 143 | +
| 144 | +.. grid:: |
| 145 | + :padding: 0 |
| 146 | + :margin: 2 0 0 0 |
| 147 | + |
| 148 | + .. grid-item:: |
| 149 | + .. button-link:: https://arxiv.org/abs/2510.09903 |
| 150 | + :color: primary |
| 151 | + :outline: |
| 152 | + |
| 153 | + arXiv Preprint (2025) |
| 154 | + |
| 155 | +.. rst-class:: section-multi-view |
| 156 | + |
| 157 | +-------- |
| 158 | + |
| 159 | + |
| 160 | +Get started with the app |
| 161 | +------------------------- |
| 162 | + |
| 163 | +The Lightning Pose app provides an easy-to-use GUI for most Lightning Pose features. |
| 164 | + |
| 165 | +To get started, :doc:`install Lightning Pose <source/installation_guide>` |
| 166 | +and follow the :doc:`Create your first project <source/create_first_project>` tutorial. |
| 167 | +It covers the end-to-end workflow of labeling, training, and evaluation. |
26 | 168 |
27 | 169 | .. toctree:: |
28 | 170 | :maxdepth: 2 |
29 | | - :caption: Contents: |
| 171 | + :hidden: |
| 172 | + |
| 173 | + self |
30 | 174 |
31 | | - source/installation |
32 | | - source/user_guide/index |
| 175 | +.. toctree:: |
| 176 | + :maxdepth: 2 |
| 177 | + :hidden: |
| 178 | + :caption: Getting started |
| 179 | + |
| 180 | + source/installation_guide |
| 181 | + source/core_concepts |
| 182 | + source/create_first_project |
| 183 | + source/example_data |
| 184 | + source/importing_labeled_data |
| 185 | + |
| 186 | +.. toctree:: |
| 187 | + :maxdepth: 4 |
| 188 | + :hidden: |
| 189 | + :caption: CLI User guide |
| 190 | + |
| 191 | + source/user_guide_singleview/index |
33 | 192 | source/user_guide_multiview/index |
34 | 193 | source/user_guide_advanced/index |
| 194 | + |
| 195 | +.. toctree:: |
| 196 | + :maxdepth: 2 |
| 197 | + :hidden: |
| 198 | + :caption: Community |
| 199 | + |
35 | 200 | source/developer_guide/index |
36 | 201 | source/faqs |
37 | | - source/api |
38 | | - source/cli |
| 202 | + Release notes <https://github.com/paninski-lab/lightning-pose/releases> |
| 203 | + source/migrating_to_app |
39 | 204 |
40 | | -Indices and tables |
41 | | ------------------- |
| 205 | +.. toctree:: |
| 206 | + :maxdepth: 2 |
| 207 | + :hidden: |
| 208 | + :caption: Reference |
42 | 209 |
43 | | -* :ref:`genindex` |
44 | | -* :ref:`modindex` |
45 | | -* :ref:`search` |
| 210 | + source/user_guide_singleview/config_file |
| 211 | + source/api |
| 212 | + source/cli_reference/index |