diff --git a/README.md b/README.md
index 00b6a7208..2fbcc3b96 100644
--- a/README.md
+++ b/README.md
@@ -6,7 +6,7 @@
-
+
@@ -14,8 +14,8 @@
Installation •
- Examples •
- Basics •
+ Examples •
+ Basics •
Cite us •
License
@@ -91,42 +91,35 @@ Everybody learns in different ways! Depending on your preferences, and what you
## Getting-started guides
-We have two separate series of notebooks which aims to teach you all you need to know to use DeepTrack to its fullest. The first is a set of six notebooks with a focus on the application.
+We have a set of four notebooks that aim to teach you everything you need to know to use DeepTrack to its fullest, with a focus on applications.
-1. deeptrack_introduction_tutorial gives an overview of how to use DeepTrack 2.1.
-2. tracking_particle_cnn_tutorial demonstrates how to track a point particle with a convolutional neural network (CNN).
-3. tracking_particle_cnn_tutorial demonstrates how to track multiple particles using a U-net.
-4. characterizing_aberrations_tutorial demonstrates how to add and characterize aberrations of an optical device.
-5. distinguishing_particles_in_brightfield_tutorial demonstrates how to use a U-net to track and distinguish particles of different sizes in brightfield microscopy.
-6. analyzing_video_tutorial demonstrates how to create videos and how to train a neural network to analyze them.
+1. deeptrack_introduction_tutorial Gives an overview of how to use DeepTrack 2.1.
+2. tracking_particle_cnn_tutorial Demonstrates how to track a point particle with a convolutional neural network (CNN).
+3. tracking_multiple_particles_unet_tutorial Demonstrates how to track multiple particles using a U-net.
+4. distinguishing_particles_in_brightfield_tutorial Demonstrates how to use a U-net to track and distinguish particles of different sizes in brightfield microscopy.
-The second series focuses on individual topics, introducing them in a natural order.
-1. Introducing how to create simulation pipelines and train models.
-2. Demonstrating data generators.
-3. Demonstrating how to customize models using layer-blocks.
## DeepTrack 2.1 in action
-Additionally, we have seven more case studies which are less documented, but gives additional insight in how to use DeepTrack with real datasets
+Additionally, we have six more case studies which are less documented but give additional insight into how to use DeepTrack with real datasets.
-1. [MNIST](examples/paper-examples/1-MNIST.ipynb) classifies handwritted digits.
-2. [single particle tracking](examples/paper-examples/2-single_particle_tracking.ipynb) tracks experimentally captured videos of a single particle. (Requires opencv-python compiled with ffmpeg to open and read a video.)
-3. [single particle sizing](examples/paper-examples/3-particle_sizing.ipynb) extracts the radius and refractive index of particles.
-4. [multi-particle tracking](examples/paper-examples/4-multi-molecule-tracking.ipynb) detects quantum dots in a low SNR image.
-5. [3-dimensional tracking](examples/paper-examples/5-inline_holography_3d_tracking.ipynb) tracks particles in three dimensions.
-6. [cell counting](examples/paper-examples/6-cell_counting.ipynb) counts the number of cells in fluorescence images.
-7. [GAN image generation](examples/paper-examples/7-GAN_image_generation.ipynb) uses a GAN to create cell image from masks.
+1. [Single Particle Tracking](examples/paper-examples/2-single_particle_tracking.ipynb) Tracks experimental videos of a single particle. (Requires opencv-python compiled with ffmpeg.)
+2. [Multi-Particle Tracking](examples/paper-examples/4-multi-molecule-tracking.ipynb) Detects quantum dots in a low-SNR image.
+3. [Particle Feature Extraction](examples/paper-examples/3-particle_sizing.ipynb) Extracts the radius and refractive index of particles.
+4. [Cell Counting](examples/paper-examples/6-cell_counting.ipynb) Counts the number of cells in fluorescence images.
+5. [3D Multi-Particle Tracking](examples/paper-examples/5-inline_holography_3d_tracking.ipynb) Tracks particles in three dimensions.
+6. [GAN Image Generation](examples/paper-examples/7-GAN_image_generation.ipynb) Uses a GAN to create cell images from masks.
## Model-specific examples
We also have examples that are specific to certain models. These include
- [*LodeSTAR*](examples/LodeSTAR) for label-free particle tracking.
-- [*MAGIK*](deeptrack/models/gnns/) for graph-based particle linking and trace characterization.
+- [*MAGIK*](examples/MAGIK) for graph-based particle linking and trace characterization.
## Documentation
-The detailed documentation of DeepTrack 2.1 is available at the following link: https://softmatterlab.github.io/DeepTrack2/deeptrack.html
+The detailed documentation of DeepTrack 2.1 is available at the following link: [https://deeptrackai.github.io/DeepTrack2](https://deeptrackai.github.io/DeepTrack2)
## Video Tutorials
diff --git a/examples/MAGIK/readme.md b/examples/MAGIK/readme.md
new file mode 100644
index 000000000..cbd6f1612
--- /dev/null
+++ b/examples/MAGIK/readme.md
@@ -0,0 +1,66 @@
+# MAGIK
+
+MAGIK is a geometric deep learning approach for the analysis of dynamical properties from time-lapse microscopy.
+Here we provide the code as well as instructions to train models and to analyze experimental data.
+
+# Getting started
+
+## Installation from PyPI
+
+MAGIK requires at least Python 3.6.
+
+To install MAGIK, install the [DeepTrack](https://github.com/softmatterlab/DeepTrack-2.0) framework. Open a terminal or command prompt and run:
+
+ pip install deeptrack
+
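+A quick way to check that the installation worked is to import the package. The following is a minimal sketch; it assumes only that the package installs under the name `deeptrack` and that the MAGIK models are provided in `deeptrack.models.gnns`, as referenced in the DeepTrack repository:
+
+```
+# Minimal installation check (a sketch; assumes a standard `pip install deeptrack`).
+import deeptrack as dt
+from deeptrack.models import gnns  # module containing the MAGIK models
+
+# Report the installed version if the attribute is available.
+print(getattr(dt, "__version__", "deeptrack imported successfully"))
+```
+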
+## Software requirements
+
+### OS Requirements
+
+MAGIK has been tested on the following systems:
+
+- macOS: Monterey (12.2.1)
+- Windows: 10 (64-bit)
+
+### Python Dependencies
+
+```
+tensorflow
+numpy
+matplotlib
+scipy
+Sphinx==2.2.0
+pydata-sphinx-theme
+numpydoc
+scikit-image
+tensorflow-probability
+pint
+pandas
+
+```
+
+If you have a very recent version of Python, you may need to install numpy _before_ DeepTrack. This is a known issue with scikit-image.
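+
+For example, on such a setup the installation order would simply be:
+
+    pip install numpy
+    pip install deeptrack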
+
+## It's a kind of MAGIK...
+
+To see MAGIK in action, we provide an [example](//github.com/softmatterlab/DeepTrack-2.0/blob/develop/examples/MAGIK/) based on live-cell migration experiments. Data courtesy of Sergi Masó Orriols, [the QuBI lab](https://mon.uvic.cat/qubilab/).
+
+## Cite us!
+
+If you use MAGIK in your project, please cite our article:
+
+```
+Jesús Pineda, Benjamin Midtvedt, Harshith Bachimanchi, Sergio Noé, Daniel Midtvedt, Giovanni Volpe, and Carlo Manzo
+"Geometric deep learning reveals the spatiotemporal fingerprint of microscopic motion."
+arXiv 2202.06355 (2022).
+https://arxiv.org/pdf/2202.06355.pdf
+```
+
+## Funding
+
+This work was supported by FEDER/Ministerio de Ciencia, Innovación y Universidades – Agencia Estatal de Investigación
+through the “Ramón y Cajal” program 2015 (Grant No. RYC-2015-17896) and the “Programa Estatal de I+D+i Orientada a los Retos de la Sociedad” (Grant No. BFU2017-85693-R); the Generalitat de Catalunya (AGAUR Grant No. 2017SGR940); the ERC Starting Grant ComplexSwimmers (Grant No. 677511); the ERC Starting Grant MAPEI (101001267); and the Knut and Alice Wallenberg Foundation (Grant No. 2019.0079).
+
+## License
+
+This project is covered under the **MIT License**.