Welcome to pymoten!

What is pymoten?

pymoten is a Python package that provides a convenient way to extract motion energy features from video using a pyramid of spatio-temporal Gabor filters [1] [2]. The filters are created at multiple spatial and temporal frequencies, directions of motion, x-y positions, and sizes. Each filter quadrature pair is convolved with the video, and its activation energy is computed for each frame. These features provide a good basis to model brain responses to natural movies [3] [4].
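As a hedged illustration of the quadrature-pair energy idea (a toy 1-D sketch, not pymoten's actual filter code): squaring and summing the responses of two Gabor filters 90 degrees out of phase yields a measure that is largely invariant to the phase of the input.

```python
import numpy as np

# Toy 1-D quadrature pair: even (cosine) and odd (sine) Gabor filters
# sharing one Gaussian envelope. This only illustrates the energy idea;
# pymoten builds full spatio-temporal filters.
t = np.linspace(-1, 1, 201)
freq = 3.0                      # cycles per unit (arbitrary choice)
envelope = np.exp(-t**2 / 0.2)
gabor_even = envelope * np.cos(2 * np.pi * freq * t)
gabor_odd = envelope * np.sin(2 * np.pi * freq * t)

def energy(signal):
    # Dot products stand in for convolution at a single location.
    return (signal @ gabor_even) ** 2 + (signal @ gabor_odd) ** 2

# The energy of a sinusoid at the filter frequency barely depends on its
# phase, unlike the response of either filter alone.
e0 = energy(np.cos(2 * np.pi * freq * t))
e1 = energy(np.cos(2 * np.pi * freq * t + 1.2))
```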

Installation

Using pip, install the latest development version from GitHub:

pip install git+https://github.com/gallantlab/pymoten.git

Or install the most recent release from PyPI:

pip install pymoten

Getting started

Example using synthetic data

import moten
import numpy as np

# Generate synthetic data
nimages, vdim, hdim = (100, 90, 180)
noise_movie = np.random.randn(nimages, vdim, hdim)

# Create a pyramid of spatio-temporal Gabor filters
pyramid = moten.get_default_pyramid(vhsize=(vdim, hdim), fps=24)

# Compute motion energy features
moten_features = pyramid.project_stimulus(noise_movie)
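project_stimulus returns one feature vector per frame. A hedged post-processing sketch (not part of pymoten's API; the random array below is a stand-in for moten_features): averaging the frame-rate features within fixed windows, a common step when relating them to slower measurements.

```python
import numpy as np

# Stand-in for pyramid.project_stimulus output: (n_frames, n_filters)
features = np.random.randn(96, 10)

fps = 24
window = fps  # average over 1-second windows (an assumed choice)
n_windows = features.shape[0] // window
trimmed = features[: n_windows * window]  # drop any incomplete trailing window
downsampled = trimmed.reshape(n_windows, window, -1).mean(axis=1)
# downsampled: one averaged feature vector per 1-second window
```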

Simple example using a video file

import moten

# Stream and convert the RGB video into a sequence of luminance images
video_file = 'http://anwarnunez.github.io/downloads/avsnr150s24fps_tiny.mp4'
luminance_images = moten.io.video2luminance(video_file, nimages=100)

# Create a pyramid of spatio-temporal Gabor filters
nimages, vdim, hdim = luminance_images.shape
pyramid = moten.get_default_pyramid(vhsize=(vdim, hdim), fps=24)

# Compute motion energy features
moten_features = pyramid.project_stimulus(luminance_images)
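moten.io.video2luminance handles the streaming and conversion in pymoten. As a hedged sketch of the underlying RGB-to-luminance idea (pymoten's own implementation may use a different weighting), the standard Rec. 709 luma weights look like this:

```python
import numpy as np

# Stand-in RGB frames: (n_frames, vdim, hdim, 3), values in [0, 1]
rgb_frames = np.random.rand(5, 90, 180, 3)

# Rec. 709 luma weights for R, G, B (an assumption about the conversion)
weights = np.array([0.2126, 0.7152, 0.0722])
luminance_images = rgb_frames @ weights  # shape: (5, 90, 180)
```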

Cite as

Nunez-Elizalde AO, Deniz F, Dupré la Tour T, Visconti di Oleggio Castello M, and Gallant JL (2021). pymoten: scientific python package for computing motion energy features from video. Zenodo. https://doi.org/10.5281/zenodo.6349625

References

[1] Adelson, E. H., & Bergen, J. R. (1985). Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2), 284-299.
[2] Watson, A. B., & Ahumada, A. J. (1985). Model of human visual-motion sensing. Journal of the Optical Society of America A, 2(2), 322-342.
[3] Nishimoto, S., & Gallant, J. L. (2011). A three-dimensional spatiotemporal receptive field model explains responses of area MT neurons to naturalistic movies. Journal of Neuroscience, 31(41), 14551-14564.
[4] Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641-1646.

A MATLAB implementation can be found here.