# Transparent lazy data access for matlab

By default, [Matlab][1] matrices must be fully loaded into memory. This can make allocating and working with
huge matrices a pain, especially if you only _really_ need access to a small portion of the matrix at a time.
[`memmapfile`][2] allows the data for a matrix to be stored on disk, but you can't access the matrix transparently
in functions that don't expect a [`memmapfile`][2] object without reading in the whole matrix. `MappedTensor` is
a matlab class that looks like a simple matlab tensor, with all the data stored on disk.

A few extra niceties over [`memmapfile`][2] are included, such as built-in per-slice access; fast addition,
subtraction, multiplication and division by scalars; fast negation; permutation; and complex support.

Tensor data is automatically allocated on disk in a temporary file, which is removed when all referencing
objects are cleared. Existing binary files can also be accessed. `MappedTensor` is a handle class, which means
that assigning an existing mapped tensor to another variable _will not_ make a copy; both variables will point
to the same data, and changing the data through one variable changes it for both.

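A minimal sketch of the handle semantics (the variable names here are illustrative only):

    mtA = MappedTensor(100, 100);
    mtB = mtA;         % no copy is made; both handles refer to the same file on disk
    mtB(1, 1) = 5;     % the change is also visible as mtA(1, 1)
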
MappedTensor internally uses `mex` functions, which need to be compiled the first time MappedTensor is used. If
compilation fails, slower non-mex versions will be used instead.

## Download and install

Download [MappedTensor][3] and unzip the `@MappedTensor` directory to somewhere on the [Matlab][1] path. The *@*
(at) symbol in the directory name is important, as it signals to [Matlab][1] that this is a class directory.

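For example, a short sketch of putting the class on the path (the folder shown is a hypothetical location):

    % Add the folder that *contains* @MappedTensor to the path, not @MappedTensor itself
    addpath('/home/user/matlab/toolboxes');
    savepath;   % optional: persist the change between sessions
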
## Creating a MappedTensor object

    mtVariable = MappedTensor(vnTensorSize)
    mtVariable = MappedTensor(nDim1, nDim2, nDim3, ...)
    mtVariable = MappedTensor(strExistingFilename, ...)
    mtVariable = MappedTensor(..., 'Class', strClassName)

`vnTensorSize`, or `[nDim1 nDim2 nDim3 ...]`, defines the desired size of the variable. By default, a new binary
temporary file will be generated, and deleted when the `mtVariable` is destroyed. `strExistingFilename` can be
used to map an existing file on disk, but the full size (and class) of the file must be known and specified in
advance. This file will not be removed when all handle references are destroyed.

By default the tensor will have class `double`. This can be specified as an argument to `MappedTensor`. Supported
classes: `char`, `int8`, `uint8`, `logical`, `int16`, `uint16`, `int32`, `uint32`, `single`, `int64`, `uint64`, `double`.

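A hedged sketch of the construction patterns above; the file name and sizes are made up, and the assumption is that
for an existing file the dimensions and class are simply passed after the file name, per the usage lines above:

    % New temporary tensor, 128x128x100, stored as single-precision values
    mtStack = MappedTensor(128, 128, 100, 'Class', 'single');

    % Map an existing binary file; its size and class must be known in advance
    mtExisting = MappedTensor('existing_data.bin', 128, 128, 100, 'Class', 'uint16');
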
## Usage examples

    size(mtVariable)
    mtVariable(:) = rand(100, 100, 100);
    mfData = mtVariable(:, :, 34, 2);
    mtVariable(12) = 1+6i;

**Note**: `mtVariable = rand(100, 100, 100);` would overwrite the mapped tensor with a standard matlab tensor!
To assign to the entire tensor you must use colon referencing: `mtVariable(:) = ...`

It's not clear why you would do this anyway, because the right-hand side of the assignment already allocates
enough space for the full tensor in memory... which is presumably what you're trying to avoid.

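If avoiding a large in-memory allocation is the point, one alternative (a sketch; the sizes are illustrative) is to
fill the tensor one slice at a time, so that only a single slice ever exists in memory:

    mtVariable = MappedTensor(100, 100, 100);
    for nSlice = 1:size(mtVariable, 3)
       mtVariable(:, :, nSlice) = rand(100, 100);   % only a 100x100 slice is held in memory
    end
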
Permute is supported. Complex numbers are supported (a definite improvement over [`memmapfile`][2]). `transpose`
(`A.'`) and `ctranspose` (`A'`) are both supported. Transposition just swaps the first two dimensions, leaving
the trailing dimensions unpermuted.

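A sketch of the calling syntax only; whether these calls return a new tensor or re-index the same underlying data
in place is not spelled out above, so treat the variable names as illustrative:

    mtPerm = permute(mtVar, [3 1 2]);   % e.g. a 10x20x30 tensor becomes 30x10x20
    mtT = mtPerm';                      % ctranspose: swaps the first two dimensions
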
Unary plus (`+A`) and minus (`-A`) are supported. Binary plus (`A+B`), minus (`A-B`) and times (`A*B`, `A.*B`) are
supported as long as one of `A` or `B` is a scalar. Division (`A/B`, `A./B`, `B/A`, `B./A`) is supported, as long as `B` is a scalar.

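For example (again only a sketch of the syntax; because `MappedTensor` is a handle class, the result may refer to
the same underlying storage rather than a fresh copy):

    mtScaled = mtVar .* 2;    % multiply every element by a scalar
    mtShifted = mtVar + 1;    % add a scalar
    mtNeg = -mtVar;           % fast negation
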
Save and load are minimally supported: data is _not_ saved, but on load a new mapped tensor will be generated and
filled with zeros. Both save and load generate warnings.

Dot referencing (`A.something`) is not supported.

`sum(mtVar <, nDimension>)` is implemented internally, to avoid having to read the entire tensor into memory.

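For instance, assuming the same calling convention as the built-in `sum`:

    vfSums = sum(mtVar);          % sum along the first non-singleton dimension (built-in default)
    mfSliceSums = sum(mtVar, 3);  % sum along the third dimension
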
## Convenience methods

`SliceFunction`: Execute a function on the entire tensor, by slicing it along a specified dimension, and store the
results back in the tensor.

Usage: `[<mtNewVar>] = SliceFunction(mtVar, fhFunctionHandle, nSliceDim <, vnSliceSize,> ...)`

`mtVar` is a MappedTensor. This tensor will be sliced up along dimension `nSliceDim`, with each slice passed
individually to `fhFunctionHandle`, along with any trailing arguments (`...`). If no return argument is supplied, the
results will be stored back in `mtVar`. If a return argument is supplied, a new `MappedTensor` will be created to
contain the results. The optional argument `vnSliceSize` can be used to call a function that returns a different sized
output than the size of a single slice of `mtVar`. In that case, a new tensor `mtNewVar` will be generated, and it
will have the size `vnSliceSize`, with the dimension `nSliceDim` having the same length as in the original tensor `mtVar`.

"Slice assign" operations can be performed by passing in a function that takes no input arguments for `fhFunctionHandle`.

For example:

    mtVar(:) = abs(fft2(mtVar(:, :, :)));

is equivalent to

    SliceFunction(mtVar, @(x)(abs(fft2(x))), 3);

Each slice of the third dimension of `mtVar`, taken in turn, is passed to `fft2` and the result stored back into the
same slice of `mtVar`.

    mtVar2 = SliceFunction(mtVar, @(x)fft2(x), 3);

This will return the result in a new, complex `MappedTensor` with temporary storage.

    mtVar2 = SliceFunction(mtVar, @(x)sum(x), 3, [1 10 1]);

This will create a new `MappedTensor` with size `[1 10 N]`, where `N` is the length along dimension 3 of `mtVar`.

    SliceFunction(mtVar, @()(randn(10, 10)), 3);

This will assign random numbers to each slice of `mtVar` independently.

    SliceFunction(mtVar, @(x, n)(x .* vfFactor(n)), 3);

The second argument to the function receives the index of the current slice. This line will multiply each slice in
`mtVar` by a scalar corresponding to that slice index.

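For completeness, a sketch that defines the per-slice factors before the call; `vfFactor` is not part of the
library, it is just a vector you supply:

    vfFactor = linspace(0, 1, size(mtVar, 3));     % one factor per slice along dimension 3
    SliceFunction(mtVar, @(x, n)(x .* vfFactor(n)), 3);
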
## Publications

This work was published in [Frontiers in Neuroinformatics][4]: DR Muir and BM Kampa. 2015. [_FocusStack and StimServer:
A new open source MATLAB toolchain for visual stimulation and analysis of two-photon calcium neuronal imaging data_][5],
**Frontiers in Neuroinformatics** 8 _85_. DOI: [10.3389/fninf.2014.00085](http://dx.doi.org/10.3389/fninf.2014.00085).
Please cite our publication in lieu of thanks, if you use this code.

[1]: http://www.mathworks.com
[2]: http://www.mathworks.com/help/techdoc/ref/memmapfile.html
[3]: /resources/code/MappedTensor.zip
[4]: http://www.frontiersin.org/neuroinformatics
[5]: http://dx.doi.org/10.3389/fninf.2014.00085