docs/source/annotations.rst (+1 -1)
@@ -71,4 +71,4 @@ If we do not know the exact path of the *Section* we are looking for, we need to
 .. literalinclude:: examples/annotations.py
    :lines: 28 - 31

-The result of the ``find_sections`` will always be list which may be empty if no match was found. Therefore, the call in the last line is to some extent risky and would lead to an OutOfBounds exception if the search failed.
+The result of ``find_sections`` will always be a list, which may be empty if no match was found. Therefore, the call in the last line is somewhat risky and would lead to an ``OutOfBounds`` exception if the search failed.
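For reference, a minimal sketch of the guarded call (the file name and the filter are assumptions, not taken from the example code):

.. code-block:: python

    import nixio

    nixfile = nixio.File.open("annotations.nix", nixio.FileMode.ReadOnly)
    # find_sections returns a list that may be empty, so guard before indexing
    sections = nixfile.find_sections(lambda s: s.type == "nix.experiment")
    if sections:
        print(sections[0])
    else:
        print("no matching section found")
    nixfile.close()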
docs/source/data_handling.rst (+7 -8)
@@ -4,12 +4,12 @@
 Working with data
 =================

-Storing data is one thing, but we want to work with it. The following examples illustrate reading of data from *DataArray*, *Tag* and *MultiTag* entities We will use the dummy dataset already used in the :doc:`tagging <./tagging>` example. The figure below shows what is stored in the dataset.
+Storing data is one thing, but we want to work with it. The following examples illustrate reading data from *DataArray*, *Tag* and *MultiTag* entities. We will use the dummy dataset already used in the :doc:`tagging <./tagging>` example. The figure below shows what is stored in the dataset.

 .. figure:: ./images/tag1.png
    :alt: a system's response to a stimulus

-At some instance in time a system was exposed to a stimulus that leads to the system's response. The response has been recorded and stored in a *DataArray* the A *Tag* is used to highlight the "stimulus-on" segment of the data.
+At some instant in time a system was exposed to a stimulus that led to the system's response. The response has been recorded and stored in a *DataArray*; a *Tag* is used to highlight the "stimulus-on" segment of the data.
@@ -29,7 +29,7 @@ The first and maybe most common problem is to read the data stored in a
 Reading all data
 ~~~~~~~~~~~~~~~~

-In *NIX* when you open a *DataArray* the stored the data is **not** automatically read from file. This keeps the object lightweight and easy to create. To read the data you can simply access the data in a numpy style:
+In *NIX*, when you open a *DataArray* the stored data is **not** automatically read from file. This keeps the object lightweight and easy to create. To read the data you can simply access it in numpy style:
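A minimal sketch of the numpy-style access (file, block, and array names are assumptions):

.. code-block:: python

    import nixio

    nixfile = nixio.File.open("data_handling.nix", nixio.FileMode.ReadOnly)
    data_array = nixfile.blocks[0].data_arrays["response"]

    all_data = data_array[:]     # reads the complete data from file
    first_ten = data_array[:10]  # reads only the first ten samples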
-An alternative approach is to use the ``DataArray.get_slice`` method which by default works with indices but can also work in data coordinates. E.g. we know that the data is 1-D and covers a span of 3.5s and we want to have the data in the interval 0.5s through 1.75s. The method returns a ``nixio.DataView`` object. The actual reading is done be accessing the data.
+An alternative approach is to use the ``DataArray.get_slice`` method which by default works with indices but can also work in data coordinates. E.g. we know that the data is 1-D and covers a span of 3.5s and we want the data in the interval 0.5s through 1.75s. The method returns a ``nixio.DataView`` object. The actual reading is done by accessing the data.
 The arguments ``positions`` and ``extents`` are passed as lists. There must be one entry for each dimension of the data. In this case, since the data is 1-D, positions and extents are 1-element lists.
-Note: the slice is defined by the starting point(s) and the *extent(s)* not with start and end points.
+Note: the slice is defined by the starting point(s) and the *extent(s)*, not by start and end points.
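Continuing the sketch above (``DataSliceMode.Data`` switches the method from index- to coordinate-based slicing):

.. code-block:: python

    # 0.5 is the start position, 1.25 the extent: the slice covers 0.5s .. 1.75s
    view = data_array.get_slice([0.5], [1.25], mode=nixio.DataSliceMode.Data)
    segment = view[:]  # the actual reading happens here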

 Reading tagged data
 ~~~~~~~~~~~~~~~~~~~
@@ -85,7 +85,7 @@ In order to read the data that belongs to the highlighted region(s) *Tag* and *M
    :emphasize-lines: 10
    :caption: Reading data segments tagged by the *Tag* or *MultiTag* can be done using the ``tagged_data`` method (:download:`example code <examples/multiple_regions.py>`).

-The *MultiTag* version of the ``tagged_data`` method takes two arguments. The first is the index of the tagged region (0 for the first), the second argument is name (you can also use the index or the id) of the referenced *DataArray*. Since the *Tag* tags only a single region, it only takes one argument, i.e. the name (id, index) of the referenced *DataArray*.
+The *MultiTag* version of the ``tagged_data`` method takes two arguments. The first is the index of the tagged region (0 for the first); the second is the name of the referenced *DataArray* (you can also use the index or the id). Since the *Tag* tags only a single region, it takes only one argument, i.e. the name (id, index) of the referenced *DataArray*.

 .. figure:: ./images/reading_tagged_data.png
    :alt: reading tagged data
@@ -98,4 +98,3 @@ obtained using the ``feature_data`` methods.
    :lines: 69 - 76
    :emphasize-lines: 6, 7
    :caption: ``feature_data`` works analogously: the first argument is the index of the tagged region, the second the name (or id or index) of the feature. Here the feature stores a single number, i.e. the frequency of the stimulus for each tagged region, which is plotted below the highlighted regions in the figure above (:download:`example code <examples/multiple_regions.py>`).
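As a one-line sketch (names assumed), the stimulus frequency belonging to the first tagged region would be read with:

.. code-block:: python

    frequency = mtag.feature_data(0, "frequency")[:]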
 arrays = [da for da in block.data_arrays if "relacs.data.sampled.v-1" in da.type.lower()]

 With this kind of list comprehension a large variety of searches can be performed easily. The above assumes a “data-centered” view. The other way would be to
-assume a “metadata-centered” view and to scan the metadata tree and
+assume a “metadata-centered” view, to scan the metadata tree and
 perform an inverse search from the metadata to the data entities that
 refer to the respective section. Accordingly, ``Section`` and ``Sources`` define
 methods to get the referring *DataArray*\ s, *Tag*\ s, etc.
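A minimal sketch of that inverse search (the section choice is an assumption; the ``referring_*`` properties are provided by nixpy):

.. code-block:: python

    sec = nixfile.sections[0]
    print(sec.referring_data_arrays)
    print(sec.referring_tags)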
docs/source/image_data.rst (+4 -4)
@@ -16,7 +16,7 @@ attribution in the code.

 .. literalinclude:: examples/imageData.py
    :lines: 59-64
-   :caption: Image data is just 3D data that can be easily stored in a *DataArray*. We need to add three dimension descriptors, though (to run the example you need the :download:`example code <examples/imageData.py>` , the :download:`image <examples/lenna.png>` *imagemagick* or *xv* packages).
+   :caption: Image data is just 3-D data that can be easily stored in a *DataArray*. We need to add three dimension descriptors, though (to run the example you need the :download:`example code <examples/imageData.py>`, the :download:`image <examples/lenna.png>`, and the *imagemagick* or *xv* packages).

 .. image:: examples/lenna.png
    :alt: lenna
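A sketch of the three descriptors for an RGB image (the block variable, names, and dummy data are assumptions):

.. code-block:: python

    import numpy as np

    img = np.random.randint(0, 255, size=(256, 256, 3))
    da = block.create_data_array("lenna", "nix.image.rgb", data=img)
    height = da.append_sampled_dimension(1.0)
    height.label = "height"
    width = da.append_sampled_dimension(1.0)
    width.label = "width"
    channels = da.append_set_dimension()
    channels.labels = ["R", "G", "B"]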
@@ -44,7 +44,7 @@ data. The same Tag can be applied to many references as long as

 .. literalinclude:: examples/singleROI.py
    :lines: 80-84
-   :caption: A *Tag* is used to tag a a single region of interest. Most image data is 3D with the third dimension representing the color channels (:download:`singleROI.py <examples/singleROI.py>`).
+   :caption: A *Tag* is used to tag a single region of interest. Most image data is 3-D, with the third dimension representing the color channels (:download:`singleROI.py <examples/singleROI.py>`).

 .. image:: images/single_roi.png
    :alt: single roi
@@ -64,7 +64,7 @@ For tagging multiple regions in the image data we again use a *MultiTag* entity.
    :alt: many rois
    :width: 240

-The start positions and extents of the ROIs are stored in two separate *DataArrays*, these are each 2-D the first dimension represents the number of regions, the second defines the position/extent for each single dimension of the data (height, width, color channels).
+The start positions and extents of the ROIs are stored in two separate *DataArrays*. These are each 2-D: the first dimension represents the number of regions, the second defines the position/extent for each single dimension of the data (height, width, color channels).

 The *MultiTag* has a ``tagged_data`` method that is used to retrieve the data tagged by the *MultiTag*.
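A sketch of the multi-ROI setup, continuing the image sketch above (ROI coordinates and names are assumptions):

.. code-block:: python

    positions = block.create_data_array("roi_positions", "nix.positions",
                                        data=np.array([[40, 80, 0], [120, 30, 0]]))
    extents = block.create_data_array("roi_extents", "nix.extents",
                                      data=np.array([[60, 60, 3], [50, 100, 3]]))
    for array in (positions, extents):
        array.append_set_dimension()  # first dimension: the regions
        array.append_set_dimension()  # second dimension: one entry per data dimension

    mtag = block.create_multi_tag("rois", "nix.roi", positions)
    mtag.extents = extents
    mtag.references.append(da)
    roi0 = mtag.tagged_data(0, "lenna")[:]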
@@ -74,4 +74,4 @@ The *MultiTag* has a ``tagged_data`` method that is used to retrieve the data ta
docs/source/news.rst (+2 -2)
@@ -50,7 +50,7 @@ Pass ``quiet=False`` in order to get some feedback on what the tool did. **Note:
 Model changes
 #############

-* the metadata model was simplified to reflects the changes introduced to the underlying **odml** data model. Accordingly the *Value* entity does no longer exist. New versions of the library can read but not write old data. Experience showed that almost all use cases stored single Values in a *Property*. The overhead (code and also file size) of keeping each value in a separate Enitiy is not justified. The *Property* now keeps all information that was Value related, such as the uncertainty. If you want to store multiple values in a property this is still possible but they have to have the same data type. (see :ref:`Annotations with arbitrary metadata` for more information).
+* the metadata model was simplified to reflect the changes introduced to the underlying **odml** data model. Accordingly, the *Value* entity no longer exists. New versions of the library can read but not write old data. Experience showed that almost all use cases stored single Values in a *Property*. The overhead (code and also file size) of keeping each value in a separate entity is not justified. The *Property* now keeps all information that was Value-related, such as the uncertainty. If you want to store multiple values in a property this is still possible, but they have to have the same data type (see :ref:`Annotations with arbitrary metadata` for more information).
 * New *DataFrame* entity that stores tabular data. Each column has a name, unit, and data type (see :ref:`The DataFrame` for more information).
 * *Tags* and *MultiTags* can link to DataFrames as features.
 * *RangeDimensions* ticks can now be stored within the dimension descriptor, or in linked DataArrays or DataFrames. Ticks must still be one-dimensional (see :ref:`RangeDimension` for more information).
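As a rough sketch of the new entity (column names, types, and values are assumptions):

.. code-block:: python

    import numpy as np

    df = block.create_data_frame("trials", "nix.df",
                                 col_names=["id", "duration", "frequency"],
                                 col_dtypes=[np.int64, np.float64, np.float64])
    df.append_rows([(1, 0.5, 10.0), (2, 0.75, 20.0)])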
@@ -94,4 +94,4 @@ Extended command line tool abilities; The ``nixio`` command line tool now bundle
 tool is under active development. Please use the github issue tracker
 (https://github.com/G-node/nixpy/issues) for bug reports and feature requests.

     validate Validate NIX files for missing or inconsistent objects and annotations.
-    upgrade Upgrade NIX files to newest file format version.
+    upgrade Upgrade NIX files to newest file format version.
docs/source/sources.rst (+4 -4)
@@ -23,7 +23,7 @@ subject.
 As mentioned above, the Sources build a tree. The block (as the root of the tree) at the moment has only a single source attached to it.

 .. literalinclude:: examples/sources.py
-   :lines: 53 - 55
+   :lines: 52 - 54

 The output should yield:
@@ -39,7 +39,7 @@ Search and find
 In a data-centered search we can then ask the *DataArray* for its *Source* to get information about the cell and get the linked metadata. A *DataArray* may have several sources attached to it. To make sure we get the right one (with the cell information) we perform a search on the sources using the **type** information.

 .. literalinclude:: examples/sources.py
-   :lines: 59 - 61
+   :lines: 58 - 60

 The output should give
@@ -53,10 +53,10 @@ The output should give
     |- BaselineRate: (15,)Hz
     |- Layer: ('4',)

-In a or source-centered search we can ask for the *DataArrays* that link to a source.
+In a source-centered search we can ask for the *DataArrays* that link to a source.
docs/source/spike_time_data.rst (+6 -6)
@@ -5,7 +5,7 @@ Spike time data

 Storing the times of spikes (action potentials) that are generated by neurons is one of the most common use cases.

-The data shown below is simulation data created using a Leaky Integrate and Fire (LIF)model neuron. To run the example codes in this section you will need the :download:`lif model <examples/lif.py>`.
+The data shown below is simulation data created using a Leaky Integrate and Fire (LIF) model neuron. To run the example code in this section you will need the :download:`lif model <examples/lif.py>`.

 .. literalinclude:: examples/spikeTagging.py
    :lines: 58-78
@@ -20,7 +20,7 @@ The data shown below is simulation data created using a Leaky Integrate and Fire
 Adding features
 ---------------

-The following code shows how to use the **Features** of the NIX-model. Suppose that we have the recording of a signal in which a set of events is detected. Each event may have certain characteristics one wants to store. These are stored as **Features** of the events. There are three different link-types between the features and the events stored in the tag. *nix.LinkType.Untagged* indicates that the whole data stored in the **Feature** applies to the points defined in the tag. *nix.LinkType.Tagged* on the other side implies that the *position* and *extent* have to be applied also to the data stored in the **Feature**. Finally, the *nix.LinkType.Indexed* indicates that there is one point (or slice) in the **Feature** data that is related to each position in the Tag.
+The following code shows how to use the **Features** of the NIX-model. Suppose that we have the recording of a signal in which a set of events is detected. Each event may have certain characteristics one wants to store. These are stored as **Features** of the events. There are three different link-types between the features and the events stored in the tag. *nix.LinkType.Untagged* indicates that the whole data stored in the **Feature** applies to the points defined in the tag. *nix.LinkType.Tagged*, on the other hand, implies that the *position* and *extent* have to be applied also to the data stored in the **Feature**. Finally, *nix.LinkType.Indexed* indicates that there is one point (or slice) in the **Feature** data that is related to each position in the tag.

 The following examples show how this works.
@@ -29,7 +29,7 @@ The following examples show how this works.
 Tagging stimulus segments
 -------------------------

-Let's say we record the neuronal activity and in a certain epoch of that recording a stimulus was presented. This time interval is annotated using a **Tag**. This inidicates the time in which the stimulus was on but we may also want to link the stimulus itself to it. The stimulus is also stored as a **DataArray** be linked to the *Tag* as an *untagged* **Feature** of it.
+Let's say we record the neuronal activity and in a certain epoch of that recording a stimulus was presented. This time interval is annotated using a **Tag**. This indicates the time in which the stimulus was on, but we may also want to link the stimulus itself to it. The stimulus is also stored as a **DataArray** and can be linked to the *Tag* as an *untagged* **Feature** of it.

 .. literalinclude:: examples/untaggedFeature.py
    :lines: 111-122
@@ -39,7 +39,7 @@ Let's say we record the neuronal activity and in a certain epoch of that recordi
 .. image:: images/untagged_feature.png
    :alt: untagged feature

-In the recorded membrane voltage data is 10s long and we tag the interval between ``stimulus_onset`` and ``stimulus_onset + stimulus_duration`` (from 1 to 9 seconds). The stimulus itself is only 8s long and was played in the tagged interval. We use a *Tag* to bind stimulus and recorded signal together. The data stored in the "untagged" feature is the whole stimulus. The *Tag's* position and extent do not apply to the stimulus trace.
+The recorded membrane voltage data is 10s long and we tag the interval between ``stimulus_onset`` and ``stimulus_onset + stimulus_duration`` (from 1 to 9 seconds). The stimulus itself is only 8s long and was played in the tagged interval. We use a *Tag* to bind stimulus and recorded signal together. The data stored in the "untagged" feature is the whole stimulus. The *Tag's* position and extent do not apply to the stimulus trace.

 .. _tagged_feature:
@@ -72,8 +72,8 @@ different flags that define how this link has to be interpreted. In this case th
 position has to be used as an index in the first dimension of the Feature data. The **LinkType** has to be set to *indexed*.

 .. literalinclude:: examples/spikeFeatures.py
-   :lines: 135-147
-   :caption: From the stimulus, a set of snippets has been extracted and stored in 2D-DataArray. To be used as an ``Indexed`` feature it must be organized that the first dimension represents the number of snippets. The second dimension is the time. (:download:`example code <examples/spikeFeatures.py>`).
+   :lines: 130-135
+   :caption: From the stimulus, a set of snippets has been extracted and stored in a 2-D DataArray. To be used as an ``Indexed`` feature it must be organized such that the first dimension represents the number of snippets and the second dimension is time (:download:`example code <examples/spikeFeatures.py>`).
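Reading a snippet back then mirrors ``feature_data`` (names assumed); e.g. the slice belonging to the third spike:

.. code-block:: python

    snippet = mtag.feature_data(2, "snippets")[:]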