Commit da8b680

Remove trailing spaces
1 parent ea963da commit da8b680

File tree

11 files changed: 36 additions & 36 deletions

datajoint/declare.py
Lines changed: 1 addition & 1 deletion

@@ -304,7 +304,7 @@ def declare(full_table_name, definition, context):
 name=table_name, max_length=MAX_TABLE_NAME_LENGTH
 )
 )
-
+
 (
 table_comment,
 primary_key,

docs/src/concepts/data-model.md
Lines changed: 5 additions & 5 deletions

@@ -54,7 +54,7 @@ columns (often called attributes).
 A collection of base relations with their attributes, domain constraints, uniqueness
 constraints, and referential constraints is called a schema.
 
-**Domain constraints:**
+**Domain constraints:**
 Each attribute (column) in a table is associated with a specific attribute domain (or
 datatype, a set of possible values), ensuring that the data entered is valid.
 Attribute domains may not include relations, which keeps the data model

@@ -68,13 +68,13 @@ columns (often called attributes).
 One key in a relation is designated as the primary key used for referencing its elements.
 
 **Referential constraints:**
-Associations among data are established by means of referential constraints with the
+Associations among data are established by means of referential constraints with the
 help of foreign keys.
 A referential constraint on relation A referencing relation B allows only those
 entities in A whose foreign key attributes match the key attributes of an entity in B.
 
 **Declarative queries:**
-Data queries are formulated through declarative, as opposed to imperative,
+Data queries are formulated through declarative, as opposed to imperative,
 specifications of sought results.
 This means that query expressions convey the logic for the result rather than the
 procedure for obtaining it.

@@ -106,7 +106,7 @@ clarity, efficiency, workflow management, and precise and flexible data
 queries. By enforcing entity normalization,
 simplifying dependency declarations, offering a rich query algebra, and visualizing
 relationships through schema diagrams, DataJoint makes relational database programming
-more intuitive and robust for complex data pipelines.
+more intuitive and robust for complex data pipelines.
 
 The model has emerged over a decade of continuous development of complex data
 pipelines for neuroscience experiments ([Yatsenko et al.,

@@ -123,7 +123,7 @@ DataJoint comprises:
 + a schema [definition](../design/tables/declare.md) language
 + a data [manipulation](../manipulation/index.md) language
 + a data [query](../query/principles.md) language
-+ a [diagramming](../design/diagrams.md) notation for visualizing relationships between
++ a [diagramming](../design/diagrams.md) notation for visualizing relationships between
 modeled entities
 
 The key refinement of DataJoint over other relational data models and their

docs/src/concepts/teamwork.md
Lines changed: 10 additions & 10 deletions

@@ -60,33 +60,33 @@ division of labor among team members, leading to greater efficiency and better s
 ### Scientists
 
 Design and conduct experiments, collecting data.
-They interact with the data pipeline through graphical user interfaces designed by
+They interact with the data pipeline through graphical user interfaces designed by
 others.
 They understand what analysis is used to test their hypotheses.
 
 ### Data scientists
 
-Have the domain expertise and select and implement the processing and analysis
+Have the domain expertise and select and implement the processing and analysis
 methods for experimental data.
-Data scientists are in charge of defining and managing the data pipeline using
-DataJoint's data model, but they may not know the details of the underlying
+Data scientists are in charge of defining and managing the data pipeline using
+DataJoint's data model, but they may not know the details of the underlying
 architecture.
-They interact with the pipeline using client programming interfaces directly from
+They interact with the pipeline using client programming interfaces directly from
 languages such as MATLAB and Python.
 
-The bulk of this manual is written for working data scientists, except for System
+The bulk of this manual is written for working data scientists, except for System
 Administration.
 
 ### Data engineers
 
 Work with the data scientists to support the data pipeline.
-They rely on their understanding of the DataJoint data model to configure and
-administer the required IT resources such as database servers, data storage
+They rely on their understanding of the DataJoint data model to configure and
+administer the required IT resources such as database servers, data storage
 servers, networks, cloud instances, [Globus](https://globus.org) endpoints, etc.
-Data engineers can provide general solutions such as web hosting, data publishing,
+Data engineers can provide general solutions such as web hosting, data publishing,
 interfaces, exports and imports.
 
-The System Administration section of this tutorial contains materials helpful in
+The System Administration section of this tutorial contains materials helpful in
 accomplishing these tasks.
 
 DataJoint is designed to delineate a clean boundary between **data science** and **data

docs/src/design/integrity.md
Lines changed: 2 additions & 2 deletions

@@ -1,7 +1,7 @@
 # Data Integrity
 
-The term **data integrity** describes guarantees made by the data management process
-that prevent errors and corruption in data due to technical failures and human errors
+The term **data integrity** describes guarantees made by the data management process
+that prevent errors and corruption in data due to technical failures and human errors
 arising in the course of continuous use by multiple agents.
 DataJoint pipelines respect the following forms of data integrity: **entity
 integrity**, **referential integrity**, and **group integrity** as described in more

docs/src/design/tables/customtype.md
Lines changed: 4 additions & 4 deletions

@@ -49,9 +49,9 @@ attribute type in a datajoint table class:
 import datajoint as dj
 
 class GraphAdapter(dj.AttributeAdapter):
-
+
     attribute_type = 'longblob'  # this is how the attribute will be declared
-
+
     def put(self, obj):
         # convert the nx.Graph object into an edge list
         assert isinstance(obj, nx.Graph)

@@ -60,7 +60,7 @@ class GraphAdapter(dj.AttributeAdapter):
     def get(self, value):
         # convert edge list back into an nx.Graph
         return nx.Graph(value)
-
+
 
 # instantiate for use as a datajoint type
 graph = GraphAdapter()

@@ -75,6 +75,6 @@ class Connectivity(dj.Manual):
     definition = """
     conn_id : int
     ---
-    conn_graph = null : <graph>  # a networkx.Graph object
+    conn_graph = null : <graph>  # a networkx.Graph object
     """
 ```
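The hunks above touch the docs' `AttributeAdapter` example, whose contract is a `put`/`get` serialization roundtrip. That roundtrip can be sketched without a database, DataJoint, or networkx installed; `EdgeListAdapter` below is a hypothetical stand-in for the docs' `GraphAdapter`, representing a graph as a plain set of edge tuples:

```python
# Minimal sketch of the adapter contract from customtype.md.
# EdgeListAdapter is illustrative (not part of the commit): it mirrors
# GraphAdapter's put/get pair but stores a plain edge set instead of an
# nx.Graph, so it runs standalone.
class EdgeListAdapter:
    attribute_type = 'longblob'  # how the attribute would be declared

    def put(self, obj):
        # serialize: turn the graph (a set of edges) into a sorted edge list
        return sorted(tuple(edge) for edge in obj)

    def get(self, value):
        # deserialize: rebuild the edge set from the stored list
        return {tuple(edge) for edge in value}


adapter = EdgeListAdapter()
edges = {(1, 2), (2, 3)}
# the object survives a put/get roundtrip unchanged
assert adapter.get(adapter.put(edges)) == edges
```

In the real docs example, `put` would emit `obj.edges` and `get` would call `nx.Graph(value)`; the roundtrip property being tested is the same.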

docs/src/design/tables/indexes.md
Lines changed: 2 additions & 2 deletions

@@ -62,7 +62,7 @@ Let’s now imagine that rats in a lab are identified by the combination of `lab
 @schema
 class Rat(dj.Manual):
     definition = """
-    lab_name : char(16)
+    lab_name : char(16)
     rat_id : int unsigned  # lab-specific ID
     ---
     date_of_birth = null : date

@@ -86,7 +86,7 @@ To speed up searches by the `rat_id` and `date_of_birth`, we can explicit indexe
 @schema
 class Rat2(dj.Manual):
     definition = """
-    lab_name : char(16)
+    lab_name : char(16)
     rat_id : int unsigned  # lab-specific ID
     ---
     date_of_birth = null : date
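The `Rat2` hunk belongs to the docs' example of explicit index declaration. The hunk itself shows only the attribute lines; a sketch of the full definition string with an explicit secondary index (using DataJoint's `index(...)` definition syntax, kept as a plain string here since no database is involved) might look like:

```python
# Sketch only (not executed against a database): a DataJoint table
# definition with an explicit secondary index, as indexes.md describes.
# The `index(date_of_birth)` line requests a non-unique secondary index
# to speed up searches by that attribute.
rat2_definition = """
lab_name : char(16)
rat_id : int unsigned  # lab-specific ID
---
date_of_birth = null : date
index(date_of_birth)
"""
```

In a real pipeline this string would be the `definition` attribute of a `dj.Manual` class decorated with `@schema`.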

docs/src/faq.md
Lines changed: 6 additions & 6 deletions

@@ -7,13 +7,13 @@ It is common to enter data during experiments using a graphical user interface.
 1. The [DataJoint platform](https://works.datajoint.com) platform is a web-based,
 end-to-end platform to host and execute data pipelines.
 
-2. [DataJoint LabBook](https://github.com/datajoint/datajoint-labbook) is an open
+2. [DataJoint LabBook](https://github.com/datajoint/datajoint-labbook) is an open
 source project for data entry but is no longer actively maintained.
 
 ## Does DataJoint support other programming languages?
 
 DataJoint [Python](https://docs.datajoint.com/core/datajoint-python/) is the most
-up-to-date version and all future development will focus on the Python API. The
+up-to-date version and all future development will focus on the Python API. The
 [Matlab](https://datajoint.com/docs/core/datajoint-matlab/) API was actively developed
 through 2023. Previous projects implemented some DataJoint features in
 [Julia](https://github.com/BrainCOGS/neuronex_workshop_2018/tree/julia/julia) and

@@ -93,16 +93,16 @@ The entry of metadata can be manual, or it can be an automated part of data acqu
 into the database).
 
 Depending on their size and contents, raw data files can be stored in a number of ways.
-In the simplest and most common scenario, raw data continues to be stored in either a
+In the simplest and most common scenario, raw data continues to be stored in either a
 local filesystem or in the cloud as collections of files and folders.
 The paths to these files are entered in the database (again, either manually or by
 automated processes).
 This is the point at which the notion of a **data pipeline** begins.
 Below these "manual tables" that contain metadata and file paths are a series of tables
 that load raw data from these files, process it in some way, and insert derived or
 summarized data directly into the database.
-For example, in an imaging application, the very large raw `.TIFF` stacks would reside on
-the filesystem, but the extracted fluorescent trace timeseries for each cell in the
+For example, in an imaging application, the very large raw `.TIFF` stacks would reside on
+the filesystem, but the extracted fluorescent trace timeseries for each cell in the
 image would be stored as a numerical array directly in the database.
 Or the raw video used for animal tracking might be stored in a standard video format on
 the filesystem, but the computed X/Y positions of the animal would be stored in the

@@ -164,7 +164,7 @@ This brings us to the final important question:
 
 ## How do I get my data out?
 
-This is the fun part. See [queries](query/operators.md) for details of the DataJoint
+This is the fun part. See [queries](query/operators.md) for details of the DataJoint
 query language directly from Python.
 
 ## Interfaces

docs/src/internal/transpilation.md
Lines changed: 1 addition & 1 deletion

@@ -59,7 +59,7 @@ The input object is treated as a subquery in the following cases:
 1. A restriction is applied that uses alias attributes in the heading.
 2. A projection uses an alias attribute to create a new alias attribute.
 3. A join is performed on an alias attribute.
-4. An Aggregation is used a restriction.
+4. An Aggregation is used a restriction.
 
 An error arises if

docs/src/manipulation/transactions.md
Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ interrupting the sequence of such operations halfway would leave the data in an
 state.
 While the sequence is in progress, other processes accessing the database will not see
 the partial results until the transaction is complete.
-The sequence may include [data queries](../query/principles.md) and
+The sequence may include [data queries](../query/principles.md) and
 [manipulations](index.md).
 
 In such cases, the sequence of operations may be enclosed in a transaction.
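The all-or-nothing behavior this page describes can be sketched with the standard library's `sqlite3` module rather than DataJoint (the `session` table and its values are illustrative, not part of the docs):

```python
import sqlite3

# Sketch of transaction semantics: a failed sequence of manipulations
# leaves no partial results behind.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE session (session_id INTEGER PRIMARY KEY, note TEXT)")
conn.commit()  # make sure the schema itself is committed before the demo

try:
    with conn:  # context manager: commit on success, roll back on error
        conn.execute("INSERT INTO session VALUES (1, 'first insert')")
        # second insert violates the primary key and raises IntegrityError
        conn.execute("INSERT INTO session VALUES (1, 'duplicate key')")
except sqlite3.IntegrityError:
    pass  # the whole sequence was rolled back, not just the failing statement

# neither row is visible: the interrupted sequence left the data consistent
assert conn.execute("SELECT COUNT(*) FROM session").fetchone()[0] == 0
```

DataJoint exposes the same pattern through its connection object; the point here is only the semantics: until the enclosing transaction commits, none of the intermediate inserts become visible, and an interruption discards all of them.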

docs/src/publish-data.md
Lines changed: 3 additions & 3 deletions

@@ -27,8 +27,8 @@ The code and the data can be found at [https://github.com/sinzlab/Sinz2018_NIPS]
 
 ## Exporting into a collection of files
 
-Another option for publishing and archiving data is to export the data from the
+Another option for publishing and archiving data is to export the data from the
 DataJoint pipeline into a collection of files.
-DataJoint provides features for exporting and importing sections of the pipeline.
-Several ongoing projects are implementing the capability to export from DataJoint
+DataJoint provides features for exporting and importing sections of the pipeline.
+Several ongoing projects are implementing the capability to export from DataJoint
 pipelines into [Neurodata Without Borders](https://www.nwb.org/) files.

0 commit comments
