
Commit 50761f6

Merge pull request #37 from yarikoptic/enh-codespell
add codespell: config, workflow and have typos fixed
2 parents 4e5835d + bffa54f commit 50761f6

13 files changed: +54 −27 lines

.codespellrc

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+[codespell]
+skip = .git,*.pdf,*.svg
+ignore-regex = ^\s*"image/\S+": ".*
+#
+# ignore-words-list =
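To make the two settings above concrete, here is a small illustrative sketch (not part of the commit) of how they behave. The `skip` entries are glob patterns matched against paths, approximated here with `fnmatch`, and the `ignore-regex` masks notebook JSON lines that carry base64 image payloads, whose random-looking strings would otherwise trigger false positives:

```python
import fnmatch
import re

# The two settings from the new .codespellrc, reproduced for illustration.
SKIP = ".git,*.pdf,*.svg".split(",")
IGNORE = re.compile(r'^\s*"image/\S+": ".*')

def skipped(name: str) -> bool:
    """Approximation of codespell's skip matching via glob patterns."""
    return any(fnmatch.fnmatch(name, pat) for pat in SKIP)

# PDF and SVG assets are excluded; notebooks are still checked.
print(skipped("figure.svg"))                 # True
print(skipped("01-DataJoint Basics.ipynb"))  # False

# Embedded-image lines in .ipynb JSON are ignored; prose cells are not.
print(bool(IGNORE.match('    "image/png": "iVBORw0KGgoAAAANS",')))  # True
print(bool(IGNORE.match('    "source": ["some markdown text"]')))   # False
```

This is why the notebooks' markdown typos below get flagged and fixed while their embedded plot images do not.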

.github/workflows/codespell.yml

Lines changed: 22 additions & 0 deletions

@@ -0,0 +1,22 @@
+---
+name: Codespell
+
+on:
+  push:
+    branches: [main]
+  pull_request:
+    branches: [main]
+
+permissions:
+  contents: read
+
+jobs:
+  codespell:
+    name: Check for spelling errors
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v3
+      - name: Codespell
+        uses: codespell-project/actions-codespell@v2
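The same check the workflow runs on pushes and pull requests can be reproduced locally before committing; a sketch, assuming codespell is installed via pip (any install method works):

```shell
# Illustrative local equivalent of the CI job. With .codespellrc in the
# repository root, the [codespell] section supplies skip and ignore-regex
# automatically, so a bare invocation mirrors what the action checks.
pip install codespell
codespell   # exits non-zero when typos are found
```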

completed_tutorials/01-DataJoint Basics.ipynb

Lines changed: 1 addition & 1 deletion

@@ -1071,7 +1071,7 @@
 "source": [
 "Thus we will need both **mouse** and a new attribute **session_date** to uniquely identify a single session. \n",
 "\n",
-"Remember that a **mouse** is already uniquely identified by its primary key - **mouse_id**. In DataJoint, you can declare that **session** depends on the mouse, and DataJoint will automatically include the mouse's primary key (`mouse_id`) as part of the session's primary key, along side any additional attribute(s) you specificy."
+"Remember that a **mouse** is already uniquely identified by its primary key - **mouse_id**. In DataJoint, you can declare that **session** depends on the mouse, and DataJoint will automatically include the mouse's primary key (`mouse_id`) as part of the session's primary key, along side any additional attribute(s) you specify."
 ]
 },
 {

completed_tutorials/02-Calcium Imaging Imported Tables.ipynb

Lines changed: 1 addition & 1 deletion

@@ -732,7 +732,7 @@
 "source": [
 "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
 "\n",
-"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
 "\n",
 "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
 ]

completed_tutorials/03-Calcium Imaging Computed Tables.ipynb

Lines changed: 6 additions & 6 deletions

@@ -770,7 +770,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"There are multiple ways to perform the segementation. To keep it simple, we just detect the cells by setting up the threshold on the average image."
+"There are multiple ways to perform the segmentation. To keep it simple, we just detect the cells by setting up the threshold on the average image."
 ]
 },
 {
@@ -1027,7 +1027,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We would like to perform the segmentation for a **combination** of `AverageFrame`s and different set of paremeters of `threshold` and `size_cutoff` values. To do this while still taking advantage of the `make` and `populate` logic, you would want to define a table to house parameters for segmentation in a `Lookup` table!"
+"We would like to perform the segmentation for a **combination** of `AverageFrame`s and different set of parameters of `threshold` and `size_cutoff` values. To do this while still taking advantage of the `make` and `populate` logic, you would want to define a table to house parameters for segmentation in a `Lookup` table!"
 ]
 },
 {
@@ -1160,7 +1160,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The `Computed` table is labeled as a pink oval and the `Part` table is bare text. We see that `Segmentation` is a `Computed` table that depends on **both AverageFrame and SegmentationParam**. Finally, let's go ahead and implement the `make` method for the `Segmenation` table. "
+"The `Computed` table is labeled as a pink oval and the `Part` table is bare text. We see that `Segmentation` is a `Computed` table that depends on **both AverageFrame and SegmentationParam**. Finally, let's go ahead and implement the `make` method for the `Segmentation` table. "
 ]
 },
 {
@@ -1342,7 +1342,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"And for the part table `Segmenation.Roi`, there was an additional primary key attribute `roi_idx`:`"
+"And for the part table `Segmentation.Roi`, there was an additional primary key attribute `roi_idx`:`"
 ]
 },
 {
@@ -1721,7 +1721,7 @@
 }
 ],
 "source": [
-"# ENTER YOUR CODE! - populate the Segmenation table for real!\n",
+"# ENTER YOUR CODE! - populate the Segmentation table for real!\n",
 "Segmentation.populate()"
 ]
 },
@@ -2177,7 +2177,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can simply delete the unwanted paramter from the `SegmentationParam` table, and let DataJoint cascade the deletion:"
+"We can simply delete the unwanted parameter from the `SegmentationParam` table, and let DataJoint cascade the deletion:"
 ]
 },
 {

completed_tutorials/04-Electrophysiology Imported Tables.ipynb

Lines changed: 3 additions & 3 deletions

@@ -523,7 +523,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Let's take the first key, and generate the file name that corresponds to this session. Remember the `data_{mouse_id}_{session_date}.npy` filename convetion!"
+"Let's take the first key, and generate the file name that corresponds to this session. Remember the `data_{mouse_id}_{session_date}.npy` filename convention!"
 ]
 },
 {
@@ -977,7 +977,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"So this particular file contains a NumPy array of size 1 x 1000. This represents a (simulated) recording of raw electric activity from neuron(s) (1st dimension) over 1000 time bins (2nd dimesion)."
+"So this particular file contains a NumPy array of size 1 x 1000. This represents a (simulated) recording of raw electric activity from neuron(s) (1st dimension) over 1000 time bins (2nd dimension)."
 ]
 },
 {
@@ -1068,7 +1068,7 @@
 "source": [
 "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
 "\n",
-"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
 "\n",
 "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
 ]

completed_tutorials/05-Electrophysiology Computed Tables.ipynb

Lines changed: 1 addition & 1 deletion

@@ -3333,7 +3333,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can simply delete the unwanted paramter from the `SpikeDetectionParam` table, and let DataJoint cascade the deletion:"
+"We can simply delete the unwanted parameter from the `SpikeDetectionParam` table, and let DataJoint cascade the deletion:"
 ]
 },
 {

short_tutorials/University.ipynb

Lines changed: 2 additions & 2 deletions

@@ -490,7 +490,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Millenials\n",
+"# Millennials\n",
 "millennials = Student & 'date_of_birth between \"1981-01-01\" and \"1996-12-31\"'"
 ]
 },
@@ -519,7 +519,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Millenials who have never enrolled\n",
+"# Millennials who have never enrolled\n",
 "millennials - Enroll"
 ]
 },

tutorials/01-DataJoint Basics.ipynb

Lines changed: 2 additions & 2 deletions

@@ -51,7 +51,7 @@
 "If you visit the [documentation for DataJoint](https://docs.datajoint.io/introduction/Data-pipelines.html), we define a data pipeline as follows:\n",
 "> A data pipeline is a sequence of steps (more generally a directed acyclic graph) with integrated storage at each step. These steps may be thought of as nodes in a graph.\n",
 "\n",
-"While this is an accurate description, it may not be the most intuitive definition. Put succinctly, a data pipeline is a listing or a \"map\" of various \"things\" that you work with in a project, with line connecting things to each other to indicate their dependecies. The \"things\" in a data pipeline tends to be the *nouns* you find when describing a project. The \"things\" may include anything from mouse, experimenter, equipment, to experiment session, trial, two-photon scans, electric activities, to receptive fields, neuronal spikes, to figures for a publication! A data pipeline gives you a framework to:\n",
+"While this is an accurate description, it may not be the most intuitive definition. Put succinctly, a data pipeline is a listing or a \"map\" of various \"things\" that you work with in a project, with line connecting things to each other to indicate their dependencies. The \"things\" in a data pipeline tends to be the *nouns* you find when describing a project. The \"things\" may include anything from mouse, experimenter, equipment, to experiment session, trial, two-photon scans, electric activities, to receptive fields, neuronal spikes, to figures for a publication! A data pipeline gives you a framework to:\n",
 "\n",
 "1. define these \"things\" as tables in which you can store the information about them\n",
 "2. define the relationships (in particular the dependencies) between the \"things\"\n",
@@ -560,7 +560,7 @@
 "source": [
 "Thus we will need both **mouse** and a new attribute **session_date** to uniquely identify a single session. \n",
 "\n",
-"Remember that a **mouse** is already uniquely identified by its primary key - **mouse_id**. In DataJoint, you can declare that **session** depends on the mouse, and DataJoint will automatically include the mouse's primary key (`mouse_id`) as part of the session's primary key, along side any additional attribute(s) you specificy."
+"Remember that a **mouse** is already uniquely identified by its primary key - **mouse_id**. In DataJoint, you can declare that **session** depends on the mouse, and DataJoint will automatically include the mouse's primary key (`mouse_id`) as part of the session's primary key, along side any additional attribute(s) you specify."
 ]
 },
 {

tutorials/02-Calcium Imaging Imported Tables.ipynb

Lines changed: 1 addition & 1 deletion

@@ -342,7 +342,7 @@
 "source": [
 "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
 "\n",
-"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
 "\n",
 "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
 ]
