docs/JOSS/paper.md
30 additions & 17 deletions
@@ -19,12 +19,14 @@ affiliations:
 index: 1
 - name: Department of Mechanical Engineering, University of Washington
 index: 2
-date: 17 September 2020
+date: 7 October 2020
 bibliography: paper.bib
 ---

 # Extra material added in by Brian

+TODO: remove extra material
+
 ## What should the paper contain?
 From the [JOSS submission guide](https://joss.readthedocs.io/en/latest/submitting.html#what-should-my-paper-contain):

@@ -37,7 +39,7 @@ Your paper should include:
 * A clear Statement of Need that illustrates the research purpose of the software.
 * A list of key references, including to other software addressing related needs.
 * Mention (if applicable) a representative set of past or ongoing research projects using the software and recent scholarly publications enabled by it.
-*Acknowledgement of any financial support."
+*Acknowledgment of any financial support."

 See also the [review checklist](https://joss.readthedocs.io/en/latest/review_checklist.html#software-paper) to get an idea of what the reviewers are looking for.

@@ -67,6 +69,7 @@ Here are some other example JOSS papers:
 *[All published papers](https://joss.theoj.org/papers/published)

 # Summary
+TODO: write summary section

 Scientists have long quantified empirical observations by developing mathematical models that characterize the observations, have some measure of interpretability, and are capable of making predictions.
 Dynamical systems models in particular have been widely used to study, explain, and predict system behavior in a wide range of application areas, with examples ranging from Newton's laws of classical mechanics to the Michaelis-Menten kinetics for modeling enzyme kinetics.
@@ -82,10 +85,11 @@ SINDy poses this model discovery as a sparse regression problem, wherein relevan
 Thus, SINDy models balance accuracy and efficiency, resulting in parsimonious models that avoid overfitting while remaining interpretable and generalizable.
 This approach is straightforward to understand and can be readily customized using different sparse regression algorithms or library functions.

-The `PySINDy` package is aimed at researchers and practitioners alike, enabling anyone with access to measurement data to engage in scientific model discovery.
-The package is designed to be accessible to inexperienced practitioners, while also including options that allow more advanced users to customize it to their needs.
-A number of popular SINDy variants are implemented, but `PySINDy` is also designed to enable further extensions for research and experimentation.
-The package follows object-oriented design and is `scikit-learn` compatible.
+The `PySensors` package can be used by both researchers looking to advance the state of the art and practitioners seeking simple sparse sensor selection methods for their applications of interest.
+Simple methods and abundant examples help new users hit the ground running.
+At the same time, modular classes give users the flexibility to experiment with and plug in new sensor selection algorithms or dimensionality reduction techniques.
+Users of `scikit-learn` will find `PySensors` syntax familiar and intuitive.
+The package is fully compatible with `scikit-learn` and follows object-oriented design principles.

 The SINDy method has been widely applied for model identification in applications such as chemical reaction dynamics [@Hoffmann2018], nonlinear optics [@Sorokina2016oe], thermal fluids [@Loiseau2019data], plasma convection [@Dam2017pf], numerical algorithms [@Thaler2019jcp], and structural modeling [@lai2019sparse].
 It has also been extended to handle more complex modeling scenarios such as partial differential equations [@Schaeffer2017prsa;@Rudy2017sciadv], systems with inputs or control [@Kaiser2018prsa], corrupt or limited data [@tran2017exact;@schaeffer2018extracting], integral formulations [@Schaeffer2017pre;@Reinbold2020pre], physical constraints [@Loiseau2017jfm], tensor representations [@Gelss2019mindy], and stochastic systems [@boninsegna2018sparse].
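
To make the `scikit-learn`-style interface described in the added Summary text above more concrete, here is a minimal, hypothetical reconstruction sketch. The class name `SensorSelector` is taken from the Features text below; the import path, the `n_sensors` keyword, and the `get_selected_sensors`/`predict` methods are assumptions about the API rather than confirmed details.

```python
import numpy as np

# Assumed import path and estimator name; treat all names below as illustrative.
from pysensors import SensorSelector

# Toy training data: 100 example states, each measured at 500 candidate locations.
X_train = np.random.standard_normal((100, 500))

# scikit-learn-style workflow: construct the estimator, fit it, then use it.
model = SensorSelector(n_sensors=10)  # assumed keyword: number of sensors to place
model.fit(X_train)                    # learn sensor locations from the training data

sensor_indices = model.get_selected_sensors()  # assumed accessor for the chosen locations
x_sparse = X_train[0, sensor_indices]          # measurements at only those locations
x_estimate = model.predict(x_sparse)           # assumed method: reconstruct the full state
```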
@@ -98,20 +102,29 @@ This also makes it straightforward for users to extend the package in a way such


 # Features
-The core object in the `PySINDy` package is the `SINDy` model class, which is implemented as a `scikit-learn` estimator.
-This design was chosen to make the package simple to use for a wide user base, as many potential users will be familiar with `scikit-learn`.
-It also expresses the `SINDy` model object at the appropriate level of abstraction so that users can embed it into more complicated pipelines in `scikit-learn`, such as tools for parameter tuning and model selection.
-
-Applying `SINDy` involves making several modeling decisions, namely: which numerical differentiation method is used, which functions make up the feature library, and which sparse regression algorithm is applied to learn the model.
-The core `SINDy` object uses a set of default options but can be easily customized using a number of common approaches implemented in `PySINDy`.
-The package provides a few standard options for numerical differentiation (finite difference and smoothed finite difference), feature libraries (polynomial and Fourier libraries, as well as a class for creating custom libraries), and sparse regression techniques (sequentially thresholded least squares [@brunton2016pnas], LASSO [@10.2307/2346178], and sparse relaxed regularized regression [@zheng2018ieee]).
-Users can also create their own differentiation, sparse regression, or feature library objects for further customization.
-
-The software package includes tutorials in the form of Jupyter notebooks.
-These tutorials demonstrate the usage of various features in the package and reproduce the examples from the original SINDy paper [@brunton2016pnas].

+`PySensors` enables the sparse placement of sensors for two classes of problems: reconstruction and classification.
+For reconstruction problems, the package implements a unified `SensorSelector` class, with methods for efficiently analyzing the effects that data or sensor quantity have on reconstruction performance.
+Often, different sensor locations impose variable costs; e.g., when measuring sea-surface temperature it may be more expensive to place buoys/sensors in the middle of the ocean than close to shore.
+These costs can be taken into account during sensor selection via a built-in cost-sensitive optimization routine [@clark2018cost].
+For classification tasks, the package implements the Sparse Sensor Placement Optimization for Classification (SSPOC) algorithm [@brunton2016sspoc], allowing one to optimize sensor placement for classification accuracy.
+This SSPOC implementation is fully general in the sense that it can be used in conjunction with any linear classifier.
+Additionally, `PySensors` provides methods that enable straightforward exploration of the impact of the primary hyperparameters.
+
+It is well known [@manohar2018sparse] that the basis in which one represents measurement data can have a pronounced effect on the sensors that are selected and the quality of the reconstruction.
+Users can readily switch between different bases typically employed for sparse sensor selection, including PCA modes and random projections.
+Because `PySensors` was built with `scikit-learn` compatibility in mind, it is easy to use cross-validation to select among possible choices of bases, basis modes, and other hyperparameters.
+
+Finally, included with `PySensors` is a large suite of examples, implemented as Jupyter notebooks.
+Some of the examples are written in a tutorial format and introduce new users to the objects, methods, and syntax of the package.
+Other examples demonstrate intermediate-level concepts such as how to visualize model parameters and performance, how to combine `scikit-learn` and `PySensors` objects, how to select appropriate parameter values via cross-validation, and other best practices.
+Further notebooks use `PySensors` to solve challenging real-world problems.
+The notebooks reproduce many of the examples from the papers upon which the package is based [@manohar2018sparse;@clark2018cost;@brunton2016sspoc].
+To help users begin applying `PySensors` to their own datasets even faster, interactive versions of every notebook are available on Binder.
+Overall, the examples should compress the learning curve for new users of the package.

 # Acknowledgments
+TODO: write acknowledgments section

 This project is a fork of [`sparsereg`](https://github.com/Ohjeah/sparsereg)[@markus_quade_sparsereg].
 SLB acknowledges funding support from the Air Force Office of Scientific Research (AFOSR FA9550-18-1-0200) and the Army Research Office (ARO W911NF-19-1-0045).
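
Similarly, the classification workflow described in the added Features text might look like the following sketch. The algorithm name (SSPOC) and the fact that it works with any linear classifier come from the paper text; the module path, keyword names, and the `selected_sensors` attribute are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumed module layout; the paper names the SSPOC algorithm but not its import path.
from pysensors.classification import SSPOC

# Toy labeled data: 200 snapshots over 400 candidate sensor locations, two classes.
X = np.random.standard_normal((200, 400))
y = np.random.randint(0, 2, size=200)

# SSPOC is described as compatible with any linear classifier, so a scikit-learn
# classifier is passed in here; the keyword names are illustrative.
model = SSPOC(classifier=LogisticRegression(), n_sensors=15)
model.fit(X, y)

chosen = model.selected_sensors      # assumed attribute: indices of the chosen sensors
y_hat = model.predict(X[:, chosen])  # classify using only the sparse measurements
```

Because such estimators would follow `scikit-learn` conventions, they could in principle be combined with tools like `sklearn.model_selection.GridSearchCV` to cross-validate over the number of sensors or the choice of basis, as the Features text suggests.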