Commit 2afd123

Merge pull request #281 from ArnoStrouwen/LT
[skip ci] LanguageTool
2 parents 07a66bb + eca504c commit 2afd123

8 files changed: +124 -120 lines changed

docs/src/faq.md

Lines changed: 8 additions & 8 deletions
@@ -36,7 +36,7 @@ jump_prob = JumpProblem(prob, Direct(), jset)
 sol = solve(jump_prob, SSAStepper())
 ```

-If you have many jumps in tuples or vectors it is easiest to use the keyword
+If you have many jumps in tuples or vectors, it is easiest to use the keyword
 argument-based constructor:
 ```julia
 cj1 = ConstantRateJump(rate1, affect1!)
@@ -65,7 +65,7 @@ jprob = JumpProblem(dprob, Direct(), maj,
 uses the `Xoroshiro128Star` generator from
 [RandomNumbers.jl](https://github.com/JuliaRandom/RandomNumbers.jl).

-On version 1.7 and up JumpProcesses uses Julia's builtin random number generator by
+On version 1.7 and up, JumpProcesses uses Julia's builtin random number generator by
 default. On versions below 1.7 it uses `Xoroshiro128Star`.

 ## What are these aggregators and aggregations in JumpProcesses?
@@ -76,14 +76,14 @@ jump type happens at that time. These methods are examples of stochastic
 simulation algorithms (SSAs), also known as Gillespie methods, Doob's method, or
 Kinetic Monte Carlo methods. These are all names for jump (or point) processes
 simulation methods used across the biology, chemistry, engineering, mathematics,
-and physics literature. In the JumpProcesses terminology we call such methods
+and physics literature. In the JumpProcesses terminology, we call such methods
 "aggregators", and the cache structures that hold their basic data
 "aggregations". See [Jump Aggregators for Exact Simulation](@ref) for a list of
 the available SSA aggregators.

 ## How should jumps be ordered in dependency graphs?
 Internally, JumpProcesses SSAs (aggregators) order all `MassActionJump`s first,
-then all `ConstantRateJumps` and/or `VariableRateJumps`. i.e. in the example
+then all `ConstantRateJumps` and/or `VariableRateJumps`. i.e., in the example

 ```julia
 using JumpProcesses
@@ -115,12 +115,12 @@ more on dependency graphs needed for the various SSAs.
 Callbacks can be used with `ConstantRateJump`s, `MassActionJump`s, and
 `VariableRateJump`s. When solving a pure jump system with `SSAStepper`, only
 discrete callbacks can be used (otherwise a different time stepper is needed).
-When using an ODE or SDE time stepper any callback should work.
+When using an ODE or SDE time stepper, any callback should work.

 *Note, when modifying `u` or `p` within a callback, you must call
 [`reset_aggregated_jumps!`](@ref) after making updates.* This ensures that the
 underlying jump simulation algorithms know to reinitialize their internal data
-structures. Leaving out this call will lead to incorrect behavior!
+structures. Omitting this call will lead to incorrect behavior!

 A simple example that uses a `MassActionJump` and changes the parameters at a
 specified time in the simulation using a `DiscreteCallback` is
@@ -151,10 +151,10 @@ of `u[1]`, giving

 ## How can I access earlier solution values in callbacks?
 When using an ODE or SDE time-stepper that conforms to the [integrator
-interface](https://docs.sciml.ai/DiffEqDocs/stable/basics/integrator/) one
+interface](https://docs.sciml.ai/DiffEqDocs/stable/basics/integrator/), one
 can simply use `integrator.uprev`. For efficiency reasons, the pure jump
 [`SSAStepper`](@ref) integrator does not have such a field. If one needs
-solution components at earlier times one can save them within the callback
+solution components at earlier times, one can save them within the callback
 condition by making a functor:
 ```julia
 # stores the previous value of u[2] and represents the callback functions
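For context on the callback guidance in this file (this sketch is not part of the commit's diff): a minimal pure-jump model in which a `DiscreteCallback` modifies `p` and then calls `reset_aggregated_jumps!`, as the FAQ requires, could look roughly as follows. The decay model, parameter values, and the use of `tstops` are illustrative assumptions, not the documentation's own example.

```julia
# Illustrative sketch only; not text from this commit's diff.
using JumpProcesses

# A -> 0 decay with propensity p[1]*u[1]
rate(u, p, t) = p[1] * u[1]
affect!(integrator) = (integrator.u[1] -= 1; nothing)
decay_jump = ConstantRateJump(rate, affect!)

dprob = DiscreteProblem([100], (0.0, 10.0), [1.0])
jprob = JumpProblem(dprob, Direct(), decay_jump)

# Halve the rate constant at t = 5.0; reset_aggregated_jumps! tells the
# aggregator to rebuild its stored propensities after p is modified.
condition(u, t, integrator) = t == 5.0
function halve_rate!(integrator)
    integrator.p[1] /= 2
    reset_aggregated_jumps!(integrator)
    nothing
end
cb = DiscreteCallback(condition, halve_rate!)

sol = solve(jprob, SSAStepper(); callback = cb, tstops = [5.0])
```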

docs/src/index.md

Lines changed: 5 additions & 5 deletions
@@ -1,8 +1,8 @@
 # JumpProcesses.jl: Stochastic Simulation Algorithms for Jump Processes, Jump-ODEs, and Jump-Diffusions
 JumpProcesses.jl, formerly DiffEqJump.jl, provides methods for simulating jump
-(or point) processes. Across different fields of science such methods are also
+(or point) processes. Across different fields of science, such methods are also
 known as stochastic simulation algorithms (SSAs), Doob's method, Gillespie
-methods, or Kinetic Monte Carlo methods . It also enables the incorporation of
+methods, or Kinetic Monte Carlo methods. It also enables the incorporation of
 jump processes into hybrid jump-ODE and jump-SDE models, including jump
 diffusions.

@@ -12,7 +12,7 @@ and one of the core solver libraries included in

 The documentation includes
 - [a tutorial on simulating basic Poisson processes](@ref poisson_proc_tutorial)
-- [a tutorial and details on using JumpProcesses to simulate jump processes via SSAs (i.e. Gillespie methods)](@ref ssa_tutorial),
+- [a tutorial and details on using JumpProcesses to simulate jump processes via SSAs (i.e., Gillespie methods)](@ref ssa_tutorial),
 - [a tutorial on simulating jump-diffusion processes](@ref jump_diffusion_tutorial),
 - [a reference on the types of jumps and available simulation methods](@ref jump_problem_type),
 - [a reference on jump time stepping methods](@ref jump_solve)
@@ -24,14 +24,14 @@ There are two ways to install `JumpProcesses.jl`. First, users may install the m
 `DifferentialEquations.jl` package, which installs and wraps `OrdinaryDiffEq.jl`
 for solving ODEs, `StochasticDiffEq.jl` for solving SDEs, and `JumpProcesses.jl`,
 along with a number of other useful packages for solving models involving ODEs,
-SDEs and/or jump process. This single install will provide the user with all of
+SDEs and/or jump process. This single install will provide the user with all
 the facilities for developing and solving Jump problems.

 To install the `DifferentialEquations.jl` package, refer to the following link
 for complete [installation
 details](https://docs.sciml.ai/DiffEqDocs/stable).

-If the user wishes to separately install the `JumpProcesses.jl` library, which is a
+If the user wishes to install the `JumpProcesses.jl` library separately, which is a
 lighter dependency than `DifferentialEquations.jl`, then the following code will
 install `JumpProcesses.jl` using the Julia package manager:
 ```julia
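The hunk above is truncated just before the install snippet itself; for reference (not text from the commit), installing the standalone package with Julia's package manager is simply a standard `Pkg.add` call:

```julia
# Standard Pkg-based install; not part of this commit's diff.
using Pkg
Pkg.add("JumpProcesses")
```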

docs/src/jump_solve.md

Lines changed: 3 additions & 3 deletions
@@ -18,13 +18,13 @@ use with exact simulation methods can be defined as `ConstantRateJump`s,
 τ-leaping methods should be defined as `RegularJump`s.

 There are special algorithms available for efficiently simulating an exact, pure
-`JumpProblem` (i.e. a `JumpProblem` over a `DiscreteProblem`). `SSAStepper()`
+`JumpProblem` (i.e., a `JumpProblem` over a `DiscreteProblem`). `SSAStepper()`
 is an efficient streamlined integrator for time stepping such problems from
 individual jump to jump. This integrator is named after Stochastic Simulation
 Algorithms (SSAs), commonly used naming in chemistry and biology applications
 for the class of exact jump process simulation algorithms. In turn, we denote by
 "aggregators" the algorithms that `SSAStepper` calls to calculate the next jump
-time and to execute a jump (i.e. change the system state appropriately). All
+time and to execute a jump (i.e., change the system state appropriately). All
 JumpProcesses aggregators can be used with `ConstantRateJump`s and
 `MassActionJump`s, with a subset of aggregators also working with bounded
 `VariableRateJump`s (see [the first tutorial](@ref poisson_proc_tutorial) for
@@ -35,7 +35,7 @@ performant `FunctionMap` time-stepper can be used.

 If there is a `RegularJump`, then inexact τ-leaping methods must be used. The
 current recommended method is `TauLeaping` if one needs adaptivity, events, etc.
-If ones only needs the most barebones fixed time-step leaping method, then
+If one only needs the most barebones fixed time-step leaping method, then
 `SimpleTauLeaping` can have performance benefits.

 ## Special Methods for Pure Jump Problems
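To make the `RegularJump`/τ-leaping recommendation above concrete, here is a rough sketch (not part of the commit's diff); it assumes the in-place `rate(out, u, p, t)` and `c(du, u, p, t, counts, mark)` signatures described in the jump-type documentation, and the model and `dt` value are invented for illustration.

```julia
# Illustrative sketch only; not text from this commit's diff.
using JumpProcesses

# Birth-death process, X -> X + 1 and X -> X - 1, leaped with SimpleTauLeaping.
function birth_death_rates!(out, u, p, t)
    out[1] = p[1] * u[1]    # birth propensity
    out[2] = p[2] * u[1]    # death propensity
    nothing
end

# counts[i] is how many times jump i fired during the leap interval;
# du receives the resulting net change in the state.
function birth_death_change!(du, u, p, t, counts, mark)
    du[1] = counts[1] - counts[2]
    nothing
end

rj = RegularJump(birth_death_rates!, birth_death_change!, 2)

dprob = DiscreteProblem([1000.0], (0.0, 10.0), (2.0, 1.0))
jprob = JumpProblem(dprob, Direct(), rj)
sol = solve(jprob, SimpleTauLeaping(); dt = 0.01)
```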

docs/src/jump_types.md

Lines changed: 17 additions & 17 deletions
@@ -39,19 +39,19 @@ jump events occur. These jumps can be specified as a [`ConstantRateJump`](@ref),
 [`MassActionJump`](@ref), or a [`VariableRateJump`](@ref).

 Each individual type of jump that can occur is represented through (implicitly
-or explicitly) specifying two pieces of information; a `rate` function (i.e.
+or explicitly) specifying two pieces of information; a `rate` function (i.e.,
 intensity or propensity) for the jump and an `affect!` function for the jump.
 The former gives the probability per time a particular jump can occur given the
 current state of the system, and hence determines the time at which jumps can
-happen. The later specifies the instantaneous change in the state of the system
+happen. The latter specifies the instantaneous change in the state of the system
 when the jump occurs.

 A specific jump type is a [`VariableRateJump`](@ref) if its rate function is
 dependent on values which may change between the occurrence of any two jump
 events of the process. Examples include jumps where the rate is an explicit
 function of time, or depends on a state variable that is modified via continuous
 dynamics such as an ODE or SDE. Such "general" `VariableRateJump`s can be
-expensive to simulate because it is necessary to take into account the (possibly
+expensive to simulate because it is necessary to consider the (possibly
 continuous) changes in the rate function when calculating the next jump time.

 *Bounded* [`VariableRateJump`](@ref)s represent a special subset of
@@ -84,12 +84,12 @@ discrete steps through time, over which they simultaneously execute many jumps.
 These methods can be much faster as they do not need to simulate the realization
 of every individual jump event. τ-leaping methods trade accuracy for speed, and
 are best used when a set of jumps do not make significant changes to the
-processes' state and/or rates over the course of one time-step (i.e. during a
+processes' state and/or rates over the course of one time-step (i.e., during a
 leap interval). A single [`RegularJump`](@ref) is used to encode jumps for
 τ-leaping algorithms. While τ-leaping methods can be proven to converge in the
 limit that the time-step approaches zero, their accuracy can be highly dependent
 on the chosen time-step. As a rule of thumb, if changes to the state variable
-`u` during a time-step (i.e. leap interval) are "minimal" compared to size of
+`u` during a time-step (i.e., leap interval) are "minimal" compared to the size of
 the system, an τ-leaping method can often provide reasonable solution
 approximations.

@@ -145,7 +145,7 @@ MassActionJump(reactant_stoich, net_stoich; scale_rates = true, param_idxs=nothi
 ``3A \overset{k}{\rightarrow} B`` the rate function would be
 `k*A*(A-1)*(A-2)/3!`. To *avoid* having the reaction rates rescaled (by `1/2`
 and `1/6` for these two examples), one can pass the `MassActionJump`
-constructor the optional named parameter `scale_rates = false`, i.e. use
+constructor the optional named parameter `scale_rates = false`, i.e., use
 ```julia
 MassActionJump(reactant_stoich, net_stoich; scale_rates = false, param_idxs)
 ```
@@ -158,7 +158,7 @@ MassActionJump(reactant_stoich, net_stoich; scale_rates = true, param_idxs=nothi
 net_stoich = [[1 => 1]]
 jump = MassActionJump(reactant_stoich, net_stoich; param_idxs=[1])
 ```
-Alternatively one can create an empty vector of pairs to represent the reaction:
+Alternatively, one can create an empty vector of pairs to represent the reaction:
 ```julia
 p = [1.]
 reactant_stoich = [Vector{Pair{Int,Int}}()]
@@ -222,7 +222,7 @@ Note that
 - It is currently only possible to simulate `VariableRateJump`s with
 `SSAStepper` when using systems with only bounded `VariableRateJump`s and the
 `Coevolve` aggregator.
-- When choosing a different aggregator than `Coevolve`, `SSAStepper` can not
+- When choosing a different aggregator than `Coevolve`, `SSAStepper` cannot
 currently be used, and the `JumpProblem` must be coupled to a continuous
 problem type such as an `ODEProblem` to handle time-stepping. The continuous
 time-stepper treats *all* `VariableRateJump`s as `ContinuousCallback`s, using
@@ -242,7 +242,7 @@ RegularJump(rate, c, numjumps; mark_dist = nothing)
 jump process
 - `c(du, u, p, t, counts, mark)` calculates the update given `counts` number of
 jumps for each jump process in the interval.
-- `numjumps` is the number of jump processes, i.e. the number of `rate`
+- `numjumps` is the number of jump processes, i.e., the number of `rate`
 equations and the number of `counts`.
 - `mark_dist` is the distribution for a mark.

@@ -300,24 +300,24 @@ aggregator requires various types of dependency graphs, see the next section):
 aggregator uses a different internal storage format for collections of
 `ConstantRateJumps`.
 - *`DirectCR`*: The Composition-Rejection Direct method of Slepoy et al [2]. For
-large networks and linear chain-type networks it will often give better
+large networks and linear chain-type networks, it will often give better
 performance than `Direct`.
 - *`SortingDirect`*: The Sorting Direct Method of McCollum et al [3]. It will
 usually offer performance as good as `Direct`, and for some systems can offer
 substantially better performance.
 - *`RSSA`*: The Rejection SSA (RSSA) method of Thanh et al [4,5]. With `RSSACR`,
-for very large reaction networks it often offers the best performance of all
+for very large reaction networks, it often offers the best performance of all
 methods.
 - *`RSSACR`*: The Rejection SSA (RSSA) with Composition-Rejection method of
-Thanh et al [6]. With `RSSA`, for very large reaction networks it often offers
+Thanh et al [6]. With `RSSA`, for very large reaction networks, it often offers
 the best performance of all methods.
 - `RDirect`: A variant of Gillespie's Direct method [1] that uses rejection to
 sample the next reaction.
 - `FRM`: The Gillespie first reaction method SSA [1]. `Direct` should generally
 offer better performance and be preferred to `FRM`.
 - `FRMFW`: The Gillespie first reaction method SSA [1] with `FunctionWrappers`.
 - *`NRM`*: The Gibson-Bruck Next Reaction Method [7]. For some reaction network
-structures this may offer better performance than `Direct` (for example,
+structures, this may offer better performance than `Direct` (for example,
 large, linear chains of reactions).
 - *`Coevolve`*: An adaptation of the COEVOLVE algorithm of Farajtabar et al [8].
 Currently the only aggregator that also supports *bounded*
@@ -372,7 +372,7 @@ evolution, Journal of Machine Learning Research 18(1), 1305–1353 (2017). doi:
 Italicized constant rate jump aggregators above require the user to pass a
 dependency graph to `JumpProblem`. `Coevolve`, `DirectCR`, `NRM`, and
 `SortingDirect` require a jump-jump dependency graph, passed through the named
-parameter `dep_graph`. i.e.
+parameter `dep_graph`. i.e.,
 ```julia
 JumpProblem(prob, DirectCR(), jump1, jump2; dep_graph = your_dependency_graph)
 ```
@@ -388,7 +388,7 @@ when the `i`th jump occurs. Internally, all `MassActionJump`s are ordered before
 `ConstantRateJump`s and bounded `VariableRateJump`s. General `VariableRateJump`s
 are not handled by aggregators, and so not included in the jump ordering for
 dependency graphs. Note that the relative order between `ConstantRateJump`s and
-relative order between bounded `VariableRateJump`s is preserved. In this way one
+relative order between bounded `VariableRateJump`s is preserved. In this way, one
 can precalculate the jump order to manually construct dependency graphs.

 `RSSA` and `RSSACR` require two different types of dependency graphs, passed
@@ -401,7 +401,7 @@ through the following `JumpProblem` kwargs:
 value, `u[i]`, altered when the jump occurs.

 For systems generated from a [Catalyst](https://docs.sciml.ai/Catalyst/stable/)
-`reaction_network` these will be auto-generated. Otherwise you must explicitly
+`reaction_network` these will be auto-generated. Otherwise, you must explicitly
 construct and pass in these mappings.

 ## Recommendations for exact methods
@@ -430,7 +430,7 @@ For systems with only `ConstantRateJump`s and `MassActionJump`s,
 often substantially outperform the other methods.

 For pure jump systems, time-step using `SSAStepper()` with a `DiscreteProblem`
-unless one has general (i.e. non-bounded) `VariableRateJump`s.
+unless one has general (i.e., non-bounded) `VariableRateJump`s.

 In general, for systems with sparse dependency graphs if `Direct` is slow, one
 of `SortingDirect`, `RSSA` or `RSSACR` will usually offer substantially better
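As a concrete illustration of the `dep_graph` keyword discussed in this file, here is a hypothetical sketch (not part of the commit's diff); the two-state switching model and its dependency graph are invented for illustration, while the `JumpProblem(...; dep_graph = ...)` call mirrors the form shown in the documentation.

```julia
# Illustrative sketch only; not text from this commit's diff.
using JumpProcesses

# Two-state switching: u[1] -> u[2] at rate p[1]*u[1], u[2] -> u[1] at rate p[2]*u[2].
rate1(u, p, t) = p[1] * u[1]
affect1!(integrator) = (integrator.u[1] -= 1; integrator.u[2] += 1; nothing)
rate2(u, p, t) = p[2] * u[2]
affect2!(integrator) = (integrator.u[2] -= 1; integrator.u[1] += 1; nothing)
jump1 = ConstantRateJump(rate1, affect1!)
jump2 = ConstantRateJump(rate2, affect2!)

# dep_graph[i] lists the jumps whose rates must be recomputed after jump i
# fires. Both jumps here modify u[1] and u[2], so each depends on both.
dep_graph = [[1, 2], [1, 2]]

dprob = DiscreteProblem([10, 0], (0.0, 10.0), (1.0, 0.5))
jprob = JumpProblem(dprob, DirectCR(), jump1, jump2; dep_graph = dep_graph)
sol = solve(jprob, SSAStepper())
```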

0 commit comments

Comments
 (0)