
Commit 378803c

Merge pull request #232 from SciML/diffeqflux
Remove DiffEqFlux from the doc build
2 parents: 331164e + f449ef0

7 files changed: +45 additions, −22 deletions

.buildkite/documentation.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -29,7 +29,7 @@ steps:
       DATADEPS_ALWAYS_ACCEPT: true
       JULIA_DEBUG: "Documenter"
     if: build.message !~ /\[skip docs\]/ && !build.pull_request.draft
-    timeout_in_minutes: 1000
+    timeout_in_minutes: 2000

 env:
   JULIA_PKG_SERVER: "" # it often struggles with our large artifacts
```

docs/Project.toml

Lines changed: 1 addition & 2 deletions

```diff
@@ -7,7 +7,6 @@ ComponentArrays = "b0b7db55-cfe3-40fc-9ded-d10e2dbeff66"
 DataDrivenDiffEq = "2445eb08-9709-466a-b3fc-47e12bd697a2"
 DataDrivenSparse = "5b588203-7d8b-4fab-a537-c31a7f73f46b"
 DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
-DiffEqFlux = "aae7a2af-3d4f-5e19-a356-7da93b79d9d0"
 DiffEqGPU = "071ae1c0-96b5-11e9-1965-c90190d839ea"
 DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
 Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
@@ -22,6 +21,7 @@ LineSearches = "d3d80556-e9d4-5f37-9878-2ab0fcc64255"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 LinearSolve = "7ed4a6bd-45f5-4d41-b270-4a48e9bafcae"
 Lux = "b2108857-7c20-44ae-9111-449ecde12c47"
+LuxCUDA = "d0bbae9a-e099-4d5b-a835-1c6931763bda"
 MCMCChains = "c7f686f2-ff18-58e9-bc7b-31028e88f75d"
 Measurements = "eff96d63-e80a-5855-80a2-b1b0885c5ab7"
 MethodOfLines = "94925ecb-adb7-4558-8ed8-f975c56a0bf4"
@@ -58,7 +58,6 @@ ComponentArrays = "0.15"
 DataDrivenDiffEq = "1.4"
 DataDrivenSparse = "0.1"
 DataFrames = "1"
-DiffEqFlux = "3"
 DiffEqGPU = "3"
 DifferentialEquations = "7"
 Distributions = "0.25"
```

docs/make.jl

Lines changed: 6 additions & 1 deletion

```diff
@@ -22,7 +22,12 @@ makedocs(sitename = "Overview of Julia's SciML",
     modules = Module[],
     clean = true, doctest = false, linkcheck = true,
     linkcheck_ignore = ["https://twitter.com/ChrisRackauckas/status/1477274812460449793",
-        "https://epubs.siam.org/doi/10.1137/0903023"],
+        "https://epubs.siam.org/doi/10.1137/0903023",
+        "https://bkamins.github.io/julialang/2020/12/24/minilanguage.html",
+        "https://arxiv.org/abs/2109.06786",
+        "https://arxiv.org/abs/2001.04385",
+    ],
     format = Documenter.HTML(assets = ["assets/favicon.ico"],
         canonical = "https://docs.sciml.ai/stable/",
         mathengine = mathengine),
```
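The `linkcheck_ignore` list above grows as external sites begin rejecting automated link checks. A minimal sketch of a `Documenter.makedocs` call using this option (hypothetical site name and reduced argument list; assumes Documenter.jl is installed and a `docs/src` tree exists):

```julia
# Minimal doc build sketch: linkcheck = true verifies external URLs during
# the build, while linkcheck_ignore skips hosts that block automated requests.
using Documenter

makedocs(sitename = "MyDocs",
    clean = true, doctest = false, linkcheck = true,
    linkcheck_ignore = [
        "https://twitter.com/ChrisRackauckas/status/1477274812460449793",
    ],
    format = Documenter.HTML())
```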

docs/src/getting_started/find_root.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -152,7 +152,7 @@ prob2 = NonlinearProblem(ns, [x => 2.0, σ => 4.0])
 ### Step 4: Solve the Numerical Problem

 Now we solve the nonlinear system. For this, we choose a solver from the
-[NonlinearSolve.jl's solver options.](https://docs.sciml.ai/NonlinearSolve/stable/solvers/NonlinearSystemSolvers/)
+[NonlinearSolve.jl's solver options.](https://docs.sciml.ai/NonlinearSolve/stable/solvers/nonlinear_system_solvers/)
 We will choose `NewtonRaphson` as follows:

 ```@example first_rootfind
@@ -171,7 +171,7 @@ typeof(sol)

 From this, we can see that it is an `NonlinearSolution`. We can see the documentation for
 how to use the `NonlinearSolution` by checking the
-[NonlinearSolve.jl solution type page.](https://docs.sciml.ai/NonlinearSolve/stable/basics/NonlinearSolution/)
+[NonlinearSolve.jl solution type page.](https://docs.sciml.ai/NonlinearSolve/stable/basics/nonlinear_solution/)
 For example, the solution is stored as `.u`.
 What is the solution to our nonlinear system, and what is the final residual value?
 We can check it as follows:
````
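The renamed solver and solution-type pages document the same API as before. As a reminder of that workflow, here is a minimal self-contained root find (a toy scalar function, not the tutorial's ModelingToolkit system; assumes NonlinearSolve.jl is installed):

```julia
using NonlinearSolve

# Solve f(u, p) = u^2 - p = 0 starting from u0 = [1.0] with p = 2.0.
f(u, p) = u .^ 2 .- p
prob = NonlinearProblem(f, [1.0], 2.0)
sol = solve(prob, NewtonRaphson())

sol.u      # the root, ≈ [sqrt(2)]
sol.resid  # the final residual, ≈ [0.0]
```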

docs/src/highlevels/learning_resources.md

Lines changed: 1 addition & 2 deletions

```diff
@@ -33,9 +33,8 @@ classic SIR epidemic model.
 ## Other Books Featuring SciML

 - [Nonlinear Dynamics: A Concise Introduction Interlaced with Code](https://link.springer.com/book/10.1007/978-3-030-91032-7)
-
 - [Numerical Methods for Scientific Computing: The Definitive Manual for Math Geeks](https://www.equalsharepress.com/)
-- [Fundamentals of Numerical Computation](https://tobydriscoll.net/project/fnc/)
+- [Fundamentals of Numerical Computation](https://tobydriscoll.net/fnc-julia/frontmatter.html)
 - [Statistics with Julia](https://statisticswithjulia.org/)
 - [Statistical Rethinking with Julia](https://shmuma.github.io/rethinking-2ed-julia/)
 - [The Koopman Operator in Systems and Control](https://www.springer.com/gp/book/9783030357122)
```

docs/src/showcase/bayesian_neural_ode.md

Lines changed: 12 additions & 4 deletions

````diff
@@ -16,7 +16,7 @@ For this example, we will need the following libraries:

 ```@example bnode
 # SciML Libraries
-using DiffEqFlux, DifferentialEquations
+using SciMLSensitivity, DifferentialEquations

 # ML Tools
 using Lux, Zygote
@@ -56,21 +56,29 @@ complicated architecture can take a huge computational time without increasing p
 dudt2 = Lux.Chain(x -> x .^ 3,
     Lux.Dense(2, 50, tanh),
     Lux.Dense(50, 2))
-prob_neuralode = NeuralODE(dudt2, tspan, Tsit5(), saveat = tsteps)
+
 rng = Random.default_rng()
 p, st = Lux.setup(rng, dudt2)
+const _st = st
+function neuralodefunc(u, p, t)
+    dudt2(u, p, _st)[1]
+end
+function prob_neuralode(u0, p)
+    prob = ODEProblem(neuralodefunc, u0, tspan, p)
+    sol = solve(prob, Tsit5(), saveat = tsteps)
+end
 p = ComponentArray{Float64}(p)
 const _p = p
 ```

-Note that the `f64` is required to put the Flux neural network into Float64 precision.
+Note that the `f64` is required to put the Lux neural network into Float64 precision.

 ## Step 3: Define the loss function for the Neural ODE.

 ```@example bnode
 function predict_neuralode(p)
     p = p isa ComponentArray ? p : convert(typeof(_p),p)
-    Array(prob_neuralode(u0, p, st)[1])
+    Array(prob_neuralode(u0, p))
 end
 function loss_neuralode(p)
     pred = predict_neuralode(p)
````
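The replacement above rebuilds by hand what DiffEqFlux's `NeuralODE` did: close over the network state, wrap the right-hand side in an `ODEProblem`, and solve it. The same pattern with a plain (non-neural) parameterized RHS, as a minimal sketch assuming only OrdinaryDiffEq is installed:

```julia
using OrdinaryDiffEq

# RHS in the same (u, p, t) form as `neuralodefunc` above, with the
# network swapped for scalar exponential decay: du/dt = -p * u.
decay(u, p, t) = -p .* u

# Analogue of the hand-rolled `prob_neuralode(u0, p)` wrapper.
function solve_decay(u0, p)
    prob = ODEProblem(decay, u0, (0.0, 1.0), p)
    solve(prob, Tsit5(), saveat = 0.25)
end

sol = solve_decay([1.0], 1.0)
sol.u[end]  # ≈ [exp(-1)]
```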

docs/src/showcase/pinngpu.md

Lines changed: 22 additions & 10 deletions

````diff
@@ -19,10 +19,22 @@ neural network `NN` satisfies the PDE equations and is thus the solution to the
 our packages look like:

 ```@example pinn
+# High Level Interface
 using NeuralPDE
-using Optimization, OptimizationOptimisers
 import ModelingToolkit: Interval
-using Plots, Printf, Lux, CUDA, ComponentArrays, Random
+
+# Optimization Libraries
+using Optimization, OptimizationOptimisers
+
+# Machine Learning Libraries and Helpers
+using Lux, LuxCUDA, ComponentArrays
+const gpud = gpu_device() # allocate a GPU device
+
+# Standard Libraries
+using Printf, Random
+
+# Plotting
+using Plots
 ```

 ## Problem Setup
@@ -91,15 +103,15 @@ domains = [t ∈ Interval(t_min, t_max),
 ```

 !!! note
-
+
     We used the wildcard form of the variable definition `@variables u(..)` which then
     requires that we always specify what the dependent variables of `u` are. This is because in the boundary conditions we change from using `u(t,x,y)` to
     more specific points and lines, like `u(t,x_max,y)`.

 ## Step 3: Define the Lux Neural Network

-Now let's define the neural network that will act as our solution. We will use a simple
-multi-layer perceptron, like:
+Now let's define the neural network that will act as our solution.
+We will use a simple multi-layer perceptron, like:

 ```@example pinn
 using Lux
@@ -110,17 +122,17 @@ chain = Chain(Dense(3, inner, Lux.σ),
     Dense(inner, inner, Lux.σ),
     Dense(inner, 1))
 ps = Lux.setup(Random.default_rng(), chain)[1]
-ps = ps |> ComponentArray
 ```

 ## Step 4: Place it on the GPU.

 Just plop it on that sucker. We must ensure that our initial parameters for the neural
 network are on the GPU. If that is done, then the internal computations will all take place
-on the GPU. This is done by using the `gpu` function on the initial parameters, like:
+on the GPU. This is done by using the `gpud` function (i.e. the GPU
+device we created at the start) on the initial parameters, like:

 ```@example pinn
-ps = ps |> gpu .|> Float64
+ps = ps |> ComponentArray |> gpud .|> Float64
 ```

 ## Step 5: Discretize the PDE via a PINN Training Strategy
@@ -160,15 +172,15 @@ Finally, we inspect the solution:
 phi = discretization.phi
 ts, xs, ys = [infimum(d.domain):0.1:supremum(d.domain) for d in domains]
 u_real = [analytic_sol_func(t, x, y) for t in ts for x in xs for y in ys]
-u_predict = [first(Array(phi(gpu([t, x, y]), res.u))) for t in ts for x in xs for y in ys]
+u_predict = [first(Array(phi([t, x, y], res.u))) for t in ts for x in xs for y in ys]

 function plot_(res)
     # Animate
     anim = @animate for (i, t) in enumerate(0:0.05:t_max)
         @info "Animating frame $i..."
         u_real = reshape([analytic_sol_func(t, x, y) for x in xs for y in ys],
             (length(xs), length(ys)))
-        u_predict = reshape([Array(phi(gpu([t, x, y]), res.u))[1] for x in xs for y in ys],
+        u_predict = reshape([Array(phi([t, x, y], res.u))[1] for x in xs for y in ys],
             length(xs), length(ys))
         u_error = abs.(u_predict .- u_real)
         title = @sprintf("predict, t = %.3f", t)
````
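The new `ps |> ComponentArray |> gpud .|> Float64` chain first flattens the parameters, then moves them to the device, then broadcasts a precision conversion. The conversion half of that idiom can be sketched device-free in plain Julia (hypothetical stand-in data; no GPU or extra packages required):

```julia
# Pipe into a transformation, then broadcast-pipe (.|>) an element-wise
# conversion, mirroring `ps |> ComponentArray |> gpud .|> Float64`
# with `copy` standing in for the device transfer.
ps = rand(Float32, 4)          # stand-in for network parameters
ps64 = ps |> copy .|> Float64

eltype(ps64)  # Float64
```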
