This repository was archived by the owner on Sep 28, 2024. It is now read-only.

Commit 744df70

Updated docs and readme
1 parent 1e9b134 commit 744df70

File tree: 2 files changed, +55 −1 lines

README.md

Lines changed: 22 additions & 0 deletions
@@ -86,8 +86,26 @@ DeepONet

```julia
# tuple of Ints for the branch net architecture and then for the trunk net, followed by activations for the branch and trunk respectively
model = DeepONet((32, 64, 72), (24, 64, 72), σ, tanh)
```

Or specify the branch and trunk as separate Flux `Chain`s and pass them to `DeepONet`:

```julia
branch = Chain(Dense(32, 64, σ), Dense(64, 72, σ))
trunk = Chain(Dense(24, 64, tanh), Dense(64, 72, tanh))
model = DeepONet(branch, trunk)
```

You can again specify the loss, optimization and training parameters just as you would for a simple neural network with Flux.

```julia
loss(xtrain, ytrain, sensor) = Flux.Losses.mse(model(xtrain, sensor), ytrain)
evalcb() = @show(loss(xval, yval, grid))

learning_rate = 0.001
opt = ADAM(learning_rate)
parameters = params(model)
Flux.@epochs 400 Flux.train!(loss, parameters, [(xtrain, ytrain, grid)], opt, cb = evalcb)
```

## Examples

PDE training examples are provided in the `example` folder.

@@ -96,6 +114,10 @@ PDE training examples are provided in the `example` folder.

[Burgers' equation](example/Burgers)

### DeepONet implementation for solving Burgers' equation

[Burgers' equation](example/Burgers/Burgers_deeponet)

### Two-dimensional Fourier Neural Operator

[Double Pendulum](example/DoublePendulum)
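
Whichever constructor is used, both nets must end in the same output width (72 in the snippets above), because `DeepONet` combines the two outputs with a dot product. A minimal Base-Julia sketch of that final step, using random stand-in vectors rather than the library's actual code:

```julia
# The last layer of both the branch and trunk nets above has width 72.
# DeepONet's prediction at a single query location is the dot product
# of the two 72-element output vectors (stand-ins generated here).
b = rand(Float32, 72)        # stand-in for branch(u), u = sensor values
t = rand(Float32, 72)        # stand-in for trunk(y), y = query location
prediction = sum(b .* t)     # G(u)(y) ≈ ⟨b(u), t(y)⟩, a single scalar
```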

docs/src/index.md

Lines changed: 33 additions & 1 deletion
@@ -10,7 +10,7 @@ Documentation for [NeuralOperators](https://github.com/foldfelis/NeuralOperators

|:----------------:|:--------------:|
| ![](https://github.com/foldfelis/NeuralOperators.jl/blob/master/example/FlowOverCircle/gallery/ans.gif?raw=true) | ![](https://github.com/foldfelis/NeuralOperators.jl/blob/master/example/FlowOverCircle/gallery/inferenced.gif?raw=true) |

- The demonstration showing above is Navier-Stokes equation learned by the `MarkovNeuralOperator` with only one time step information.
+ The demonstration shown above is the Navier-Stokes equation learned by the `MarkovNeuralOperator` with only one time step of information.

An example can be found in [`example/FlowOverCircle`](https://github.com/foldfelis/NeuralOperators.jl/tree/master/example/FlowOverCircle).

## Abstract

@@ -30,6 +30,8 @@ It performs Fourier transformation across infinite-dimensional function spaces

With only one time step of information for learning, it can predict the following few steps with low loss
by linking the operators into a Markov chain.

**DeepONet operator** (Deep Operator Network) learns a neural operator with the help of two sub-neural-network structures, described as the branch and the trunk network. The branch network is fed the initial conditions data, whereas the trunk is fed the locations where the target (output) is evaluated from the corresponding initial conditions. It is important that the output sizes of the branch and trunk subnets are the same so that a dot product can be performed between them.
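
The dot-product combination described above can be sketched in a few lines of plain Julia (a batched illustration with random stand-in matrices, not the library's implementation):

```julia
# Branch and trunk each map into a shared p-dimensional space; for a batch of
# m input functions and n query locations, stack the outputs column-wise:
p, m, n = 72, 4, 10
B = rand(p, m)      # stand-in for the branch outputs, one column per input
T = rand(p, n)      # stand-in for the trunk outputs, one column per location
G = B' * T          # m×n predictions: G[i, j] ≈ ⟨B[:, i], T[:, j]⟩
```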
Currently, the `FourierOperator` layer is provided in this work.
As for models, `FourierNeuralOperator` and `MarkovNeuralOperator` are provided.
Please take a glance at them [here](apis.html#Models).

@@ -44,6 +46,8 @@ pkg> add NeuralOperators

## Usage

### Fourier Neural Operator

```julia
model = Chain(
    # lift (d + 1)-dimensional vector field to n-dimensional vector field
    ...
)
```

@@ -78,3 +82,31 @@ loss(𝐱, 𝐲) = sum(abs2, 𝐲 .- model(𝐱)) / size(𝐱)[end]

```julia
opt = Flux.Optimiser(WeightDecay(1f-4), Flux.ADAM(1f-3))
Flux.@epochs 50 Flux.train!(loss, params(model), data, opt)
```

### DeepONet

```julia
# tuple of Ints for the branch net architecture and then for the trunk net, followed by activations for the branch and trunk respectively
model = DeepONet((32, 64, 72), (24, 64, 72), σ, tanh)
```

Or specify the branch and trunk as separate Flux `Chain`s and pass them to `DeepONet`:

```julia
branch = Chain(Dense(32, 64, σ), Dense(64, 72, σ))
trunk = Chain(Dense(24, 64, tanh), Dense(64, 72, tanh))
model = DeepONet(branch, trunk)
```

You can again specify the loss, optimization and training parameters just as you would for a simple neural network with Flux.

```julia
loss(xtrain, ytrain, sensor) = Flux.Losses.mse(model(xtrain, sensor), ytrain)
evalcb() = @show(loss(xval, yval, grid))

learning_rate = 0.001
opt = ADAM(learning_rate)
parameters = params(model)
Flux.@epochs 400 Flux.train!(loss, parameters, [(xtrain, ytrain, grid)], opt, cb = evalcb)
```

A more complete example using the DeepONet architecture to solve Burgers' equation can be found in the [examples]().
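
The two loss styles shown above (the hand-written `sum(abs2, ...)` form in the Fourier example and `Flux.Losses.mse` in the DeepONet example) compute closely related quantities; a small Base-Julia check of the mean-squared-error formula, with stand-in vectors and no Flux required:

```julia
# mse is the mean of squared differences; Flux.Losses.mse computes the same
# quantity by default (the values below are illustrative stand-ins).
ŷ = Float32[1.0, 2.0, 3.0]            # stand-in model predictions
y = Float32[1.0, 2.5, 2.0]            # stand-in targets
mse = sum(abs2, ŷ .- y) / length(y)   # (0 + 0.25 + 1.0) / 3
```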
