This repository was archived by the owner on Sep 28, 2024. It is now read-only.

Commit 1e28d83: Update readme
1 parent cc3fbe9 commit 1e28d83

File tree: 1 file changed, +16 -23 lines

README.md (16 additions, 23 deletions)
@@ -36,7 +36,9 @@ It performs Fourier transformation across infinite-dimensional function spaces a
 Trained on information from only one time step, it can predict the following few steps with low loss
 by linking the operators into a Markov chain.
 
-**DeepONet operator** (Deep Operator Network)learns a neural operator with the help of two sub-neural net structures described as the branch and the trunk network. The branch network is fed the initial conditions data, whereas the trunk is fed with the locations where the target(output) is evaluated from the corresponding initial conditions. It is important that the output size of the branch and trunk subnets is same so that a dot product can be performed between them.
+**DeepONet operator** (Deep Operator Network) learns a neural operator with the help of two sub-neural net structures, described as the branch and the trunk network.
+The branch network is fed the initial conditions data, whereas the trunk is fed the locations where the target (output) is evaluated from the corresponding initial conditions.
+It is important that the output sizes of the branch and trunk subnets are the same, so that a dot product can be performed between them.
 
 Currently, the `OperatorKernel` layer is provided in this work.
 As for models, `FourierNeuralOperator` and `MarkovNeuralOperator` are provided. Please take a glance at them [here](src/model.jl).
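To make the Markov-chain idea concrete, here is a minimal rollout sketch. It assumes a trained `MarkovNeuralOperator` bound to `model` and an initial state `u0`; both names are illustrative, not part of this commit.

```julia
# assumes `model` is a trained MarkovNeuralOperator and `u0` the state
# at a single time step, e.g. an array shaped (channels, x, y, batch)
u = u0
states = [u]
for _ in 1:5          # predict the following few steps
    u = model(u)      # one application of the learned time-step operator
    push!(states, u)  # the chain of predictions, step by step
end
```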
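The dot-product constraint on DeepONet's two subnets can be seen in a small sketch. This is an illustrative reimplementation of the idea, not the package's `DeepONet` code; the subnet sizes mirror the example below, and the inputs are random placeholders.

```julia
using Flux

# the branch eats initial-condition samples, the trunk eats a query location;
# both end in width 72 so their outputs can be combined by a dot product
branch = Chain(Dense(32, 64, σ), Dense(64, 72, σ))
trunk  = Chain(Dense(24, 64, tanh), Dense(64, 72, tanh))

u = rand(Float32, 32)   # initial condition sampled at 32 sensor points
y = rand(Float32, 24)   # one evaluation location (24 input features)

prediction = sum(branch(u) .* trunk(y))   # inner product of the embeddings
```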
@@ -51,10 +53,10 @@ model = Chain(
     # here, d == 1 and n == 64
     Dense(2, 64),
     # map each hidden representation to the next by the integral kernel operator
-    OperatorKernel(64=>64, (16, ), gelu),
-    OperatorKernel(64=>64, (16, ), gelu),
-    OperatorKernel(64=>64, (16, ), gelu),
-    OperatorKernel(64=>64, (16, )),
+    OperatorKernel(64=>64, (16, ), FourierTransform, gelu),
+    OperatorKernel(64=>64, (16, ), FourierTransform, gelu),
+    OperatorKernel(64=>64, (16, ), FourierTransform, gelu),
+    OperatorKernel(64=>64, (16, ), FourierTransform),
     # project back to the space of the scalar field of interest
     Dense(64, 128, gelu),
     Dense(128, 1),
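A hedged usage sketch for the chain above; the channel-first array layout and the grid and batch sizes are assumptions for illustration, not values stated in this commit.

```julia
# illustrative shapes: 2 input channels per grid point (field value and
# coordinate, matching the Dense(2, 64) lift), 1024 grid points, batch of 8
x = rand(Float32, 2, 1024, 8)
ŷ = model(x)   # expected (1, 1024, 8): the predicted scalar field
```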
@@ -83,27 +85,28 @@ Flux.@epochs 50 Flux.train!(loss, params(model), data, opt)
 ### DeepONet
 
 ```julia
-#tuple of Ints for branch net architecture and then for trunk net, followed by activations for branch and trunk respectively
-model = DeepONet((32,64,72), (24,64,72), σ, tanh)
+# tuple of Ints for branch net architecture and then for trunk net,
+# followed by activations for branch and trunk respectively
+model = DeepONet((32, 64, 72), (24, 64, 72), σ, tanh)
 ```
 
 Or specify the branch and trunk as separate Flux `Chain`s and pass them to `DeepONet`:
 
 ```julia
-branch = Chain(Dense(32,64,σ), Dense(64,72,σ))
-trunk = Chain(Dense(24,64,tanh), Dense(64,72,tanh))
-model = DeepONet(branch,trunk)
+branch = Chain(Dense(32, 64, σ), Dense(64, 72, σ))
+trunk = Chain(Dense(24, 64, tanh), Dense(64, 72, tanh))
+model = DeepONet(branch, trunk)
 ```
 
 You can again specify the loss, optimization, and training parameters just as you would for a simple neural network with Flux.
 
 ```julia
-loss(xtrain,ytrain,sensor) = Flux.Losses.mse(model(xtrain,sensor),ytrain)
-evalcb() = @show(loss(xval,yval,grid))
+loss(xtrain, ytrain, sensor) = Flux.Losses.mse(model(xtrain, sensor), ytrain)
+evalcb() = @show(loss(xval, yval, grid))
 
 learning_rate = 0.001
 opt = ADAM(learning_rate)
 parameters = params(model)
-Flux.@epochs 400 Flux.train!(loss, parameters, [(xtrain,ytrain,grid)], opt, cb = evalcb)
+Flux.@epochs 400 Flux.train!(loss, parameters, [(xtrain, ytrain, grid)], opt, cb=evalcb)
 ```
 
 ## Examples
@@ -130,16 +133,6 @@ PDE training examples are provided in `example` folder.
 
 [Super resolution on time dependent Navier-Stokes equation](example/SuperResolution)
 
-## Roadmap
-
-- [x] `OperatorKernel` layer
-- [x] One-dimensional Burgers' equation example
-- [x] Two-dimensional with time Navier-Stokes equations example
-- [x] `MarkovNeuralOperator` model
-- [x] Flow over a circle prediction example
-- [ ] `NeuralOperator` layer
-- [ ] Poisson equation example
-
 ## References
 
 - [Fourier Neural Operator for Parametric Partial Differential Equations](https://arxiv.org/abs/2010.08895)
