
Commit 779798d

add Chmy tutorial to the docs
1 parent ab0ad0b commit 779798d

3 files changed: +295 -1 lines changed

docs/make.jl

Lines changed: 2 additions & 1 deletion
@@ -122,7 +122,8 @@ makedocs(;
         "Gravity code" => "man/gravity_code.md",
         "Numerical model setups" => "man/geodynamic_setups.md",
         "LaMEM" => "man/lamem.md",
-        "pTatin" => "man/lamem.md",
+        "pTatin" => "man/ptatin.md",
+        "Chmy" => "man/Tutorial_Chmy_MPI.md",
         "Profile Processing" => "man/profile_processing.md",
         "Gmsh" => "man/gmsh.md",
         "Movies" => "man/movies.md"

docs/src/man/Tutorial_Chmy_MPI.md

Lines changed: 160 additions & 0 deletions
```@meta
EditURL = "../../../tutorials/Tutorial_Chmy_MPI.jl"
```

# Create an initial model setup for Chmy and run it in parallel

## Aim
In this tutorial, you will learn how to use [Chmy](https://github.com/PTsolvers/Chmy.jl) to perform a 2D diffusion simulation
on one or multiple CPUs or GPUs.
`Chmy` is a package that allows you to specify grids and fields and create finite-difference simulations.

## 1. Load Chmy and required packages

```julia
using Chmy, Chmy.Architectures, Chmy.Grids, Chmy.Fields, Chmy.BoundaryConditions, Chmy.GridOperators, Chmy.KernelLaunch
using KernelAbstractions
using Printf
using CairoMakie
using GeophysicalModelGenerator
```

If you want to use GPUs, you need to determine whether you have an AMD or an NVIDIA GPU
and load the corresponding package:

    using AMDGPU
    AMDGPU.allowscalar(false)

    using CUDA
    CUDA.allowscalar(false)

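As an illustrative sketch (not part of the original tutorial), you can also select the backend programmatically and pass it to the `main` function defined below, assuming the corresponding GPU package has been loaded successfully:

```julia
# Hypothetical backend selection: fall back to the CPU if no NVIDIA GPU is usable.
# For AMD GPUs, use AMDGPU.functional() and ROCBackend() instead.
backend = CUDA.functional() ? CUDABackend() : CPU()
```
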
To run this in parallel you need to load the following:

```julia
using Chmy.Distributed
using MPI
MPI.Init()
```

## 2. Define computational routines
You need to specify a compute kernel for the fluxes (gradients):

```julia
@kernel inbounds = true function compute_q!(q, C, χ, g::StructuredGrid, O)
    I = @index(Global, NTuple)
    I = I + O
    q.x[I...] = -χ * ∂x(C, g, I...)
    q.y[I...] = -χ * ∂y(C, g, I...)
end
```

and a compute kernel to update the concentration:

```julia
@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)
    I = @index(Global, NTuple)
    I = I + O
    C[I...] -= Δt * divg(q, g, I...)
end
```

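Together, the two kernels perform one explicit finite-difference time step of the 2D diffusion equation (written here for reference; the notation mirrors the code above):

```math
q = -\chi \nabla C, \qquad
\frac{\partial C}{\partial t} = -\nabla \cdot q, \qquad
C^{n+1} = C^{n} - \Delta t \, \nabla \cdot q^{n}
```
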
Finally, a main function is required:

```julia
@views function main(backend=CPU(); nxy_l=(126, 126))
    arch = Arch(backend, MPI.COMM_WORLD, (0, 0))
    topo = topology(arch)
    me = global_rank(topo)

    # geometry
    dims_l = nxy_l
    dims_g = dims_l .* dims(topo)
    grid = UniformGrid(arch; origin=(-2, -2), extent=(4, 4), dims=dims_g)
    launch = Launcher(arch, grid, outer_width=(16, 8))

    #@info "mpi" me grid

    nx, ny = dims_g
    # physics
    χ = 1.0
    # numerics
    Δt = minimum(spacing(grid))^2 / χ / ndims(grid) / 2.1
    # allocate fields
    C = Field(backend, grid, Center())
    P = Field(backend, grid, Center(), Int32) # phases

    q = VectorField(backend, grid)
    C_v = (me == 0) ? KernelAbstractions.zeros(CPU(), Float64, size(interior(C)) .* dims(topo)) : nothing

    # Use the `GeophysicalModelGenerator` to set the initial conditions. Note that
    # you have to call this for a `Phases` and a `Temp` grid, which we call `C` here.
    add_box!(P, C, grid, xlim=(-1.0, 1.0), zlim=(-1.0, 1.0), phase=ConstantPhase(4), T=ConstantTemp(400))

    # set BCs and update the halo:
    bc!(arch, grid, C => Neumann(); exchange=C)

    # visualisation
    fig = Figure(; size=(400, 320))
    ax = Axis(fig[1, 1]; aspect=DataAspect(), xlabel="x", ylabel="y", title="it = 0")
    plt = heatmap!(ax, centers(grid)..., interior(C) |> Array; colormap=:turbo)
    Colorbar(fig[1, 2], plt)
    # action
    nt = 100
    for it in 1:nt
        (me == 0) && @printf("it = %d/%d \n", it, nt)
        launch(arch, grid, compute_q! => (q, C, χ, grid))
        launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))
    end
    KernelAbstractions.synchronize(backend)
    gather!(arch, C_v, C)
    if me == 0
        fig = Figure(; size=(400, 320))
        ax = Axis(fig[1, 1]; aspect=DataAspect(), xlabel="x", ylabel="y", title="it = $nt")
        plt = heatmap!(ax, C_v; colormap=:turbo) # how to get the global grid for axes?
        Colorbar(fig[1, 2], plt)
        save("out_gather_$nx.png", fig)
    end
    return
end
```

In the code above, the part that calls `GMG` is:

```julia
add_box!(P, C, grid, xlim=(-1.0, 1.0), zlim=(-1.0, 1.0), phase=ConstantPhase(4), T=ConstantTemp(400))
```

which works just like any other GMG setup function.

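Other GMG setup functions, such as `add_sphere!` or `add_ellipsoid!`, follow the same pattern of modifying a phase field and a temperature field on a grid. As a small sketch (not part of the tutorial), you could for instance add a second, hotter box on top of the first one:

```julia
# Hypothetical extra call: a second box with a different phase and temperature,
# placed above the first box on the same grid.
add_box!(P, C, grid, xlim=(-1.0, 1.0), zlim=(1.0, 1.5), phase=ConstantPhase(2), T=ConstantTemp(800))
```
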
## 3. Run the simulation on one CPU machine or GPU card

Running the code on a single CPU is done with:

```julia
n = 128
main(; nxy_l=(n, n) .- 2)
```

If you instead want to run this on an AMD or NVIDIA GPU, use:

```julia
# main(ROCBackend(); nxy_l=(n, n) .- 2)
# main(CUDABackend(); nxy_l=(n, n) .- 2)
```

And we need to finalize the simulation with:

```julia
MPI.Finalize()
```

## 4. Run the simulation on an MPI-parallel machine
If you want to run this on multiple cores, you will need to set up the [MPI.jl](https://github.com/JuliaParallel/MPI.jl) package,
so that the `mpiexecjl` launcher script is available on the command line.

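A minimal way to create that launcher (a sketch that assumes the default MPI.jl setup; see the MPI.jl documentation for details) is:

```julia
# Run this once from the Julia REPL in your project environment.
# By default the wrapper is installed into ~/.julia/bin,
# which may need to be added to your PATH.
using MPI
MPI.install_mpiexecjl()
```
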
You can then run the simulation with:

    mpiexecjl -n 4 --project=. julia Tutorial_Chmy_MPI.jl

The full file can be downloaded [here](../../../tutorials/Tutorial_Chmy_MPI.jl).

---

*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*

tutorials/Tutorial_Chmy_MPI.jl

Lines changed: 133 additions & 0 deletions
# # Create an initial model setup for Chmy and run it in parallel
#
# ## Aim
# In this tutorial, you will learn how to use [Chmy](https://github.com/PTsolvers/Chmy.jl) to perform a 2D diffusion simulation
# on one or multiple CPUs or GPUs.
# `Chmy` is a package that allows you to specify grids and fields and create finite-difference simulations.
#
# ## 1. Load Chmy and required packages
using Chmy, Chmy.Architectures, Chmy.Grids, Chmy.Fields, Chmy.BoundaryConditions, Chmy.GridOperators, Chmy.KernelLaunch
using KernelAbstractions
using Printf
using CairoMakie
using GeophysicalModelGenerator

# If you want to use GPUs, you need to determine whether you have an AMD or an NVIDIA GPU
# and load the corresponding package:
#=
using AMDGPU
AMDGPU.allowscalar(false)
using CUDA
CUDA.allowscalar(false)
=#

# To run this in parallel you need to load the following:
using Chmy.Distributed
using MPI
MPI.Init()

# ## 2. Define computational routines
# You need to specify a compute kernel for the fluxes (gradients):
@kernel inbounds = true function compute_q!(q, C, χ, g::StructuredGrid, O)
    I = @index(Global, NTuple)
    I = I + O
    q.x[I...] = -χ * ∂x(C, g, I...)
    q.y[I...] = -χ * ∂y(C, g, I...)
end

# and a compute kernel to update the concentration:
@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)
    I = @index(Global, NTuple)
    I = I + O
    C[I...] -= Δt * divg(q, g, I...)
end

# And a main function is required:
@views function main(backend=CPU(); nxy_l=(126, 126))
    arch = Arch(backend, MPI.COMM_WORLD, (0, 0))
    topo = topology(arch)
    me = global_rank(topo)

    ## geometry
    dims_l = nxy_l
    dims_g = dims_l .* dims(topo)
    grid = UniformGrid(arch; origin=(-2, -2), extent=(4, 4), dims=dims_g)
    launch = Launcher(arch, grid, outer_width=(16, 8))

    ##@info "mpi" me grid

    nx, ny = dims_g
    ## physics
    χ = 1.0
    ## numerics
    Δt = minimum(spacing(grid))^2 / χ / ndims(grid) / 2.1
    ## allocate fields
    C = Field(backend, grid, Center())
    P = Field(backend, grid, Center(), Int32) # phases

    q = VectorField(backend, grid)
    C_v = (me == 0) ? KernelAbstractions.zeros(CPU(), Float64, size(interior(C)) .* dims(topo)) : nothing

    ## Use the `GeophysicalModelGenerator` to set the initial conditions. Note that
    ## you have to call this for a `Phases` and a `Temp` grid, which we call `C` here.
    add_box!(P, C, grid, xlim=(-1.0, 1.0), zlim=(-1.0, 1.0), phase=ConstantPhase(4), T=ConstantTemp(400))

    ## set BCs and update the halo:
    bc!(arch, grid, C => Neumann(); exchange=C)

    ## visualisation
    fig = Figure(; size=(400, 320))
    ax = Axis(fig[1, 1]; aspect=DataAspect(), xlabel="x", ylabel="y", title="it = 0")
    plt = heatmap!(ax, centers(grid)..., interior(C) |> Array; colormap=:turbo)
    Colorbar(fig[1, 2], plt)
    ## action
    nt = 100
    for it in 1:nt
        (me == 0) && @printf("it = %d/%d \n", it, nt)
        launch(arch, grid, compute_q! => (q, C, χ, grid))
        launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))
    end
    KernelAbstractions.synchronize(backend)
    gather!(arch, C_v, C)
    if me == 0
        fig = Figure(; size=(400, 320))
        ax = Axis(fig[1, 1]; aspect=DataAspect(), xlabel="x", ylabel="y", title="it = $nt")
        plt = heatmap!(ax, C_v; colormap=:turbo) # how to get the global grid for axes?
        Colorbar(fig[1, 2], plt)
        save("out_gather_$nx.png", fig)
    end
    return
end

# In the code above, the part that calls `GMG` is:
#
# ```julia
# add_box!(P, C, grid, xlim=(-1.0, 1.0), zlim=(-1.0, 1.0), phase=ConstantPhase(4), T=ConstantTemp(400))
# ```
# which works just like any other GMG setup function.

# ## 3. Run the simulation on one CPU machine or GPU card
#
# Running the code on a single CPU is done with:
n = 128
main(; nxy_l=(n, n) .- 2)

# If you instead want to run this on an AMD or NVIDIA GPU, use:
## main(ROCBackend(); nxy_l=(n, n) .- 2)
## main(CUDABackend(); nxy_l=(n, n) .- 2)

# And we need to finalize the simulation with:
MPI.Finalize()

# ## 4. Run the simulation on an MPI-parallel machine
# If you want to run this on multiple cores, you will need to set up the [MPI.jl](https://github.com/JuliaParallel/MPI.jl) package,
# so that the `mpiexecjl` launcher script is available on the command line.
#
# You can then run the simulation with:
#     mpiexecjl -n 4 --project=. julia Tutorial_Chmy_MPI.jl

# The full file can be downloaded [here](../../../tutorials/Tutorial_Chmy_MPI.jl).

#src Note: The markdown page is generated using:
#src Literate.markdown("tutorials/Tutorial_Chmy_MPI.jl", "docs/src/man", keepcomments=true, execute=false, codefence = "```julia" => "```")
