```@meta
EditURL = "../../../tutorials/Tutorial_Chmy_MPI.jl"
```

# Create an initial model setup for Chmy and run it in parallel

## Aim
In this tutorial, you will learn how to use [Chmy](https://github.com/PTsolvers/Chmy.jl) to perform a 2D diffusion simulation
on one or multiple CPUs or GPUs.
`Chmy` is a package that allows you to specify grids and fields and create finite difference simulations.

## 1. Load Chmy and required packages

```julia
using Chmy, Chmy.Architectures, Chmy.Grids, Chmy.Fields, Chmy.BoundaryConditions, Chmy.GridOperators, Chmy.KernelLaunch
using KernelAbstractions
using Printf
using CairoMakie
using GeophysicalModelGenerator
```

In case you want to use GPUs, you need to determine whether you have AMD or NVIDIA GPUs
and load the corresponding package:
    using AMDGPU
    AMDGPU.allowscalar(false)
    using CUDA
    CUDA.allowscalar(false)

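A minimal sketch of such a runtime check, assuming the NVIDIA/CUDA.jl route (the AMD case is analogous with `ROCBackend()`), could look like this; the resulting `backend` can later be passed to the `main` function defined below:

```julia
# sketch: use the GPU backend if a functional NVIDIA device is found, otherwise fall back to the CPU
using CUDA
CUDA.allowscalar(false)
backend = CUDA.functional() ? CUDABackend() : CPU()
```
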
To run this in parallel, you need to load the following:

```julia
using Chmy.Distributed
using MPI
MPI.Init()
```

## 2. Define computational routines
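
The two kernels below discretize the linear diffusion equation in flux form, with concentration `C`, flux `q` and diffusivity `χ`:

```math
q = -\chi \nabla C, \qquad \frac{\partial C}{\partial t} = -\nabla \cdot q
```
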
You need to specify a compute kernel that computes the fluxes from the gradients of `C`:

```julia
@kernel inbounds = true function compute_q!(q, C, χ, g::StructuredGrid, O)
    I = @index(Global, NTuple)
    I = I + O
    q.x[I...] = -χ * ∂x(C, g, I...)
    q.y[I...] = -χ * ∂y(C, g, I...)
end
```

You also need to specify a compute kernel to update the concentration:

```julia
@kernel inbounds = true function update_C!(C, q, Δt, g::StructuredGrid, O)
    I = @index(Global, NTuple)
    I = I + O
    C[I...] -= Δt * divg(q, g, I...)
end
```

Finally, a main function is required:

```julia
@views function main(backend=CPU(); nxy_l=(126, 126))
    arch = Arch(backend, MPI.COMM_WORLD, (0, 0))
    topo = topology(arch)
    me = global_rank(topo)

    # geometry
    dims_l = nxy_l
    dims_g = dims_l .* dims(topo)
    grid = UniformGrid(arch; origin=(-2, -2), extent=(4, 4), dims=dims_g)
    launch = Launcher(arch, grid, outer_width=(16, 8))

    ##@info "mpi" me grid

    nx, ny = dims_g
    # physics
    χ = 1.0
    # numerics
    Δt = minimum(spacing(grid))^2 / χ / ndims(grid) / 2.1
    # allocate fields
    C = Field(backend, grid, Center())
    P = Field(backend, grid, Center(), Int32) # phases

    q = VectorField(backend, grid)
    C_v = (me==0) ? KernelAbstractions.zeros(CPU(), Float64, size(interior(C)) .* dims(topo)) : nothing

    # Use `GeophysicalModelGenerator` to set the initial conditions. Note that
    # you have to pass a `Phases` field (`P`) and a `Temp` field (called `C` here).
    add_box!(P, C, grid, xlim=(-1.0,1.0), zlim=(-1.0,1.0), phase=ConstantPhase(4), T=ConstantTemp(400))

    # set BCs and update the halo:
    bc!(arch, grid, C => Neumann(); exchange=C)

    # visualisation
    fig = Figure(; size=(400, 320))
    ax = Axis(fig[1, 1]; aspect=DataAspect(), xlabel="x", ylabel="y", title="it = 0")
    plt = heatmap!(ax, centers(grid)..., interior(C) |> Array; colormap=:turbo)
    Colorbar(fig[1, 2], plt)
    # action
    nt = 100
    for it in 1:nt
        (me==0) && @printf("it = %d/%d \n", it, nt)
        launch(arch, grid, compute_q! => (q, C, χ, grid))
        launch(arch, grid, update_C! => (C, q, Δt, grid); bc=batch(grid, C => Neumann(); exchange=C))
    end
    KernelAbstractions.synchronize(backend)
    gather!(arch, C_v, C)
    if me == 0
        fig = Figure(; size=(400, 320))
        ax = Axis(fig[1, 1]; aspect=DataAspect(), xlabel="x", ylabel="y", title="it = $nt")
        plt = heatmap!(ax, C_v; colormap=:turbo) # how to get the global grid for axes?
        Colorbar(fig[1, 2], plt)
        save("out_gather_$nx.png", fig)
    end
    return
end
```

In the code above, the part that calls `GMG` is:

```julia
add_box!(P, C, grid, xlim=(-1.0,1.0), zlim=(-1.0,1.0), phase=ConstantPhase(4), T=ConstantTemp(400))
```
which works just like any of the other GMG functions.
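
For example, a second box with a different phase and temperature could be added in exactly the same way; this is purely an illustrative sketch with arbitrary values:

```julia
# illustrative only: stack a second, smaller box with a different phase and temperature
add_box!(P, C, grid, xlim=(-0.5, 0.5), zlim=(0.0, 1.0), phase=ConstantPhase(2), T=ConstantTemp(800))
```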

## 3. Run the simulation on one CPU machine or GPU card

Running the code on the CPU is done with this:

```julia
n = 128
main(; nxy_l=(n, n) .- 2)
```

If you instead want to run this on AMD or NVIDIA GPUs, do this:

```julia
# main(ROCBackend(); nxy_l=(n, n) .- 2)
# main(CUDABackend(); nxy_l=(n, n) .- 2)
```

And we need to finalize the simulation with:

```julia
MPI.Finalize()
```

## 4. Run the simulation on an MPI-parallel machine
If you want to run this on multiple cores, you will need to set up the [MPI.jl](https://github.com/JuliaParallel/MPI.jl) package,
such that the `mpiexecjl` launcher is available on the command line.
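
One way to do this is to install the wrapper script from within Julia (a sketch; see the MPI.jl documentation for details). By default it is placed in the `bin` folder of your Julia depot, typically `~/.julia/bin`, which you may need to add to your `PATH`:

```julia
# install the `mpiexecjl` wrapper script that launches Julia under MPI
using MPI
MPI.install_mpiexecjl()
```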

You can then run it with:

    mpiexecjl -n 4 --project=. julia Tutorial_Chmy_MPI.jl

The full file can be downloaded [here](../../../tutorials/Tutorial_Chmy_MPI.jl)

---

*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*
