@@ -447,47 +447,43 @@ Interacting with Mesh Data
It is common to want to have the mesh communicate information to the particles
and vice versa. For example, in Particle-in-Cell calculations, the particles
deposit their charges onto the mesh, and later, the electric fields computed on
- the mesh are interpolated back to the particles. Below, we show examples of
- both these sorts of operations.
+ the mesh are interpolated back to the particles.
+
+ To help perform these sorts of operations, we provide the :cpp:`ParticleToMesh`
+ and :cpp:`MeshToParticle` functions. These functions operate on an entire
+ :cpp:`ParticleContainer` at once, interpolating data back and forth between
+ the particles and an input :cpp:`MultiFab`. A user-provided lambda function
+ specifies the kind of interpolation to perform. Any needed parallel communication
+ (from particles that contribute some of their weight to guard cells, for example)
+ is performed internally. Additionally, these functions support both a single-grid
+ formalism (the particles and the mesh use the same boxes and distribution mappings)
+ and a dual-grid formalism (the particles and mesh have different layouts). In the
+ latter case, the needed parallel communication is also performed internally; a
+ sketch of the dual-grid setup follows the first example below.
+
+ We show examples of these types of operations below. The first snippet shows
+ how to deposit a particle quantity from the first real component of the particle
+ data to the first component of a :cpp:`MultiFab` using linear interpolation.

.. highlight:: c++

::


-     Ex.FillBoundary(gm.periodicity());
-     Ey.FillBoundary(gm.periodicity());
-     Ez.FillBoundary(gm.periodicity());
-     for (MyParIter pti(MyPC, lev); pti.isValid(); ++pti) {
-         const Box& box = pti.validbox();
+     const auto plo = geom.ProbLoArray();
+     const auto dxi = geom.InvCellSizeArray();
+     amrex::ParticleToMesh(myPC, partMF, 0,
+         [=] AMREX_GPU_DEVICE (const MyParticleContainer::ParticleTileType::ConstParticleTileDataType& ptd, int i,
+                               amrex::Array4<amrex::Real> const& rho)
+         {
+             auto p = ptd[i];
+             ParticleInterpolator::Linear interp(p, plo, dxi);

-         const auto& particles = pti.GetArrayOfStructs();
-         int nstride = particles.dataShape().first;
-         const long np = pti.numParticles();
-
-         const FArrayBox& exfab = Ex[pti];
-         const FArrayBox& eyfab = Ey[pti];
-         const FArrayBox& ezfab = Ez[pti];
-
-         interpolate_cic(particles.data(), nstride, np,
-                         exfab.dataPtr(), eyfab.dataPtr(), ezfab.dataPtr(),
-                         box.loVect(), box.hiVect(), plo, dx, &ng);
-     }
-
- Here, :fortran:`interpolate_cic` is a Fortran subroutine that actually performs
- the interpolation on a single box. :cpp:`Ex`, :cpp:`Ey`, and :cpp:`Ez` are
- MultiFabs that contain the electric field data. These MultiFabs must be defined
- with the correct number of ghost cells to perform the desired type of
- interpolation, and we call :cpp:`FillBoundary` prior to the Fortran call so
- that those ghost cells will be up-to-date.
-
- In this example, we have assumed that the :cpp:`ParticleContainer MyPC` has
- been defined on the same grids as the electric field MultiFabs, so that we use
- the :cpp:`ParIter` to index into the MultiFabs to get the data associated with
- the current tile. If this is not the case, then an additional copy will need to
- be performed. However, if the particles are distributed in an extremely uneven
- fashion, it is possible that the load balancing improvements associated with
- the two-grid approach are worth the cost of the extra copy.
+             interp.ParticleToMesh(p, rho, 0, 0, 1,
+                 [=] AMREX_GPU_DEVICE (const MyParticleContainer::ParticleType& part, int comp)
+                 {
+                     return part.rdata(comp);
+                 });
+         });

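+ In the dual-grid case, the :cpp:`MultiFab` passed to :cpp:`ParticleToMesh` can live
+ on a completely different set of boxes than the particles. As a rough sketch (the
+ names ``mesh_ba`` and ``mesh_dm`` are placeholders for a mesh :cpp:`BoxArray` and
+ :cpp:`DistributionMapping` that are not defined in the snippets here), a deposition
+ onto a separately load-balanced mesh might look like:
+
+ .. highlight:: c++
+
+ ::
+
+     // A MultiFab defined on the mesh's own BoxArray / DistributionMapping,
+     // with one ghost cell so the linear deposition stencil can reach past
+     // box boundaries (the guard-cell communication is handled internally,
+     // as noted above).
+     amrex::MultiFab mesh_rho(mesh_ba, mesh_dm, 1, 1);
+
+     // The call itself is identical to the single-grid example above; the
+     // copy between the particle and mesh layouts also happens internally.
+     amrex::ParticleToMesh(myPC, mesh_rho, 0,
+         [=] AMREX_GPU_DEVICE (const MyParticleContainer::ParticleTileType::ConstParticleTileDataType& ptd, int i,
+                               amrex::Array4<amrex::Real> const& rho)
+         {
+             auto p = ptd[i];
+             ParticleInterpolator::Linear interp(p, plo, dxi);
+             interp.ParticleToMesh(p, rho, 0, 0, 1,
+                 [=] AMREX_GPU_DEVICE (const MyParticleContainer::ParticleType& part, int comp)
+                 {
+                     return part.rdata(comp);
+                 });
+         });
+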
The inverse operation, in which the mesh communicates data *to* the particles,
is quite similar:
@@ -497,33 +493,56 @@ is quite similar:
::


-     rho.setVal(0.0, ng);
-     for (MyParIter pti(*this, lev); pti.isValid(); ++pti) {
-         const Box& box = pti.validbox();
+     amrex::MeshToParticle(myPC, acceleration, 0,
+         [=] AMREX_GPU_DEVICE (MyParticleContainer::ParticleType& p,
+                               amrex::Array4<const amrex::Real> const& acc)
+         {
+             ParticleInterpolator::Linear interp(p, plo, dxi);
+
+             interp.MeshToParticle(p, acc, 0, 1+AMREX_SPACEDIM, AMREX_SPACEDIM,
+                 [=] AMREX_GPU_DEVICE (amrex::Array4<const amrex::Real> const& arr,
+                                       int i, int j, int k, int comp)
+                 {
+                     return arr(i, j, k, comp);  // no weighting
+                 },
+                 [=] AMREX_GPU_DEVICE (MyParticleContainer::ParticleType& part,
+                                       int comp, amrex::Real val)
+                 {
+                     part.rdata(comp) += ParticleReal(val);
+                 });
+         });
+
+ In this case, we linearly interpolate `AMREX_SPACEDIM` values starting from the 0th
+ component of the input :cpp:`MultiFab` to the particles, writing them into the
+ particle's real data starting at component `1+AMREX_SPACEDIM`. Note that
+ :cpp:`ParticleInterpolator::MeshToParticle` takes *two* lambda functions: one that
+ generates the mesh quantity to interpolate, and another that specifies how to
+ update the particle value.
+
+ Finally, the snippet below shows how to use :cpp:`ParticleToMesh` to simply count
+ the number of particles in each cell (i.e., to deposit using "nearest neighbor"
+ interpolation):

-         const auto& particles = pti.GetArrayOfStructs();
-         int nstride = particles.dataShape().first;
-         const long np = pti.numParticles();
-
-         FArrayBox& rhofab = (*rho[lev])[pti];
+ .. highlight:: c++

-         deposit_cic(particles.data(), nstride, np, rhofab.dataPtr(),
-                     box.loVect(), box.hiVect(), plo, dx);
-     }
+ ::

-     rho.SumBoundary(gm.periodicity());
+     amrex::ParticleToMesh(myPC, partiMF, 0,
+         [=] AMREX_GPU_DEVICE (const MyParticleContainer::SuperParticleType& p,
+                               amrex::Array4<int> const& count)
+         {
+             ParticleInterpolator::Nearest interp(p, plo, dxi);

- As before, we loop over all our particles, calling a Fortran routine that
- deposits them on to the appropriate :cpp:`FArrayBox rhofab`. The :cpp:`rhofab`
- must have enough ghost cells to cover the support of all the particles
- associated with them. Note that we call :cpp:`SumBoundary` instead of
- :cpp:`FillBoundary` after performing the deposition, to add up the charge in
- the ghost cells surrounding each Fab into the corresponding valid cells.
+             interp.ParticleToMesh(p, count, 0, 0, 1,
+                 [=] AMREX_GPU_DEVICE (const MyParticleContainer::ParticleType& /*p*/, int /*comp*/) -> int
+                 {
+                     return 1;  // just count the particles per cell
+                 });
+         });

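+ Here, ``partiMF`` is assumed to be an :cpp:`iMultiFab`, i.e. an integer-valued
+ analogue of a :cpp:`MultiFab`. A minimal sketch of how it might be set up, with
+ ``ba`` and ``dm`` standing in for whatever :cpp:`BoxArray` and
+ :cpp:`DistributionMapping` the counts should live on, is:
+
+ .. highlight:: c++
+
+ ::
+
+     // One integer component; nearest-neighbor deposition only touches the
+     // cell containing each particle, so no ghost cells are used in this sketch.
+     amrex::iMultiFab partiMF(ba, dm, 1, 0);
+     partiMF.setVal(0);  // start the counts from zero
+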
- For a complete example of an electrostatic PIC calculation that includes static
- mesh refinement, please see the `Electrostatic PIC tutorial`_.
+ For more complex examples of interacting with mesh data, we refer readers to
+ our `Electromagnetic PIC tutorial <https://github.com/AMReX-Codes/amrex-tutorials/tree/main/ExampleCodes/Particles/ElectromagneticPIC>`_.

- .. _`Electrostatic PIC tutorial`: https://amrex-codes.github.io/amrex/tutorials_html/Particles_Tutorial.html#electrostaticpic
+ For a complete example of an electrostatic PIC calculation that includes static
+ mesh refinement, please see the `Electrostatic PIC tutorial <https://github.com/AMReX-Codes/amrex-tutorials/tree/main/ExampleCodes/Particles/ElectrostaticPIC>`_.

.. _sec:Particles:ShortRange:
