Conversation
Pull request overview
Adds absorbing boundary condition support to the dynamic time-evolution path, expanding the code’s ability to model particle emission / outgoing flux behavior at the grid edges.
Changes:
- Introduces a new `abso_bc` module implementing absorbing masks, loss accumulation, and related output/restart handling.
- Extends the `dynamic` namelist with `nabsorb` and wires in a call to the absorber logic.
- Updates the build to compile/link the new module and adjusts library linkage.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| Code/dynamic.f90 | Adds `nabsorb` input + integrates an `absbc` call into the dynamics flow. |
| Code/abso_bc.f90 | New module implementing absorbing boundary conditions and associated output/restart routines. |
| Code/Makefile | Adds `abso_bc.o` to build dependencies and modifies library link flags. |
```fortran
DO nst = 1,nstloc
   iqa = isospin(globalindex(nst))
   weight = nocc*wocc(nst)
   DO iz=1,NZ; DO iy=1,NY; DO ix=1,NX
      IF(tgridabso(ix,iy,iz)) THEN
         rhoabso(ix,iy,iz,iqa) = rhoabso(ix,iy,iz,iqa) + &
              weight*spherloss(ix,iy,iz)*( &
              abs2(psi(ix,iy,iz,1,nst))+abs2(psi(ix,iy,iz,2,nst)) )
```
In `absbc`, `nst` is a local state index (1..nstloc), but `wocc` is dimensioned over global states (1..nstmax). Using `wocc(nst)` will apply the wrong occupation probabilities in MPI runs; this should use the same global mapping as `isospin(globalindex(nst))` (e.g., `wocc(globalindex(nst))`).
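A minimal sketch of the suggested fix, assuming `globalindex` provides the same local→global mapping already used for the `isospin` lookup:

```fortran
DO nst = 1,nstloc
   iqa = isospin(globalindex(nst))
   ! use the global state index for the occupation as well, so that
   ! MPI runs pick up the correct occupation probability for this state
   weight = nocc*wocc(globalindex(nst))
```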
```fortran
DO nst=1,nstmax
   IF(nstmax > 9999) STOP 'ERROR: Too many states for escmaskOrb'
   IF(mpi_myproc == node(nst)) THEN
      WRITE(scratchline,'(i5)') 10000+nst
      READ(scratchline,'(a)') readstring
      str = readstring(2:5)
      OPEN(588,STATUS='unknown', FILE='escmaskOrb.'//str)
      WRITE(588,'(a,f12.4,a,i5,/a,3i5/a)') &
           '# distribution of escaped nucleon at time=',timea,&
           ' for state=',nst, &
           '#nx,ny,nz:',NX,NY,NZ, &
           '# x y z loss of density'
      DO iz=1,NZ; DO iy=1,NY; DO ix=1,NX
         WRITE(588,'(3f12.4,1e17.7)') &
              x(ix),y(iy),z(iz),rhoescmaskorb(ix,iy,iz,node(nst))
      END DO; END DO; END DO
```
In `escmask`, `rhoescmaskorb` is allocated as `(NX,NY,NZ,nstloc)` and populated with the local-state index `nst=1..nstloc`, but the print path indexes it with `node(nst)` (the MPI rank) while looping `nst=1..nstmax`. This will be out-of-bounds / wrong data for MPI (rank values are not local indices). Use the global→local mapping (e.g., `localindex(nst)`) when indexing `rhoescmaskorb` on the owning rank.
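A sketch of the suggested indexing fix; the name `localindex` is assumed here to be the inverse of `globalindex`, as hinted in the comment above:

```fortran
! on the owning rank, index rhoescmaskorb with the local state number,
! not the rank id: the array is dimensioned (NX,NY,NZ,nstloc)
DO iz=1,NZ; DO iy=1,NY; DO ix=1,NX
   WRITE(588,'(3f12.4,1e17.7)') &
        x(ix),y(iy),z(iz),rhoescmaskorb(ix,iy,iz,localindex(nst))
END DO; END DO; END DO
```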
```fortran
IF(tmpi .AND. (MOD(ita,mprint)==1 .OR. MOD(ita,mrest)==0 .OR. &
     MOD(ita,jescmask) == 0 .OR. ita == nta) ) THEN
   ALLOCATE(rhoabso_all(NX,NY,NZ,2))
   CALL mpi_barrier (mpi_comm_world, mpi_ierror)
   CALL mpi_allreduce(rhoabso,rhoabso_all,2*NX*NY*NZ, &
        mpi_double_precision,mpi_sum, &
        mpi_comm_world,mpi_ierror)
   IF(MOD(ita,mprint)==1) CALL nescape(rhoabso_all)
   CALL escmask(rhoabso_all)
ELSE
   IF(MOD(ita,mprint)==1) CALL nescape(rhoabso)
   CALL escmask(rhoabso)
END IF
!
! prints wavefunction on measuring points on file
!
! IF (ipes /= 0) CALL evalMP() ! not yet implemented
!
IF(MOD(ita,mrest)==0) THEN
   IF(tmpi) THEN
      CALL wrtabso(ita,rhoabso_all)
   ELSE
      CALL wrtabso(ita,rhoabso)
   END IF
```
`rhoabso_all` is only allocated inside the `IF(tmpi .AND. (...))` branch, but later `wrtabso(ita,rhoabso_all)` is called whenever `MOD(ita,mrest)==0` and `tmpi` is true. If the earlier branch didn't run, this passes an unallocated array and will crash. Guard the `wrtabso` call with `ALLOCATED(rhoabso_all)`, or always allocate and allreduce when `tmpi` is set and `mrest` output is needed.
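One possible guard, keeping the existing control flow; this is only a sketch of the first option (checking `ALLOCATED`), not a complete fix:

```fortran
IF(MOD(ita,mrest)==0) THEN
   IF(tmpi) THEN
      ! only pass the reduced field if the earlier branch allocated it
      IF(ALLOCATED(rhoabso_all)) THEN
         CALL wrtabso(ita,rhoabso_all)
      ELSE
         ! otherwise allocate and allreduce rhoabso into rhoabso_all
         ! here before calling wrtabso (second option in the comment)
      END IF
   ELSE
      CALL wrtabso(ita,rhoabso)
   END IF
END IF
```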
```fortran
! calculate mean fields and external fields
IF(nabsorb > 0) CALL absbc(nabsorb,iter,nt,time)
CALL skyrme
IF(text_timedep) CALL extfld(0.D0)
```
This is currently the only call site for `absbc`, and it runs once during the preparation phase (before the time-stepping loop). That means the absorbing boundary condition is not applied each iteration as described, and it also mutates `psi` after densities have already been computed, so `skyrme` uses a `rho` that no longer matches the wavefunctions. Consider moving the `absbc` application into the per-iteration loop at a point where densities will be recomputed from the masked `psi` (or recompute densities immediately after masking).
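A sketch of the suggested reordering inside the time-stepping loop; the density-recomputation step is only indicated as a comment, since the relevant routine name in this code base is not shown in the diff:

```fortran
! per-iteration application of the absorbing mask:
! mask psi first, then rebuild densities so that skyrme
! sees a rho consistent with the masked wavefunctions
IF(nabsorb > 0) THEN
   CALL absbc(nabsorb,iter,nt,time)
   ! recompute rho from the masked psi here before the mean field
END IF
CALL skyrme
```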
@copilot open a new pull request to apply changes based on this feedback
```fortran
IF(mpi_myproc==0) THEN
   OPEN(UNIT=scratch2,FILE=wffile(1:lenstr)//'.a',STATUS='REPLACE', &
        FORM='UNFORMATTED')
   WRITE(scratch2) rhoabsoprint
ENDIF
! now if we are running with one processor, also write the wave functions
! onto that file; otherwise open new individualized file
!
IF(ifabsorbital==1) THEN
   IF(mpi_nprocs>1) THEN
      IF(mpi_myproc/=0) CLOSE(UNIT=scratch2,STATUS='KEEP')
      WRITE(rsfp,'(I3.3,''.'',A)') mpi_myproc,wffile(1:lenstr)//'.a'
      OPEN(UNIT=scratch2,FILE=rsfp,STATUS='REPLACE', &
           FORM='UNFORMATTED')
   ENDIF
   !
   DO nst = 1,nstloc
      WRITE(scratch2) rhoescmaskOrb(:,:,:,nst)
   ENDDO
END IF
CLOSE(UNIT=scratch2,STATUS='KEEP')
```
`wrtabso`'s MPI file handling doesn't mirror `write_wavefunctions`: only rank 0 opens `wffile.a`, but in the `mpi_nprocs>1` branch other ranks call `CLOSE(scratch2)` even though they never opened it, and rank 0 reuses the same unit for a different file without an explicit close. This is likely to fail at runtime on some compilers and/or produce inconsistent output files; consider following the same open/close pattern as `write_wavefunctions` (close on rank 0 when switching, and only close units that were opened on that rank).
Suggested change (replacing the block quoted above):

```fortran
!
! Rank 0: write accumulated absorption to the main restart file.
!
IF(mpi_myproc==0) THEN
   OPEN(UNIT=scratch2,FILE=wffile(1:lenstr)//'.a',STATUS='REPLACE', &
        FORM='UNFORMATTED')
   WRITE(scratch2) rhoabsoprint
   !
   ! If requested, also write orbital-resolved information.
   ! For mpi_nprocs==1, keep writing into the same file.
   ! For mpi_nprocs>1, switch to an individualized file for rank 0.
   !
   IF(ifabsorbital==1) THEN
      IF(mpi_nprocs>1) THEN
         CLOSE(UNIT=scratch2,STATUS='KEEP')
         WRITE(rsfp,'(I3.3,''.'',A)') mpi_myproc,wffile(1:lenstr)//'.a'
         OPEN(UNIT=scratch2,FILE=rsfp,STATUS='REPLACE', &
              FORM='UNFORMATTED')
      ENDIF
      DO nst = 1,nstloc
         WRITE(scratch2) rhoescmaskOrb(:,:,:,nst)
      ENDDO
   ENDIF
   CLOSE(UNIT=scratch2,STATUS='KEEP')
ELSE
   !
   ! Other ranks: only write orbital-resolved information if requested.
   ! They never touch the main restart file.
   !
   IF(ifabsorbital==1) THEN
      WRITE(rsfp,'(I3.3,''.'',A)') mpi_myproc,wffile(1:lenstr)//'.a'
      OPEN(UNIT=scratch2,FILE=rsfp,STATUS='REPLACE', &
           FORM='UNFORMATTED')
      DO nst = 1,nstloc
         WRITE(scratch2) rhoescmaskOrb(:,:,:,nst)
      ENDDO
      CLOSE(UNIT=scratch2,STATUS='KEEP')
   ENDIF
ENDIF
```
```fortran
OPEN(UNIT=scratch2,FILE=wffile(1:lenstr)//'.a',STATUS='OLD', &
     FORM='UNFORMATTED')
READ(scratch2) rhoabso
ENDIF
```
`readabso` reads `rhoabso` only on `mpi_myproc==0`, but `rhoabso` is later accumulated independently on every rank and then summed with `mpi_allreduce` for reporting. On restart this makes the initial `rhoabso` inconsistent across ranks (rank 0 starts with the full accumulated field, the others with zeros), which will lead to incorrect totals / double counting. For MPI restarts, either store/read per-rank absorption data (analogous to the wavefunction files) or broadcast a consistent representation to all ranks (and adjust the accumulation scheme accordingly).
Suggested change:

```fortran
   ENDIF
ENDIF
!
! ensure rhoabso is consistent across all MPI ranks after restart:
! broadcast the field read on rank 0 to all other ranks
IF (mpi_nprocs > 1) THEN
   CALL MPI_BCAST(rhoabso, SIZE(rhoabso), MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
ENDIF
```
```fortran
LOGICAL :: wflagabs=.true.

ALLOCATE(xmask(NX))
ALLOCATE(ymask(NY))
ALLOCATE(zmask(NZ))

! prepare mask functions in each direction separately

zmask = 1D0
DO iz = 1,nabso
   zmask(iz) = COS(pi*0.5D0*(nabso+1.0-iz)/nabso)**powabs
   zmask(NZ+1-iz) = zmask(iz)
END DO

ymask = 1D0
DO iy = 1,nabso
   ymask(iy) = COS(pi*0.5D0*(nabso+1.0-iy)/nabso)**powabs
   ymask(NY+1-iy) = ymask(iy)
END DO

xmask = 1D0
DO ix = 1,nabso
   xmask(ix) = COS(pi*0.5D0*(nabso+1.0D0-ix)/nabso)**powabs
   xmask(NX+1-ix) = xmask(ix)
END DO

IF(mpi_myproc == 0 .AND. wflagabs) THEN
   WRITE(6,'(a)') ' ZMASK:'
   WRITE(6,'(1x,5(1pg12.4))') zmask
   WRITE(6,'(a)') ' YMASK:'
   WRITE(6,'(1x,5(1pg12.4))') ymask
   WRITE(6,'(a)') ' XMASK:'
   WRITE(6,'(1x,5(1pg12.4))') xmask
END IF
```
`init_abso` will always dump the full `XMASK`/`YMASK`/`ZMASK` arrays to stdout because `wflagabs` is hardcoded to `.TRUE.`. For typical grid sizes this can produce very large logs in production runs. Consider gating this output behind the existing `wflag` and/or an input switch (e.g., a namelist parameter) so it's opt-in.
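A minimal sketch of the suggested gating; `wflag` is assumed to be the code's existing global print switch, and `tprintmask` is a hypothetical new opt-in namelist parameter:

```fortran
LOGICAL :: tprintmask = .FALSE.   ! hypothetical namelist switch, off by default

! print the mask profiles only when explicitly requested
IF(mpi_myproc == 0 .AND. wflag .AND. tprintmask) THEN
   WRITE(6,'(a)') ' ZMASK:'
   WRITE(6,'(1x,5(1pg12.4))') zmask
END IF
```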
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@kylegodbey I've opened a new pull request, #5, to work on those changes. Once the pull request is ready, I'll request review from you.
Co-authored-by: kylegodbey <5806808+kylegodbey@users.noreply.github.com>
@aaron-philip Will you take a look at the MPI issues Copilot is flagging? I think the other issues are handled at this point, but I haven't looked at the MPI implementation in a while, so I'm unsure which indices are global/local.
Move `absbc` into per-iteration loop with density recomputation
@kylegodbey Sure, I can try; this will take more time though, as I'm not too familiar either. I didn't modify abso_bc.f90, so my suspicion is that if Copilot is right, those issues have been there for years...
Alright, let's ignore those for now and open an issue about a careful refactor of the MPI logic for absorbing BCs. It's true that MPI would rarely be used with absorbing BCs, since your particles would just evaporate if you did a crust simulation.
This pull request introduces support for absorbing boundary conditions in the dynamic calculation module. This expands the multipole response capabilities.
The main changes include adding a new module (`abso_bc`), updating the build system to include the new source file, and extending the input parameters and logic in `dynamic.f90` to utilize the absorbing boundary condition if specified.

Support for absorbing boundary conditions:

- Added the `abso_bc` module to `dynamic.f90` and updated the object dependencies in the `Makefile` to include the new `abso_bc.o` file. (Code/Makefile, Code/dynamic.f90)
- Added the `nabsorb` parameter to the `dynamic` namelist, allowing users to specify the number of absorbing points via input. The value is read and displayed in the input summary. (Code/dynamic.f90)
- Added a call to the absorbing boundary routine (`absbc`) when `nabsorb > 0`, applying the boundary at each iteration. (Code/dynamic.f90)

Build system updates:

- Updated the `Makefile` to ensure correct compilation and linkage of the new `abso_bc.o` module. (Code/Makefile)
- Added `-lscalapack` to the `LIBS_SKY` variable for sequential and OpenMP builds to ensure proper linking with the necessary libraries. (Code/Makefile)