
Implement absorbing conditions #4

Open

kylegodbey wants to merge 9 commits into main from dev

Conversation

@kylegodbey
Member

This pull request introduces support for absorbing boundary conditions in the dynamic calculation module. This expands the multipole response capabilities.

The main changes include adding a new module (abso_bc), updating the build system to include the new source file, and extending the input parameters and logic in dynamic.f90 to utilize the absorbing boundary condition if specified.

Support for absorbing boundary conditions:

  • Added the abso_bc module to dynamic.f90 and updated the object dependencies in the Makefile to include the new abso_bc.o file. (Code/Makefile, Code/dynamic.f90)
  • Introduced the nabsorb parameter to the dynamic namelist, allowing users to specify the number of absorbing points via input. The value is read and displayed in the input summary. (Code/dynamic.f90)
  • Modified the main time evolution routine to call the absorbing boundary condition subroutine (absbc) when nabsorb > 0, applying the boundary at each iteration. (Code/dynamic.f90)
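As a sketch, an input file exercising the new parameter might look like the following; only nabsorb comes from this PR, while the surrounding namelist entries are placeholders and not taken from the code:

```fortran
&dynamic
  nt = 1000,       ! placeholder: number of time steps
  dt = 0.2,        ! placeholder: time step size
  nabsorb = 8      ! number of absorbing points at each box edge (new in this PR)
/
```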

Build system updates:

  • Updated the static and dynamic object dependencies in the Makefile to ensure correct compilation and linkage of the new abso_bc.o module. (Code/Makefile)
  • Added -lscalapack to the LIBS_SKY variable for sequential and OpenMP builds to ensure proper linking with the necessary libraries. (Code/Makefile)

Copilot AI review requested due to automatic review settings March 11, 2026 19:40

Copilot AI left a comment


Pull request overview

Adds absorbing boundary condition support to the dynamic time-evolution path, expanding the code’s ability to model particle emission / outgoing flux behavior at the grid edges.

Changes:

  • Introduces a new abso_bc module implementing absorbing masks, loss accumulation, and related output/restart handling.
  • Extends the dynamic namelist with nabsorb and wires in a call to the absorber logic.
  • Updates the build to compile/link the new module and adjusts library linkage.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 9 comments.

File | Description
Code/dynamic.f90 | Adds nabsorb input + integrates an absbc call into the dynamics flow.
Code/abso_bc.f90 | New module implementing absorbing boundary conditions and associated output/restart routines.
Code/Makefile | Adds abso_bc.o to build dependencies and modifies library link flags.


Comment on lines +95 to +102
DO nst = 1,nstloc
iqa = isospin(globalindex(nst))
weight = nocc*wocc(nst)
DO iz=1,NZ; DO iy=1,NY; DO ix=1,NX
IF(tgridabso(ix,iy,iz)) THEN
rhoabso(ix,iy,iz,iqa) = rhoabso(ix,iy,iz,iqa) + &
weight*spherloss(ix,iy,iz)*( &
abs2(psi(ix,iy,iz,1,nst))+abs2(psi(ix,iy,iz,2,nst)) )

Copilot AI Mar 11, 2026


In absbc, nst is a local state index (1..nstloc), but wocc is dimensioned over global states (1..nstmax). Using wocc(nst) will apply the wrong occupation probabilities in MPI runs; this should use the same global mapping as isospin(globalindex(nst)) (e.g., wocc(globalindex(nst))).
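The mapping issue can be illustrated with a small Python sketch; the names wocc and globalindex mirror the Fortran identifiers, but the ownership layout and occupation numbers below are invented for illustration:

```python
# Two ranks each own a slice of nstmax = 4 global states. wocc is indexed by
# *global* state, while nst inside absbc runs over *local* states (1..nstloc).

wocc = [1.0, 1.0, 0.5, 0.0]       # occupation per global state (1-based in Fortran)
owned = {0: [1, 2], 1: [3, 4]}    # hypothetical global indices owned by each rank

def globalindex(rank, nst):
    """Map a 1-based local state index to its 1-based global index."""
    return owned[rank][nst - 1]

rank, nst = 1, 1                  # first *local* state on rank 1 = global state 3
wrong = wocc[nst - 1]                        # wocc(nst): silently reads global state 1
right = wocc[globalindex(rank, nst) - 1]     # wocc(globalindex(nst)): correct lookup

assert wrong == 1.0 and right == 0.5         # the two lookups disagree on rank 1
```

On rank 0 the two indexings coincide, which is why the bug would only surface in multi-rank runs.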

Comment on lines +339 to +355
DO nst=1,nstmax
IF(nstmax > 9999) STOP 'ERROR: Too many states for escmaskOrb'
IF(mpi_myproc == node(nst)) THEN
WRITE(scratchline,'(i5)') 10000+nst
READ(scratchline,'(a)') readstring
str = readstring(2:5)
OPEN(588,STATUS='unknown', FILE='escmaskOrb.'//str)
WRITE(588,'(a,f12.4,a,i5,/a,3i5/a)') &
'# distribution of escaped nucleon at time=',timea,&
' for state=',nst, &
'#nx,ny,nz:',NX,NY,NZ, &
'# x y z loss of density'

DO iz=1,NZ; DO iy=1,NY; DO ix=1,NX
WRITE(588,'(3f12.4,1e17.7)') &
x(ix),y(iy),z(iz),rhoescmaskorb(ix,iy,iz,node(nst))
END DO; END DO; END DO

Copilot AI Mar 11, 2026


In escmask, rhoescmaskorb is allocated as (NX,NY,NZ,nstloc) and populated with local-state index nst=1..nstloc, but the print path indexes it with node(nst) (MPI rank) while looping nst=1..nstmax. This will be out-of-bounds / wrong data for MPI (rank values are not local indices). Use the global→local mapping (e.g., localindex(nst)) when indexing rhoescmaskorb on the owning rank.

Comment on lines +125 to +148
IF(tmpi .AND. (MOD(ita,mprint)==1 .OR. MOD(ita,mrest)==0 .OR. &
MOD(ita,jescmask) == 0 .OR. ita == nta) ) THEN
ALLOCATE(rhoabso_all(NX,NY,NZ,2))
CALL mpi_barrier (mpi_comm_world, mpi_ierror)
CALL mpi_allreduce(rhoabso,rhoabso_all,2*NX*NY*NZ, &
mpi_double_precision,mpi_sum, &
mpi_comm_world,mpi_ierror)
IF(MOD(ita,mprint)==1) CALL nescape(rhoabso_all)
CALL escmask(rhoabso_all)
ELSE
IF(MOD(ita,mprint)==1) CALL nescape(rhoabso)
CALL escmask(rhoabso)
END IF

! prints wavefunction on measuring points on file

! IF (ipes /= 0) CALL evalMP() ! not yet implemented

IF(MOD(ita,mrest)==0) THEN
IF(tmpi) THEN
CALL wrtabso(ita,rhoabso_all)
ELSE
CALL wrtabso(ita,rhoabso)
END IF

Copilot AI Mar 11, 2026


rhoabso_all is only allocated inside the IF(tmpi .AND. (...)) branch, but later wrtabso(ita,rhoabso_all) is called whenever MOD(ita,mrest)==0 and tmpi is true. If the earlier branch didn’t run, this passes an unallocated array and will crash. Guard the wrtabso call with ALLOCATED(rhoabso_all) or always allocate+allreduce when tmpi and mrest output is needed.
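The suggested ALLOCATED guard can be sketched in Python, using None as the analogue of an unallocated array (all names are illustrative, not from the code):

```python
def wrtabso_guarded(tmpi, rhoabso, rhoabso_all=None):
    """Restart write path: use the reduced array only if it was actually
    produced this iteration; otherwise fall back to the local one."""
    if tmpi and rhoabso_all is not None:   # Fortran: tmpi .AND. ALLOCATED(rhoabso_all)
        return rhoabso_all
    return rhoabso

assert wrtabso_guarded(True, 1.0, rhoabso_all=3.0) == 3.0   # reduced array used
assert wrtabso_guarded(True, 1.0) == 1.0                    # guard avoids the crash
assert wrtabso_guarded(False, 1.0) == 1.0                   # serial path unchanged
```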

Comment on lines 226 to 229
! calculate mean fields and external fields
IF(nabsorb > 0) CALL absbc(nabsorb,iter,nt,time)
CALL skyrme
IF(text_timedep) CALL extfld(0.D0)

Copilot AI Mar 11, 2026


This is currently the only call site for absbc, and it runs once during the preparation phase (before the time-stepping loop). That means the absorbing boundary condition is not applied each iteration as described, and it also mutates psi after densities have already been computed, so skyrme uses a rho that no longer matches the wavefunctions. Consider moving the absbc application into the per-iteration loop at a point where densities will be recomputed from the masked psi (or recompute densities immediately after masking).

@kylegodbey
Member Author


@copilot open a new pull request to apply changes based on this feedback

Comment on lines +460 to +480
IF(mpi_myproc==0) THEN
OPEN(UNIT=scratch2,FILE=wffile(1:lenstr)//'.a',STATUS='REPLACE', &
FORM='UNFORMATTED')
WRITE(scratch2) rhoabsoprint
ENDIF
! now if we are running with one processor, also write the wave functions
! onto that file; otherwise open new individualized file
!
IF(ifabsorbital==1) THEN
IF(mpi_nprocs>1) THEN
IF(mpi_myproc/=0) CLOSE(UNIT=scratch2,STATUS='KEEP')
WRITE(rsfp,'(I3.3,''.'',A)') mpi_myproc,wffile(1:lenstr)//'.a'
OPEN(UNIT=scratch2,FILE=rsfp,STATUS='REPLACE', &
FORM='UNFORMATTED')
ENDIF
!
DO nst = 1,nstloc
WRITE(scratch2) rhoescmaskOrb(:,:,:,nst)
ENDDO
END IF
CLOSE(UNIT=scratch2,STATUS='KEEP')

Copilot AI Mar 11, 2026


wrtabso’s MPI file handling doesn’t mirror write_wavefunctions: only rank 0 opens wffile.a, but in the mpi_nprocs>1 branch other ranks call CLOSE(scratch2) even though they never opened it, and rank 0 reuses the same unit for a different file without an explicit close. This is likely to fail at runtime on some compilers and/or produce inconsistent output files; consider following the same open/close pattern as write_wavefunctions (close on rank 0 when switching, and only close units that were opened on that rank).

Suggested change
IF(mpi_myproc==0) THEN
OPEN(UNIT=scratch2,FILE=wffile(1:lenstr)//'.a',STATUS='REPLACE', &
FORM='UNFORMATTED')
WRITE(scratch2) rhoabsoprint
ENDIF
! now if we are running with one processor, also write the wave functions
! onto that file; otherwise open new individualized file
!
IF(ifabsorbital==1) THEN
IF(mpi_nprocs>1) THEN
IF(mpi_myproc/=0) CLOSE(UNIT=scratch2,STATUS='KEEP')
WRITE(rsfp,'(I3.3,''.'',A)') mpi_myproc,wffile(1:lenstr)//'.a'
OPEN(UNIT=scratch2,FILE=rsfp,STATUS='REPLACE', &
FORM='UNFORMATTED')
ENDIF
!
DO nst = 1,nstloc
WRITE(scratch2) rhoescmaskOrb(:,:,:,nst)
ENDDO
END IF
CLOSE(UNIT=scratch2,STATUS='KEEP')
!
! Rank 0: write accumulated absorption to the main restart file.
!
IF(mpi_myproc==0) THEN
OPEN(UNIT=scratch2,FILE=wffile(1:lenstr)//'.a',STATUS='REPLACE', &
FORM='UNFORMATTED')
WRITE(scratch2) rhoabsoprint
!
! If requested, also write orbital-resolved information.
! For mpi_nprocs==1, keep writing into the same file.
! For mpi_nprocs>1, switch to an individualized file for rank 0.
!
IF(ifabsorbital==1) THEN
IF(mpi_nprocs>1) THEN
CLOSE(UNIT=scratch2,STATUS='KEEP')
WRITE(rsfp,'(I3.3,''.'',A)') mpi_myproc,wffile(1:lenstr)//'.a'
OPEN(UNIT=scratch2,FILE=rsfp,STATUS='REPLACE', &
FORM='UNFORMATTED')
ENDIF
DO nst = 1,nstloc
WRITE(scratch2) rhoescmaskOrb(:,:,:,nst)
ENDDO
ENDIF
CLOSE(UNIT=scratch2,STATUS='KEEP')
ELSE
!
! Other ranks: only write orbital-resolved information if requested.
! They never touch the main restart file.
!
IF(ifabsorbital==1) THEN
WRITE(rsfp,'(I3.3,''.'',A)') mpi_myproc,wffile(1:lenstr)//'.a'
OPEN(UNIT=scratch2,FILE=rsfp,STATUS='REPLACE', &
FORM='UNFORMATTED')
DO nst = 1,nstloc
WRITE(scratch2) rhoescmaskOrb(:,:,:,nst)
ENDDO
CLOSE(UNIT=scratch2,STATUS='KEEP')
ENDIF
ENDIF

OPEN(UNIT=scratch2,FILE=wffile(1:lenstr)//'.a',STATUS='OLD', &
FORM='UNFORMATTED')
READ(scratch2) rhoabso
ENDIF

Copilot AI Mar 11, 2026


readabso reads rhoabso only on mpi_myproc==0, but rhoabso is later accumulated independently on every rank and then summed with mpi_allreduce for reporting. On restart this makes the initial rhoabso inconsistent across ranks (rank 0 starts with the full accumulated field, others with zeros), which will lead to incorrect totals / double counting. For MPI restarts, either store/read per-rank absorption data (analogous to wavefunction files) or broadcast a consistent representation to all ranks (and adjust the accumulation scheme accordingly).

Suggested change
ENDIF
ENDIF
!
! ensure rhoabso is consistent across all MPI ranks after restart:
! broadcast the field read on rank 0 to all other ranks
IF (mpi_nprocs > 1) THEN
CALL MPI_BCAST(rhoabso, SIZE(rhoabso), MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)
ENDIF

Comment on lines +270 to +303
LOGICAL :: wflagabs=.true.

ALLOCATE(xmask(NX))
ALLOCATE(ymask(NY))
ALLOCATE(zmask(NZ))

! prepare mask functions in each direction separately

zmask = 1D0
DO iz = 1,nabso
zmask(iz) = COS(pi*0.5D0*(nabso+1.0-iz)/nabso)**powabs
zmask(NZ+1-iz) = zmask(iz)
END DO

ymask = 1D0
DO iy = 1,nabso
ymask(iy) = COS(pi*0.5D0*(nabso+1.0-iy)/nabso)**powabs
ymask(NY+1-iy) = ymask(iy)
END DO

xmask = 1D0
DO ix = 1,nabso
xmask(ix) = COS(pi*0.5D0*(nabso+1.0D0-ix)/nabso)**powabs
xmask(NX+1-ix) = xmask(ix)
END DO

IF(mpi_myproc == 0 .AND. wflagabs) THEN
WRITE(6,'(a)') ' ZMASK:'
WRITE(6,'(1x,5(1pg12.4))') zmask
WRITE(6,'(a)') ' YMASK:'
WRITE(6,'(1x,5(1pg12.4))') ymask
WRITE(6,'(a)') ' XMASK:'
WRITE(6,'(1x,5(1pg12.4))') xmask
END IF

Copilot AI Mar 11, 2026


init_abso will always dump full XMASK/YMASK/ZMASK arrays to stdout because wflagabs is hardcoded to .true.. For typical grid sizes this can produce very large logs in production runs. Consider gating this output behind the existing wflag and/or an input switch (e.g., a namelist parameter) so it’s opt-in.

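The cosine mask built in init_abso can be reproduced in a few lines of Python to sanity-check its shape (the grid size and powabs value below are arbitrary choices, not from the code):

```python
import math

def mask_1d(n, nabso, powabs):
    """Cosine^powabs absorbing mask: 1 in the interior, tapering toward 0 at both edges."""
    m = [1.0] * n
    for i in range(1, nabso + 1):   # 1-based loop, as in the Fortran source
        m[i - 1] = math.cos(math.pi * 0.5 * (nabso + 1.0 - i) / nabso) ** powabs
        m[n - i] = m[i - 1]         # mirror onto the far edge
    return m

zmask = mask_1d(n=16, nabso=4, powabs=4)

assert zmask == zmask[::-1]                  # symmetric about the box centre
assert all(v == 1.0 for v in zmask[4:12])    # interior points untouched
assert zmask[0] < zmask[3] < 1.0             # monotonic taper at the edge
```

For a production-sized grid the three printed arrays are indeed long, which is why gating the dump behind wflag or a namelist switch, as suggested, seems reasonable.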
kylegodbey and others added 3 commits March 11, 2026 15:52
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

Copilot AI commented Mar 11, 2026

@kylegodbey I've opened a new pull request, #5, to work on those changes. Once the pull request is ready, I'll request review from you.

Copilot AI and others added 3 commits March 11, 2026 20:02
Co-authored-by: kylegodbey <5806808+kylegodbey@users.noreply.github.com>
@kylegodbey
Member Author

@aaron-philip Will you take a look at the MPI issues copilot is flagging? I think the other issues are handled at this point, but I haven't looked at the MPI implementation in a while so I'm unsure which indices are global/local.

Move `absbc` into per-iteration loop with density recomputation
@aaron-philip
Collaborator

> @aaron-philip Will you take a look at the MPI issues copilot is flagging? I think the other issues are handled at this point, but I haven't looked at the MPI implementation in a while so I'm unsure which indices are global/local.

@kylegodbey Sure I can try, this will take more time though as I'm not too familiar either. I didn't modify abso_bc.f90 so my suspicion is if Copilot is right, those issues have been there for years...

@kylegodbey
Member Author

> @aaron-philip Will you take a look at the MPI issues copilot is flagging? I think the other issues are handled at this point, but I haven't looked at the MPI implementation in a while so I'm unsure which indices are global/local.
>
> @kylegodbey Sure I can try, this will take more time though as I'm not too familiar either. I didn't modify abso_bc.f90 so my suspicion is if Copilot is right, those issues have been there for years...

Alright, let's ignore those for now and make an issue regarding a careful refactor of the MPI logic for absorbing BCs. It's true that MPI would rarely be used with absorbing BCs, since your particles would just evaporate if you did a crust simulation.
