
MPI Friendly GC #178

@jkozdon

Description

We should consider implementing what GridapPETSc.jl has done for GC with MPI objects.

Basically, the Julia finalizer registers the object for destruction with PetscObjectRegisterDestroy; see, for example, PETScLinearSolverNS.
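
For illustration, here is a minimal sketch of that pattern. The names PetscVecWrapper, register_finalizer!, and the PetscObjectRegisterDestroy wrapper are placeholders, not the actual GridapPETSc / PETSc.jl API: the finalizer does not call the collective destroy routine itself, it only queues the handle, and the actual (collective) destruction is deferred to PetscObjectRegisterDestroyAll / PetscFinalize.

# Minimal sketch (hypothetical wrapper type and bindings; the real code lives
# in GridapPETSc and the PETSc C API). The finalizer does not destroy the
# object directly, it only registers it for deferred destruction.
mutable struct PetscVecWrapper
  ptr::Ptr{Cvoid}              # underlying PETSc Vec handle
end

function register_finalizer!(v::PetscVecWrapper)
  finalizer(v) do obj
    # Deferred clean-up: the handle is queued here and is actually freed
    # later by PetscObjectRegisterDestroyAll() or PetscFinalize().
    if obj.ptr != C_NULL
      PetscObjectRegisterDestroy(obj.ptr)   # assumed wrapper around the C routine
      obj.ptr = C_NULL
    end
    nothing
  end
  return v
end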

Of course, this means the object is not destroyed until PETSc is finalized. If the user wants to destroy things sooner, they can call the function gridap_petsc_gc:

# In an MPI environment context,
# this function has global collective semantics.
function gridap_petsc_gc()
  GC.gc()
  @check_error_code PETSC.PetscObjectRegisterDestroyAll()
end

By first calling GC.gc(), all objects will be properly registered via PetscObjectRegisterDestroy, and the call to PetscObjectRegisterDestroyAll actually destroys them.
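
To make the collective-semantics point concrete, a hypothetical usage pattern could look like the following (user code, not part of the proposal):

# Hypothetical user code: because gridap_petsc_gc() has collective semantics,
# every MPI rank must reach this call, e.g. at a synchronization point between
# solves, and not inside rank-dependent branches.
using MPI
MPI.Init()

# ... create, use, and drop references to PETSc objects ...

gridap_petsc_gc()   # GC.gc() registers dead objects, then they are all destroyed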

The only change I would make is to still allow manual destruction of objects if this is desired for performance reasons (though I don't know whether this is ever really needed).
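
A rough sketch of what such a manual escape hatch could look like, reusing the hypothetical PetscVecWrapper from above (VecDestroy stands in for the relevant collective destroy wrapper; names are illustrative only):

# Illustrative only: eagerly destroy an object instead of waiting for
# gridap_petsc_gc() / PetscFinalize(). The destroy is collective, so every
# rank owning the object must execute this call.
function manual_destroy!(v::PetscVecWrapper)
  if v.ptr != C_NULL
    VecDestroy(Ref(v.ptr))   # assumed wrapper around PETSc's collective VecDestroy
    v.ptr = C_NULL           # finalizer then sees a null handle and does nothing
  end
  return nothing
end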

h/t: @amartinhuertas in #146 (comment)
