Solvers and Solveroutines

Robert Fennis edited this page Jul 13, 2025 · 4 revisions

Introduction

In the EMerge FEM library, the way a problem is solved can be configured by the user. The current architecture is as follows.

Architecture

The Simulation3D model that you create (see the introduction) has a physics attribute, which is an instance of the Electrodynamics3D class. That class in turn holds a SolveRoutine instance. You can access it as follows:

model = emerge.Simulation3D('demo')
model.mw.solveroutine

The solve routine and all other solver classes are defined in the file emerge/solver.py. A SolveRoutine contains the general information on how a problem is solved.

In the finite element method there are two types of problems that are solved:

  • The linear system Ax = b, solved for the vector x. This is the case for frequency-domain simulations.
  • The generalized eigenvalue problem Ax = qBx, solved for the eigenvalues q and eigenvectors x.
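Both problem types can be sketched with SciPy's sparse linear algebra routines. This is a minimal stand-alone example, not EMerge's internal code; the tiny diagonal matrices are placeholders for assembled FEM system matrices:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in system matrices (in a real FEM problem these come from assembly).
A = sp.diags([2.0, 3.0, 4.0, 5.0, 6.0]).tocsc()
B = sp.eye(5, format="csc")
b = np.array([2.0, 6.0, 12.0, 20.0, 30.0])

# Linear system A x = b (frequency-domain style solve).
x = spla.spsolve(A, b)

# Generalized eigenvalue problem A x = q B x:
# shift-invert around sigma=0 returns the eigenvalues closest to zero.
q, vecs = spla.eigs(A, k=2, M=B, sigma=0.0)
```

For the diagonal example above, `x` is `[1, 2, 3, 4, 5]` and the two returned eigenvalues are 2 and 3.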

Both can be solved in two ways:

  • An iterative algorithm that approximates the solution in successive refinement steps.
  • A direct method that factorizes the matrix and solves the system exactly (up to rounding error).

Iterative solvers use far less RAM and can be much faster. However, they don't always converge. Direct solvers use much more memory but are guaranteed to find the right solution. In the time-harmonic formulation, we use so-called edge (vector) basis functions to discretize the curl-curl operator. When the mesh size becomes smaller than, say, a quarter wavelength, the matrix A becomes ill-conditioned, which means that iterative solvers do not converge. This is why I hard-coded the use of the current best direct solver, PyPardiso (it also has other advantages). To get iterative solvers to work, I would have to implement either a good interface to the Hypre library's implementation of the Auxiliary-space Maxwell preconditioner or a fast version of some domain decomposition method. I haven't done this yet, so direct solvers are always superior here. Another advantage of direct solvers is that they compute a factorization once, so repeated solves with a matrix that has different values but the same sparsity pattern are extremely fast. This does also mean that this library can't work on Apple Silicon chips, as PARDISO is not available for the ARM architecture.
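The factorize-once, solve-many pattern can be illustrated with SciPy's SuperLU interface (used here only as a stand-in; PyPardiso follows the same idea):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in system matrix; in EMerge this would be the assembled FEM matrix.
A = sp.diags([2.0, 3.0, 4.0]).tocsc()

# The expensive step: compute the LU factorization once.
lu = spla.splu(A)

# Each subsequent solve only does cheap triangular back-substitution.
x1 = lu.solve(np.array([2.0, 3.0, 4.0]))
x2 = lu.solve(np.array([4.0, 6.0, 8.0]))
```

Reusing `lu` across many right-hand sides is what makes, for example, multi-port excitations cheap after the first solve.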

You can, if you are interested, implement your own Solver class or change its variables to experiment. SciPy also offers sparse direct solvers such as spsolve and SuperLU, but these are significantly slower than PARDISO. MUMPS is also available, but it has been a nightmare to compile.

To see how all this works, check the solver.py file; everything should become clear from there. You can simply swap the direct solver attributes of the SolveRoutine class with your own.
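A custom solver might look like the sketch below. Note that the class and attribute names here are hypothetical; the actual interface a solver must implement is defined in emerge/solver.py and should be checked there:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

class MyDirectSolver:
    """Hypothetical drop-in direct solver backed by SciPy's spsolve."""

    def solve(self, A, b):
        # Convert to CSC, the format SciPy's direct solver prefers.
        return spla.spsolve(sp.csc_matrix(A), b)

solver = MyDirectSolver()
x = solver.solve(sp.diags([2.0, 4.0]), np.array([2.0, 8.0]))

# Hypothetical usage -- the attribute name is illustrative only:
# model.mw.solveroutine.direct_solver = solver
```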
