Conversation

@amgebauer
Member

I want to enable MOR for the new structural time integration. Therefore, I now moved the general utilities to core. In a second step, one could generalize it for any problem type.

Member

@mayrmt mayrmt left a comment


@amgebauer The POD files in the linear solver do not seem to fit there. They read a POD basis from file and operate on the basis vectors, but they never solve a linear system.

I think we should find a more suitable place for them. I am fine with moving them to core/. What about a new directory core/pod/ or core/mor/?

@amgebauer
Member Author

That depends on how we want to enable POD for all problem types. I thought of implementing it as a solver for a linear system: you hand over a large sparse matrix, POD is applied, and then your sub-solver solves a much smaller linear system. In this case, it would directly be available for all problem types.
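To make the "POD as a solver" idea above concrete, here is a minimal serial sketch (all names are illustrative, not the actual Core::LinAlg API): the large operator A is projected onto the POD basis, the tiny reduced system is solved, and the result is prolongated back. For brevity the basis has a single mode v (k = 1), so the reduced system (v^T A v) y = v^T b is scalar.

```cpp
#include <cstddef>
#include <vector>

using Vector = std::vector<double>;
using Matrix = std::vector<Vector>;  // dense stand-in for the sparse A

double dot(const Vector& a, const Vector& b)
{
  double s = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
  return s;
}

Vector mat_vec(const Matrix& A, const Vector& x)
{
  Vector y(A.size(), 0.0);
  for (std::size_t i = 0; i < A.size(); ++i) y[i] = dot(A[i], x);
  return y;
}

// "Solve" A x = b restricted to the subspace spanned by the POD mode v.
Vector pod_solve(const Matrix& A, const Vector& b, const Vector& v)
{
  const double Ar = dot(v, mat_vec(A, v));  // reduced operator V^T A V
  const double br = dot(v, b);              // reduced rhs      V^T b
  const double y = br / Ar;                 // the "much smaller" solve
  Vector x(v.size());
  for (std::size_t i = 0; i < v.size(); ++i) x[i] = y * v[i];  // prolongate
  return x;
}
```

With a full basis (k > 1) the scalar division becomes a k x k solve handled by the sub-solver, but the restrict-solve-prolongate structure stays the same.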

@mayrmt
Member

mayrmt commented Sep 12, 2025

Sounds like algebraic multigrid with the POD basis as nullspace vectors / transfer operators...

@maxfirmbach
Contributor

Regarding the second point of generalizing this feature: we have a projector object that can be given to the linear solver. Wouldn't it make sense to use that? Yes, currently it's a very specific thing, but in the end it does exactly the same.

@maxfirmbach
Contributor

In addition, this is more of a linalg thing, e.g. see the Krylov projector.

Member

@sebproell sebproell left a comment


I know you just moved the files, but since I had a look at what is happening inside, I left some comments.

Member

@mayrmt mayrmt left a comment


Overall, this looks good to me. Definitely an improvement over the old MOR implementation!

I left some comments about naming and documentation for consideration.

Comment on lines +426 to +440
/**
* @brief Constructs a sparse matrix from a given MultiVector.
*
* This function creates a @p Core::LinAlg::SparseMatrix using the provided multivector,
* range map, and an optional domain map. The multivector contains the data to be
* represented in sparse matrix form. The rangemap defines the mapping of rows,
* while the domainmap (if provided) defines the mapping of columns.
*
* @param multivect The multivector containing the data to be converted into a sparse matrix.
* @param rangemap The map specifying the row mapping for the sparse matrix.
* @param domainmap Optional map specifying the column mapping for the sparse matrix.
* @return Core::LinAlg::SparseMatrix The constructed sparse matrix.
*/
Core::LinAlg::SparseMatrix make_sparse_matrix(const Core::LinAlg::MultiVector<double>& multivect,
const Core::LinAlg::Map& rangemap, const std::optional<Core::LinAlg::Map>& domainmap);
Member


From the code, I understand that a MultiVector with 3 vectors and 100 entries per vector will result in a rectangular 100x3 matrix. The matrix is dense, but stored in a sparse format. If that is true, then I find the comments about the domain map a bit misleading.

If the domain map just describes how a vector x in a mat-vec product Ax needs to be laid out, such that the mat-vec can be computed, then the domain map just follows standard Petra terminology.

Contributor


If the multivector contains zeros, one could potentially filter them out.

Member


True. I was thinking about the column map, which is "dense". In the end, it's a sparse matrix format that is likely to have a large fill-in. Not a big deal, though.
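A small sketch of the conversion discussed in this thread (names are illustrative, not the real `make_sparse_matrix` implementation): the k column vectors of a MultiVector become the k columns of an n x k matrix stored in a CRS-style format, and explicit zeros can be dropped on the way, as suggested above.

```cpp
#include <cstddef>
#include <vector>

struct CrsMatrix
{
  std::vector<std::size_t> row_ptr;  // size n+1
  std::vector<std::size_t> col_idx;  // size nnz
  std::vector<double> values;        // size nnz
};

// columns[j][i] is entry i of column vector j, i.e. an n x k multivector.
CrsMatrix multivector_to_crs(const std::vector<std::vector<double>>& columns)
{
  const std::size_t k = columns.size();
  const std::size_t n = k ? columns[0].size() : 0;
  CrsMatrix m;
  m.row_ptr.push_back(0);
  for (std::size_t i = 0; i < n; ++i)
  {
    for (std::size_t j = 0; j < k; ++j)
    {
      if (columns[j][i] != 0.0)  // filter out explicit zeros
      {
        m.col_idx.push_back(j);
        m.values.push_back(columns[j][i]);
      }
    }
    m.row_ptr.push_back(m.col_idx.size());
  }
  return m;
}
```

For a dense basis every row carries up to k entries, which is the large fill-in mentioned above; the zero filter only helps when the basis vectors actually contain zeros.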

Comment on lines +189 to +191
void multiply_multi_vectors(const Core::LinAlg::MultiVector<double>& v1, bool trans_v1,
const Core::LinAlg::MultiVector<double>& v2, bool trans_v2, const Core::LinAlg::Map& red_map,
const Core::LinAlg::Import& importer, Core::LinAlg::MultiVector<double>& result);
Member


red_map -> redundant_map to reflect its meaning in the naming?
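For readers skimming the signature above, here is a serial sketch of the case `trans_v1 = true`, `trans_v2 = false`, i.e. the k1 x k2 Gram-type product V1^T V2. The real routine additionally handles the distributed layout via the (redundant) map and importer, which is omitted here; the names below are illustrative only.

```cpp
#include <cstddef>
#include <vector>

// columns[j][i]: entry i of column j, i.e. an n x k multivector.
using MultiVec = std::vector<std::vector<double>>;

// result[a][b] = sum_i v1[a][i] * v2[b][i]  (== (V1^T V2)(a, b))
MultiVec multiply_transposed(const MultiVec& v1, const MultiVec& v2)
{
  MultiVec result(v1.size(), std::vector<double>(v2.size(), 0.0));
  for (std::size_t a = 0; a < v1.size(); ++a)
    for (std::size_t b = 0; b < v2.size(); ++b)
      for (std::size_t i = 0; i < v1[a].size(); ++i)
        result[a][b] += v1[a][i] * v2[b][i];
  return result;
}
```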

