Move Proper Orthogonal Decomposition (POD) to Core #1260
Conversation
@amgebauer The POD files in the linear solver do not seem to fit there. They read a POD basis from file and do stuff with the basis vectors, but they never solve a linear system.
I think we should find a more suitable place for them. I am fine with moving them to core/. What about a new directory core/pod/ or core/mor/?
That depends on how we want to enable POD for all problem types. I thought of implementing it as a solver for a linear system: you get a large sparse matrix, then POD is applied, and then your sub-solver solves a much smaller linear system. In that case, it would directly be available for all problem types.
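For reference, a minimal sketch of the projection step described here (my notation, not from the code): with V holding the k POD basis vectors as columns of an n x k matrix, the large system A x = b is reduced to

```math
x \approx V x_r, \qquad \left(V^{T} A V\right) x_r = V^{T} b, \qquad A_r = V^{T} A V \in \mathbb{R}^{k \times k},
```

so the sub-solver only ever sees the small k x k system.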
Sounds like algebraic multigrid with the POD basis as
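If I read the analogy right (my interpretation; the original comment is truncated here), the POD basis would play the role of the transfer operators in a two-level method:

```math
P = V, \qquad R = V^{T}, \qquad A_c = R A P = V^{T} A V,
```

so the "coarse" operator handed to the sub-solver is exactly the reduced operator from above.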
Regarding the second point of generalizing this feature. We have a
In addition, this is more like a
sebproell left a comment
I know you just moved the files, but since I had a look at what is happening inside, I left some comments.
Force-pushed from 7643b6a to 5220507
mayrmt left a comment
Overall, this looks good to me. Definitely an improvement over the old MOR implementation!
I left some comments about naming and documentation for consideration.
```cpp
/**
 * @brief Constructs a sparse matrix from a given MultiVector.
 *
 * This function creates a @p Core::LinAlg::SparseMatrix using the provided multivector,
 * range map, and an optional domain map. The multivector contains the data to be
 * represented in sparse matrix form. The range map defines the mapping of rows,
 * while the domain map (if provided) defines the mapping of columns.
 *
 * @param multivect The multivector containing the data to be converted into a sparse matrix.
 * @param rangemap The map specifying the row mapping for the sparse matrix.
 * @param domainmap Optional map specifying the column mapping for the sparse matrix.
 * @return Core::LinAlg::SparseMatrix The constructed sparse matrix.
 */
Core::LinAlg::SparseMatrix make_sparse_matrix(const Core::LinAlg::MultiVector<double>& multivect,
    const Core::LinAlg::Map& rangemap, const std::optional<Core::LinAlg::Map>& domainmap);
```
From the code, I understand that a MultiVector with 3 vectors and 100 entries per vector will result in a rectangular 100x3 matrix. The matrix is dense, but stored in a sparse format. If that is true, then I find the comments about the domain map a bit misleading.
If the domain map just describes how a vector x in a mat-vec Ax needs to be laid out, such that the mat-vec can be computed, then the domain map just follows standard Petra terminology.
If the multivector contains zeros, one could potentially filter them out.
True. I was thinking about the column map, which is "dense". In the end, it's a sparse matrix format that is likely to have a large fill-in. Not a big deal, though.
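To make the domain-map discussion concrete, here is a minimal sketch in plain Epetra (standard Petra semantics, not the Core::LinAlg wrapper; all values are placeholders): a MultiVector with 3 vectors of 100 entries becomes a 100x3 CRS matrix, dense in content but sparse in storage, and its domain map only describes how x in y = A * x is laid out.

```cpp
#include <Epetra_SerialComm.h>
#include <Epetra_Map.h>
#include <Epetra_CrsMatrix.h>

int main()
{
  Epetra_SerialComm comm;

  // A tall, rectangular 100x3 matrix: dense content, but stored in CRS format.
  Epetra_Map row_map(100, 0, comm);    // distributes the 100 rows (also the range map here)
  Epetra_Map domain_map(3, 0, comm);   // layout of x in y = A * x, i.e. the 3 "columns"

  Epetra_CrsMatrix A(Copy, row_map, 3);  // 3 entries per row -> every row is completely filled
  for (int lrow = 0; lrow < row_map.NumMyElements(); ++lrow)
  {
    const int grow = row_map.GID(lrow);
    const double values[3] = {1.0, 2.0, 3.0};  // placeholder entries of the 3 basis vectors
    const int indices[3] = {0, 1, 2};
    A.InsertGlobalValues(grow, 3, values, indices);
  }

  // Domain and range maps are only fixed at FillComplete; the domain map carries no
  // sparsity information, it just defines the layout of the input vector of A * x.
  A.FillComplete(domain_map, row_map);
  return 0;
}
```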
```cpp
void multiply_multi_vectors(const Core::LinAlg::MultiVector<double>& v1, bool trans_v1,
    const Core::LinAlg::MultiVector<double>& v2, bool trans_v2, const Core::LinAlg::Map& red_map,
    const Core::LinAlg::Import& importer, Core::LinAlg::MultiVector<double>& result);
```
red_map -> redundant_map to reflect its meaning in the naming?
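For reference, my reading of this helper based on its signature (an assumption, not verified against the implementation): with trans_v1 = true and trans_v2 = false it forms the small dense product

```math
R = V_1^{T} V_2 \in \mathbb{R}^{k_1 \times k_2},
```

and the redundant map plus importer would then ensure that every rank ends up holding the complete result, which is what one needs for assembling the reduced operator.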
I want to enable MOR for the new structural time integration. Therefore, I have now moved the general utilities to core. In a second step, one could generalize it for any problem type.