Conversation

@brownbaerchen
Contributor

In #565, I implemented dedicated functionality for block-diagonal matrices and said I didn't know why it uses less memory than constructing block-diagonal matrices from all possible blocks in scipy. I have since figured out why.

The memory footprint of a sparse matrix is proportional to the number of stored entries. However, setting individual values in a matrix to zero does not remove them from storage; they are kept as explicitly stored entries with value zero. You have to call eliminate_zeros to actually remove them. See the following example:

import scipy.sparse as sp

N = 32
I = sp.eye(N).tocsc()
I.nnz                # N
I *= 0               # scales the stored data; entries are kept as explicit zeros
I.nnz                # still N
I.eliminate_zeros()  # drops the explicitly stored zeros
I.nnz                # 0

After adding an eliminate_zeros call once the linear operators are constructed, the dedicated block-diagonal functions no longer save any memory. I therefore essentially reverted #565, and memory usage is now reduced even further.
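To illustrate the point, here is a minimal sketch (not from the PR; sizes and names are made up) showing that building a block-diagonal matrix with scipy's block_diag keeps the explicitly stored zeros of zero-valued blocks until eliminate_zeros is called:

```python
import scipy.sparse as sp

N = 4
# A block that is mathematically zero but still stores N explicit entries
zero_block = sp.eye(N).tocsc() * 0
blocks = [sp.eye(N).tocsc(), zero_block, zero_block]

A = sp.block_diag(blocks, format='csc')
print(A.nnz)         # 3*N: the zero blocks still occupy storage
A.eliminate_zeros()
print(A.nnz)         # N: only the identity block's entries remain stored
```

So constructing from "all possible blocks" only wastes memory as long as the stored zeros are kept around; once they are eliminated, there is nothing left for a dedicated block-diagonal routine to save.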

Note that cupy does provide this function, but I kept getting memory errors after using it, which I suspect is a bug in cupy. The workaround I found for now is to copy the matrices back to the CPU, eliminate zeros there, and then copy them to the GPU again. This is not great, but since it only happens before a run is started, it seems fine for now.
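A minimal sketch of that CPU round-trip; the helper name is hypothetical, and the GPU path assumes a matrix from cupyx.scipy.sparse (it falls back to plain scipy on the CPU when cupy is not available):

```python
import scipy.sparse as sp

try:
    import cupyx.scipy.sparse as csp
    HAVE_CUPY = True
except ImportError:
    HAVE_CUPY = False

def eliminate_zeros_via_cpu(A):
    """Drop explicitly stored zeros by round-tripping through the CPU."""
    if HAVE_CUPY and isinstance(A, csp.spmatrix):
        A_cpu = A.get()               # device -> host copy (scipy matrix)
        A_cpu.eliminate_zeros()       # avoids the suspected cupy bug
        return csp.csr_matrix(A_cpu)  # host -> device copy
    A.eliminate_zeros()               # already a scipy matrix; safe in place
    return A
```

Since this runs once during setup rather than inside the time-stepping loop, the extra host/device transfers are a one-off cost.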

@tlunet tlunet merged commit 66a3d6d into Parallel-in-Time:master Jul 18, 2025
47 checks passed
@tlunet tlunet deleted the eliminate_zeros branch July 18, 2025 11:53
@brownbaerchen brownbaerchen restored the eliminate_zeros branch July 18, 2025 11:55