Potential enhancement for using BPP NNLS solver #15

@mvfki

Description

I just learned that we are actually using the BPP NNLS solver. In that case, I would strongly recommend updating the implementation to take full advantage of its memory efficiency.

For example, at _iNMF_ANLS.py line 135, where we start solving for H, we do something like:

H = nnls(A, B)

Here A is the vertical concatenation of W + V_i and sqrt(lambda) * V_i, and B is the vertical concatenation of X_i and a zero block. Looking into the BPP NNLS code, we can see that it takes a third argument, is_input_prod: if you set it to True and pass np.matmul(A.T, A) and np.matmul(A.T, B) instead, the result is equivalent. The best part of this design is that we never have to materialize A or B, which are giant, just to obtain A.T @ A and A.T @ B. A scratch-book calculation gives AtA = (W + V_i).T @ (W + V_i) + lambda * (V_i.T @ V_i) and AtB = (W + V_i).T @ X_i. This would be more efficient in both speed and memory usage. The same trick applies to the updates of V and W, as well as the H-solving step in online iNMF.
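To make the equivalence concrete, here is a small numerical check of the scratch-book identity above. The shapes and the `W`, `V`, `X` contents are made up purely for illustration (they are not the actual LIGER matrices); the point is that the compact formulas reproduce A.T @ A and A.T @ B without ever building the stacked A and B:

```python
import numpy as np

# Hypothetical sizes for illustration only (not from the real dataset):
# m genes, k factors, n cells.
rng = np.random.default_rng(0)
m, k, n = 200, 20, 50
W = rng.random((m, k))
V = rng.random((m, k))   # stands in for V_i
X = rng.random((m, n))   # stands in for X_i
lam = 5.0

# Naive route: materialize the stacked A and B, then form the products.
A = np.vstack([W + V, np.sqrt(lam) * V])        # (2m, k)
B = np.vstack([X, np.zeros((m, n))])            # (2m, n)
AtA_naive = A.T @ A
AtB_naive = A.T @ B

# Scratch-book route: never build A or B at all.
AtA = (W + V).T @ (W + V) + lam * (V.T @ V)     # (k, k)
AtB = (W + V).T @ X                             # (k, n)

assert np.allclose(AtA, AtA_naive)
assert np.allclose(AtB, AtB_naive)
```

With the identity verified, the call would become something like `nnls(AtA, AtB, is_input_prod=True)`, so the solver only ever touches the small k-by-k and k-by-n products instead of the 2m-by-k stacked matrix.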

Labels: enhancement (New feature or request)