Lazy sparse tensor generators? #261
paul-tqh-nguyen started this conversation in Ideas
This was inspired by #259.
Possibly not sane optimization idea (let's call this the "lazy generators idea"): the `graphblas.apply`/`graphblas.update` used in the `ans` line would return a lazy generator instead of a materialized tensor.

Ideally, we'd also have the following ideas implemented (these would serve as reasonable tests): converting `np.diag(np.ones(3))` into `np.eye(3)`. `ans` would be a lazy generator here to be used elsewhere, along the lines of the sketch below.
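As a rough illustration (plain Python/NumPy rather than dialect code; `lazy_eye` is a hypothetical helper), `ans` could be little more than a thunk that produces the nonzeros of the identity matrix on demand:

```python
import numpy as np

def lazy_eye(n):
    """Lazily describe np.diag(np.ones(n)) == np.eye(n): yield (row, col, value)
    triples for the nonzeros only when a consumer actually iterates."""
    def nonzeros():
        for i in range(n):
            yield (i, i, 1.0)
    return nonzeros

# `ans` is just a thunk; nothing is allocated until a consumer asks for it.
ans = lazy_eye(3)

# A consumer that reifies the generator, checked against the eager forms.
dense = np.zeros((3, 3))
for i, j, v in ans():
    dense[i, j] = v
assert np.array_equal(dense, np.eye(3))
assert np.array_equal(dense, np.diag(np.ones(3)))
```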
It might be a good idea to have a generic lazy tensor generator initializer op, e.g. for `np.ones([7, 9])`, perhaps in the spirit of the `graphblas.apply` op.

I think the key idea behind all of this is that blocks are analogous to functions, and we can heavily optimize code using functional programming techniques.
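One way to picture that, as a sketch in Python rather than MLIR (the `LazyTensor` and `lazy_apply` names are hypothetical): a lazy tensor is just a shape plus an index-to-value function, so composing ops becomes ordinary function composition:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass(frozen=True)
class LazyTensor:
    """A tensor described by a shape and an index -> value function (hypothetical name)."""
    shape: Tuple[int, ...]
    value_at: Callable[..., float]

    def reify(self):
        """Materialize into nested lists (for illustration only)."""
        rows, cols = self.shape
        return [[self.value_at(i, j) for j in range(cols)] for i in range(rows)]

# np.ones([7, 9]) as a lazy generator: no 7x9 buffer exists until reify() is called.
ones = LazyTensor((7, 9), lambda i, j: 1.0)

# Composing lazily is just function composition, e.g. an elementwise apply:
def lazy_apply(t: LazyTensor, f) -> LazyTensor:
    return LazyTensor(t.shape, lambda *idx: f(t.value_at(*idx)))

doubled = lazy_apply(ones, lambda x: 2 * x)
assert doubled.value_at(3, 4) == 2.0
```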
GraphWave also motivated a `graphblas.outer_product`. This might be easily implemented via #251? We could take that idea even further and make the #251 op return a lazy sparse tensor generator.
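A sketch of why a lazy outer product is appealing (plain Python; `lazy_outer_product` is a hypothetical stand-in, not the dialect op): the eager result of length-n and length-m vectors costs O(n*m) memory, while the lazy version only holds the inputs and computes `u[i] * v[j]` on demand:

```python
import numpy as np

def lazy_outer_product(u, v):
    """Return an index -> value closure for outer(u, v) without allocating the n x m result."""
    def value_at(i, j):
        return u[i] * v[j]
    return value_at

u = np.arange(1_000, dtype=np.float64)
v = np.arange(2_000, dtype=np.float64)

outer = lazy_outer_product(u, v)            # only O(n + m) memory is held
assert outer(3, 5) == np.outer(u, v)[3, 5]  # same value as the eager O(n * m) result
```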
Heck, why not make `graphblas.apply` return a lazy generator as well? And all of our other ops? It's unclear if all of our current GraphBLAS dialect ops can work as lazy generators, e.g. `graphblas.reduce_to_vector`.
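A sketch of the concern with reductions (Python, with made-up helpers rather than the real `graphblas.reduce_to_vector`): a lazy row reduction has to re-scan an entire row on every element access, so repeated reads redo work that an eager reduction would do once:

```python
def lazy_reduce_rows(matrix_value_at, ncols):
    """Lazy analogue of a row-wise plus-reduction: returns an index -> value closure.
    Each access to element i re-scans all of row i, which is where laziness gets awkward."""
    def value_at(i):
        return sum(matrix_value_at(i, j) for j in range(ncols))
    return value_at

# A lazy 4 x 5 matrix of ones (hypothetical stand-in for a lazy GraphBLAS tensor).
ones_value_at = lambda i, j: 1.0
row_sums = lazy_reduce_rows(ones_value_at, ncols=5)

assert row_sums(2) == 5.0   # correct, but computed from scratch...
assert row_sums(2) == 5.0   # ...and recomputed on every access
```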
If many of our ops are lazy, our ops can be dichotomized into lazy ops and reifying ops. This'll open the door toward using profile-guided optimization to tell us when to insert a reification op, which simply turns a lazy op into one that writes its result out to memory. There's a trade-off between laziness/memory savings and locality that PGO might be able to solve.
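A sketch of what the reification side might look like (Python; `reify` is a hypothetical op, not something in the dialect today): reification trades the memory savings of laziness for a materialized buffer with good locality, and a PGO-style pass would decide where that trade pays off:

```python
import numpy as np

def reify(value_at, shape):
    """Reification: turn an index -> value closure into a materialized array.
    Costs memory up front but gives cheap, cache-friendly repeated access."""
    out = np.empty(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            out[i, j] = value_at(i, j)
    return out

lazy = lambda i, j: float(i * j)    # stays lazy: no memory, recompute per access
dense = reify(lazy, (100, 100))     # reified: 100 x 100 buffer, locality for hot loops

# A profile-guided pass could insert `reify` exactly where access counts make it pay off.
assert dense[7, 9] == lazy(7, 9)
```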