Dense Level #183
Conversation
Force-pushed from e02cd84 to 3551c2a
Hi @willow-ahrens,
Matmul lowered with […]
Force-pushed from 27b7a86 to ca5db23
Hi @willow-ahrens, matmul with the 2-dim dense format now looks similar to Julia's version: […]
ntn.Variable(
    name,
    BufferizedNDArrayFType(
        type(val)(
That's something to address as a separate larger task
            val.ndim,
            TupleFType.from_tuple(val.shape_type),
        ),
        val.from_kwargs(val.to_kwargs()),
Proper handling of the format inference is a separate, larger task (to map traits.jl); here I only convert the format to a "dict of attributes" where I can override some of them (for example, the dtype promoted from the two inputs).
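To make that stopgap concrete, here is a minimal sketch of the "dict of attributes" idea. `DenseFormat` and `infer_result_format` are invented for illustration and are not finchlite's actual API: the format is flattened to a dict, selected attributes (such as the promoted dtype) are overridden, and the format is rebuilt.

```python
# Hypothetical sketch of the "dict of attributes" stopgap described above;
# real format types and traits.jl-style inference live elsewhere.
from dataclasses import asdict, dataclass

import numpy as np


@dataclass(frozen=True)
class DenseFormat:  # illustrative stand-in for an actual format/ftype class
    ndim: int
    dtype: np.dtype


def infer_result_format(a: DenseFormat, b: DenseFormat) -> DenseFormat:
    attrs = asdict(a)  # flatten one input format to a dict of attributes
    # Override the attributes that depend on both inputs,
    # e.g. the promoted element dtype.
    attrs["dtype"] = np.promote_types(a.dtype, b.dtype)
    return DenseFormat(**attrs)


print(infer_result_format(DenseFormat(2, np.dtype(np.int32)),
                          DenseFormat(2, np.dtype(np.float64))))
# DenseFormat(ndim=2, dtype=dtype('float64'))
```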
levels_to_add = [
    idx for idx, f in enumerate(result_fields) if f not in fields
]
result_rep = result_rep.add_levels(levels_to_add)
As in the comment above - this is a temporary selection of format (including removing/adding levels); it will be moved to a separate module.
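As a rough illustration of what the temporary level selection does here (the names below are hypothetical, not the code in this PR), the positions of levels to add are just the result fields that no input carries:

```python
# Hypothetical illustration of padding a result representation with levels
# for fields that only appear in the output.
def missing_level_positions(result_fields, input_fields):
    """Indices of result fields that no input carries; levels get added there."""
    return [idx for idx, f in enumerate(result_fields) if f not in input_fields]


print(missing_level_positions(["i", "j", "k"], {"i", "k"}))  # [1]
```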
]
return fmt(*args)
}
return fmt(**kwargs)
Formats now have "multiple constructors", depending on whether we construct from Numba or in the facing API.
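A minimal sketch of the "multiple constructors" idea, using an invented `MyFormat` class rather than the real format types: one positional entry point for code generated for Numba, and one keyword entry point for the facing API.

```python
# Hypothetical format with two construction paths (names are made up).
class MyFormat:
    def __init__(self, ndim, dtype):
        self.ndim = ndim
        self.dtype = dtype

    @classmethod
    def from_numba(cls, *args):
        # positional path, convenient inside generated/Numba code
        return cls(*args)

    @classmethod
    def from_kwargs(cls, **kwargs):
        # keyword path, used from the facing API
        return cls(**kwargs)


assert MyFormat.from_numba(2, "float64").ndim == 2
assert MyFormat.from_kwargs(ndim=2, dtype="float64").dtype == "float64"
```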
    def __init__(
        self,
        arr: np.ndarray | NumpyBuffer,
        val: np.ndarray | NumpyBuffer,
Standardization of names, to match FiberTensor.
        shape: tuple[np.integer, ...] | None = None,
        strides: tuple[np.integer, ...] | None = None,
For simulating multiple constructors.
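For illustration, a small sketch of how optional arguments can simulate several constructors. `MyBuffer` is invented; the real signature is in the diff above. When shape/strides are omitted, they are derived from the value itself.

```python
# Hypothetical buffer whose single __init__ stands in for several constructors.
import numpy as np


class MyBuffer:
    def __init__(self, val, shape=None, strides=None):
        self.val = np.asarray(val)
        # The plain-ndarray path: derive shape/strides from the value.
        self.shape = tuple(self.val.shape) if shape is None else tuple(shape)
        self.strides = tuple(self.val.strides) if strides is None else tuple(strides)


b = MyBuffer(np.zeros((2, 3)))
assert b.shape == (2, 3)
```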
@willow-ahrens So right now, E2E Numba matmul runs with bufferized and […]. One part here might look a bit like a mess, namely […]
willow-ahrens left a comment:
Overall, this looks good. I'd like it to pass tests, work for C codegen, and implement freeze, thaw, etc. I'm more than happy to take a pass at this this week. Let's sync up next Tuesday on next steps.
| """ | ||
|
|
||
| @property | ||
| @abstractmethod |
I think the level needs to know this stuff.
Ah, right - it should still be there, done.
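A small sketch of why the abstract property belongs on the level. The `Level`/`DenseLevel` classes and the `ndim` property are invented examples of level-owned metadata, not finchlite's actual classes: each concrete level reports its own metadata instead of the tensor guessing it.

```python
# Hypothetical level hierarchy; names are illustrative only.
from abc import ABC, abstractmethod


class Level(ABC):
    @property
    @abstractmethod
    def ndim(self) -> int:
        """Number of dimensions covered by this level and its sublevels."""


class DenseLevel(Level):
    def __init__(self, sublevel=None):
        self.sublevel = sublevel

    @property
    def ndim(self) -> int:
        # One dimension for this level plus whatever the sublevel covers.
        return 1 + (self.sublevel.ndim if self.sublevel is not None else 0)


assert DenseLevel(DenseLevel()).ndim == 2
```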
src/finchlite/tensor/fiber_tensor.py (Outdated)
        return tns

    def lower_thaw(self, ctx, tns, op):
        raise NotImplementedError
We'll need an implementation of these methods. I can work on it.
Thanks!
        val_lvl = asm.GetAttr(val, asm.Literal("lvl"))
        return self.lvl_t.asm_unpack(ctx, var_n, val_lvl)

    def get_fields_class(self, tns, buf_s, nind, pos, op):
Is this also temporary?
In `def unfurl` in FiberTensorFType I need to have a Fields class instance, so this is a proposal for an abstraction that levels would implement. So for the current form the answer is no, this won't be temporary.
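Roughly what such an abstraction could look like, drastically simplified and with invented names (the real `get_fields_class` in this PR takes more context arguments such as `tns`, `buf_s`, `nind`, `pos`, `op`): the level type, not the tensor, decides which Fields class unfurl uses.

```python
# Hypothetical sketch: each level ftype exposes the Fields class it
# contributes during unfurl, so the fiber tensor stays level-agnostic.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class DenseFields:
    """Illustrative stand-in for a per-level Fields class."""
    pos: int
    nind: int


class LevelFType(ABC):
    @abstractmethod
    def get_fields_class(self):
        """Return the Fields class this level uses during unfurl."""


class DenseLevelFType(LevelFType):
    def get_fields_class(self):
        return DenseFields


fields_cls = DenseLevelFType().get_fields_class()
print(fields_cls(pos=0, nind=1))  # DenseFields(pos=0, nind=1)
```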
@willow-ahrens Thank you for taking a look through these changes! I know that this PR is a nightmare, with these temporary hacks for suitable reps and a rather vague shape, considering that looplets and the sparse level are still to come. Now all tests (and mypy) pass. In […]
This PR introduces dense level logic for the interpreters and Numba backend codegen.