Morton Order for supporting point transformer v3 #217
TarzanZhao wants to merge 19 commits into main from
Conversation
Signed-off-by: Hexu Zhao <hexuz@nvidia.com>
blackencino
left a comment
This PR adds a great deal of C++ that need not be there. Since C++ build times dominate our iteration velocity, we should only port to C++ when strictly necessary, especially as it also gets in the way of torch's fusion capabilities.
All we need for the encoding functionality here are two functions, which do not need to be autograd function overloads but can just be regular methods that take and produce torch tensors. They are:
torch::Tensor morton(torch::Tensor const& ijk) {
    // for each index i, result[i] = morton(ijk[i])
}
and the same thing for hilbert. These are generically useful functions that could be useful for a thousand other things beyond just point attention.
On the python side, we can then apply the encoding to the ijk of the grids or gridbatches directly. We can transpose those ijk to kji in python, and also use argsort for permutation. This minimizes compile time, C++ code support burden, and creates tools that are more flexibly useful.
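As a concrete illustration of the Python-side encoding Chris describes, here is a minimal sketch of a z-order (Morton) encoder using the standard "part 1 by 2" bit-spreading magic numbers for 21-bit coordinates. The function names and the assumption that ijk is a non-negative (N, 3) integer tensor are illustrative, not the repo's actual API:

```python
import torch

def part1by2(x: torch.Tensor) -> torch.Tensor:
    # Spread the low 21 bits of x so that two zero bits separate
    # consecutive bits (the standard "part 1 by 2" magic numbers).
    x = x.to(torch.int64) & 0x1FFFFF
    x = (x | (x << 32)) & 0x1F00000000FFFF
    x = (x | (x << 16)) & 0x1F0000FF0000FF
    x = (x | (x << 8)) & 0x100F00F00F00F00F
    x = (x | (x << 4)) & 0x10C30C30C30C30C3
    x = (x | (x << 2)) & 0x1249249249249249
    return x

def morton(ijk: torch.Tensor) -> torch.Tensor:
    # ijk: (N, 3) tensor of non-negative integer coordinates.
    # Returns an (N,) int64 tensor: result[n] = morton code of ijk[n].
    i, j, k = ijk.unbind(dim=-1)
    return (part1by2(i) << 2) | (part1by2(j) << 1) | part1by2(k)
```

With such a function, torch.argsort of the codes gives the serialization permutation, and the transposed variant falls out of flipping the coordinate columns before encoding, exactly as suggested above.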
Force-pushed from 2c3c284 to 768a060
… _Cpp.pyi
…lbert order implementation.
fwilliams
left a comment
A few comments:
I think the morton/hilbert/etc. codes should be attributes on the grid in the same way that ijk is an attribute, i.e. grid.morton_codes, grid.hilbert_codes, and variants for the transposed versions.
I also agree with Chris that you can permute outside the grid.
So you would have something like:
grid = Grid.from_xxx(...)
sidecar = grid.inject_from_ijk(ijk_values, data)
morton_pmt = torch.argsort(grid.morton_codes)
sidecar_morton = sidecar[morton_pmt]
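To make the permute-outside-the-grid idea concrete, here is a toy sketch of the argsort-based reordering. In the proposed API the codes would come from grid.morton_codes; the code and sidecar values below are stand-ins:

```python
import torch

# Stand-in values: in the proposed API the codes would come from
# grid.morton_codes and the data from a sidecar tensor.
codes = torch.tensor([9, 2, 7, 4], dtype=torch.int64)
data = torch.tensor([[0.0], [1.0], [2.0], [3.0]])

pmt = torch.argsort(codes)  # permutation that sorts the codes ascending
data_sorted = data[pmt]     # sidecar rows reordered along the curve

# codes[pmt] is now tensor([2, 4, 7, 9]) and data follows the same order.
```

Keeping the permutation on the Python side like this means the grid only has to expose per-voxel codes, and any sidecar tensor can be reordered with plain indexing.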
blackencino
left a comment
I think this can go through with minimal documentation changes. I would like the offset, currently based on bounding box min, to be justified/explained with a comment in the code. Alternatively, you might choose to use the constant offset of (2^21-1), but either is ok, so long as it is explained. There's a documentation error in the permutation function that needs fixing.
Okay @blackencino and I discussed. Here's the conclusion:
# Offset
def hilbert(base_offset=None): if called with None, use -bbmin as the offset; otherwise, use the passed-in offset.
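A minimal sketch of how that offset convention could look. Space-filling-curve codes interleave raw bit patterns, so the coordinates must be made non-negative before encoding; the helper name below is illustrative, not the PR's actual signature:

```python
import torch

def to_nonnegative(ijk: torch.Tensor, base_offset=None) -> torch.Tensor:
    # Curve codes interleave raw bit patterns, so coordinates must be
    # non-negative before encoding.  Per the convention agreed above:
    # if no offset is passed, shift by -bbmin (the per-axis bounding-box
    # minimum); otherwise apply the caller-supplied offset.
    if base_offset is None:
        base_offset = -ijk.min(dim=0).values
    return ijk + base_offset
```

The alternative constant offset of 2^21 - 1 mentioned above would instead shift the whole signed 21-bit coordinate range into the non-negative range, independent of the data's bounding box.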
blackencino
left a comment
Approved, but please remove the permute methods
We covered this; I just can't find the place to resolve it.
Force-pushed from 9e69d8a to e1542cf
Merged in a separate PR because of git commit signing issues.
Point transformer v3 requires attention along different serialization orders. This PR supports z-order and transposed z-order. Next, I will implement hilbert order and transposed hilbert order to match the original PTv3 faithfully.
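For readers unfamiliar with the two orders, here is a hypothetical reference implementation (not the PR's code) showing that the transposed z-order is simply the z-order of the coordinates read back-to-front, i.e. kji instead of ijk:

```python
import torch

def interleave3(a: int, b: int, c: int, bits: int = 10) -> int:
    # Slow reference interleave: bit t of a, b, c goes to code bit
    # 3t + 2, 3t + 1, 3t respectively.
    code = 0
    for t in range(bits):
        code |= ((a >> t) & 1) << (3 * t + 2)
        code |= ((b >> t) & 1) << (3 * t + 1)
        code |= ((c >> t) & 1) << (3 * t)
    return code

ijk = torch.tensor([[1, 2, 3], [2, 0, 1]])

# z-order interleaves (i, j, k); the transposed z-order interleaves the
# same coordinates read back-to-front, i.e. (k, j, i).
z = [interleave3(*row) for row in ijk.tolist()]
z_t = [interleave3(*row) for row in ijk.flip(-1).tolist()]
```

Sorting points by each of these codes yields two different serializations of the same voxels, which is what PTv3's alternating attention orders consume.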