
Remove torch_scatter dependency in favor of built-in PyTorch scatter_reduce_ #571

Open
swahtz wants to merge 3 commits into openvdb:main from swahtz:js/remove_torch_scatter

Conversation

@swahtz
Contributor

@swahtz swahtz commented Mar 26, 2026

torch_scatter (pytorch_scatter) was only used in test_jagged_tensor.py as a reference implementation for validating jsum/jmin/jmax reductions. Replace it with a small helper wrapping torch.Tensor.scatter_reduce_(), which has been stable since PyTorch 2.0, eliminating a fragile external dependency that required custom wheels and source builds in CI.

Fixes #53 (Remove torch_scatter as a test dependency)
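The helper the PR describes is not shown on this page, but a minimal sketch along these lines illustrates the idea: wrap the built-in `torch.Tensor.scatter_reduce_()` so it produces the same segment reductions the tests previously obtained from `torch_scatter`. The function name and signature here are illustrative, not the PR's actual code.

```python
import torch

def scatter_ref(src: torch.Tensor, index: torch.Tensor,
                num_segments: int, reduce: str) -> torch.Tensor:
    """Reduce rows of `src` into `num_segments` buckets given by `index`.

    `reduce` is one of the scatter_reduce_ modes: "sum", "amin", "amax",
    matching the jsum/jmin/jmax reductions under test.
    """
    out = torch.zeros((num_segments,) + src.shape[1:], dtype=src.dtype)
    # scatter_reduce_ expects `index` to have the same shape as `src`,
    # so broadcast the 1-D segment index across the trailing dims.
    idx = index.reshape(-1, *([1] * (src.dim() - 1))).expand_as(src)
    # include_self=False keeps the zero initialization out of the reduction;
    # segments that receive no entries simply stay zero.
    return out.scatter_reduce_(0, idx, src, reduce=reduce, include_self=False)
```

Unlike `torch_scatter`, this needs no compiled extension: `scatter_reduce_` has shipped with stock PyTorch since 2.0.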


Signed-off-by: Jonathan Swartz <jonathan@jswartz.info>
@swahtz swahtz added this to the v0.5 milestone Mar 26, 2026
@swahtz swahtz requested a review from a team as a code owner March 26, 2026 05:17
@swahtz swahtz added the tests label Mar 26, 2026
@swahtz swahtz requested review from blackencino and sifakis March 26, 2026 05:17
@swahtz swahtz self-assigned this Mar 26, 2026
@swahtz swahtz added the CI label Mar 26, 2026
@swahtz swahtz mentioned this pull request Mar 26, 2026
swahtz added 2 commits March 26, 2026 21:25
…ement

The built-in scatter_reduce_ with amax/amin distributes gradients to all tied extremal values, unlike torch_scatter (and fVDB) which routes to a single winner. With half-precision types (bfloat16/float16), ties are more frequent, causing the sorted non-zero gradient tensors to differ in length.

Guard the sorted non-zero gradient allclose checks with a shape equality test in test_jmin, test_jmax, test_jmin_list_of_lists, and test_jmax_list_of_lists. Forward value assertions remain unconditional.

Signed-off-by: Jonathan Swartz <jonathan@jswartz.info>
Signed-off-by: Jonathan Swartz <jonathan@jswartz.info>
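The tie behavior described in the commit message can be demonstrated in isolation. This sketch (assumed, not the PR's test code) builds one segment containing two tied maxima and backpropagates through the built-in reduction:

```python
import torch

# One segment with two tied maximal values.
src = torch.tensor([2.0, 2.0, 1.0], requires_grad=True)
index = torch.tensor([0, 0, 0])

# Built-in scatter_reduce with "amax" (functional, out-of-place form).
out = torch.scatter_reduce(torch.zeros(1), 0, index, src,
                           reduce="amax", include_self=False)
out.sum().backward()

# Both tied winners receive gradient, so src.grad has two non-zero entries;
# torch_scatter (and fVDB) would route the gradient to a single winner.
print(src.grad)
```

With bfloat16/float16 inputs such ties occur far more often, which is why the sorted non-zero gradient tensors can differ in length and the `allclose` checks need the shape guard.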

Labels

CI — Issues related to the GitHub Actions CI/CD. For build issues use CMake/Build
tests

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Remove torch_scatter as a test dependency

1 participant