
Releases: cai4cai/torchsparsegradutils

v0.2.1

29 Oct 16:15


Changelog - v0.2.1

Release Date: October 29, 2025

🎯 Overview

This patch release focuses on infrastructure improvements, deprecation warning fixes, and updating to the latest CUDA and PyTorch versions. No breaking changes or new features.


🔧 Build System & Package Configuration

Migrate to Modern pyproject.toml Standard (#638b94d)

Major packaging modernization - migrated from legacy setup.py to declarative pyproject.toml:

Why this matters: The package now follows current Python packaging best practices, and the documentation link will appear properly in the "Project links" section on PyPI.
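A minimal sketch of the declarative layout this migration implies (illustrative only; the field values below are placeholders, not the project's actual configuration):

```toml
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "torchsparsegradutils"
version = "0.2.1"
description = "Sparse gradient utilities for PyTorch"  # placeholder text

[project.urls]
# With a Documentation entry here, PyPI renders the link
# under the "Project links" section of the package page.
Documentation = "https://example.org/docs"  # placeholder URL
```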


🐳 CUDA & PyTorch Updates

Update to CUDA 13.0 and PyTorch 2.9.0 (#7c1ef7a)

Updated development containers and CI to support the latest versions:

Development Containers:

  • 📦 CUDA 12.8 → CUDA 13.0
  • 🔥 PyTorch stable now uses cu130 wheels
  • 🔄 Both Dockerfile.stable and Dockerfile.nightly updated

CI/CD:

  • ✅ Test matrix now includes PyTorch 2.5.0, 2.9.0, and nightly
  • ✅ Removed PyTorch 2.8.0 (replaced with 2.9.0)
  • ✅ Total test configurations: 7 (3 Python versions × 2 PyTorch stable + 1 nightly)

Documentation:

  • 📝 Updated README badges: "Tested 2.5 / 2.9 / nightly"
  • 📝 Updated benchmark documentation to reference PyTorch 2.9.0+cu130
  • 📝 Updated dev container documentation to mention CUDA 13.0

🔇 Deprecation Warning Improvements

Fix Deprecation Warnings and Typo (#79a4023)

Major improvement to developer experience - eliminated unwanted deprecation warnings:

1. Fixed Typo Throughout Codebase

  • calc_pariwise_coo_indices (typo) → calc_pairwise_coo_indices (corrected)
  • 🔄 Maintained backward compatibility with alias
  • 📝 Updated 15+ references in test files

2. Removed Module-Level Warning

  • Before: Warning triggered on module import (even when not using deprecated code)
  • After: Warning only triggered when deprecated class is instantiated
  • Benefit: Prevents double warnings and allows lazy loading to work properly

3. Implemented Lazy Imports

  • Uses __getattr__ in __init__.py for deprecated symbols
  • Warnings only appear when deprecated code is actually accessed
  • Normal usage produces zero warnings

4. Fixed Stack Levels

  • PairwiseVoxelEncoder: Now uses stacklevel=3 (points to user code, not library internals)
  • Function deprecations: Use stacklevel=2 (points to import line)
  • Warnings now show exactly where in user's code the deprecated call is made
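As a minimal illustration of the stacklevel convention (hypothetical functions, not the library's code):

```python
import warnings


def new_helper(x):
    # Replacement implementation.
    return x + 1


def deprecated_helper(x):
    # stacklevel=2 makes the warning point at deprecated_helper's caller
    # (the user's code), not at this warn() call inside the library.
    warnings.warn(
        "deprecated_helper is deprecated; use new_helper instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return new_helper(x)
```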

5. Consistent Deprecation Pattern

All deprecated items now follow the same clear pattern:

  • PairwiseVoxelEncoder → PairwiseEncoder
  • calc_pairwise_coo_indices → calc_pairwise_coo_indices_nd
  • calc_pariwise_coo_indices (typo) → calc_pairwise_coo_indices_nd

Test Results:

  • ✅ 817 tests passed
  • ✅ Only 3 warnings (all expected: 1 PyTorch CSR beta, 2 intentional deprecation tests)
  • Zero warnings during normal usage

📖 Documentation & Paper

Add JOSS Badge (#dca72e1)

  • Added Journal of Open Source Software (JOSS) publication badge to README

Fix JOSS Paper BibTeX Entry

  • Fixed: Removed invalid comments inside the pytorch BibTeX entry in paper/paper.bib
  • Issue: Comments with % inside BibTeX entries caused parser failures in JOSS submission
  • Impact: JOSS paper submission now passes BibTeX validation

🚀 CI/CD Improvements

Modernize PyPI Deployment Workflow

Updated .github/workflows/deployment-pypi.yml to use modern build tools:

  • Old (deprecated): python setup.py sdist bdist_wheel
  • New (PEP 517): python -m build
  • Why: Aligns with pyproject.toml migration and follows current Python packaging standards
  • Impact: More reliable package builds and PyPI deployments

🧪 Testing

All changes include comprehensive test coverage:

  • ✅ 817 tests passing
  • ✅ Zero warnings for normal usage
  • ✅ Proper deprecation warnings for legacy code
  • ✅ CI testing on PyTorch 2.5.0, 2.9.0, and nightly
  • ✅ Python 3.10, 3.11, 3.12 support verified

📦 Installation

pip install torchsparsegradutils==0.2.1

Or upgrade from 0.2.0:

pip install --upgrade torchsparsegradutils

🔄 Migration Guide

From 0.2.0 to 0.2.1

No breaking changes! All existing code continues to work.

However, if you see deprecation warnings, update your imports:

Recommended changes:

# Old (deprecated, will warn)
from torchsparsegradutils.encoders import PairwiseVoxelEncoder
from torchsparsegradutils.encoders import calc_pairwise_coo_indices

# New (recommended, no warnings)
from torchsparsegradutils.encoders import PairwiseEncoder
from torchsparsegradutils.encoders import calc_pairwise_coo_indices_nd

Fixed typo:

# Old (typo, will warn with special message)
from torchsparsegradutils.encoders import calc_pariwise_coo_indices

# Corrected (but still deprecated, will warn)
from torchsparsegradutils.encoders import calc_pairwise_coo_indices

# Best (recommended, no warnings)
from torchsparsegradutils.encoders import calc_pairwise_coo_indices_nd


v0.2.0

10 Sep 18:03
a958602


What's Changed

Full Changelog: v0.1.3...v0.2.0

torchsparsegradutils v0.2.0 Release Notes

TL;DR

Major expansion of sparse ops, probabilistic distributions, benchmarking, documentation, dtype support, and solver backends. Memory for sparse_mm improved (~15%), new SPD + conditioning utilities, JAX/CuPy enhancements, int32 index support (with caveats), broad doc + test overhaul, and deprecation of PairwiseVoxelEncoder in favor of the generalized PairwiseEncoder. Prepares project for JOSS submission.


Headline

This release delivers a substantial maturation of the library: richer sparse linear algebra primitives, enhanced probabilistic modeling, comprehensive benchmarking (random + SuiteSparse), expanded backend support (CuPy, JAX with float64), stronger numerical tooling, uniform documentation (NumPy style + examples), and publication-oriented artifacts (JOSS paper draft + final edits).

Breaking / Deprecations

  • PairwiseVoxelEncoder deprecated → use PairwiseEncoder (supports arbitrary spatial dimensions).
  • Extended parameterizations for SparseMultivariateNormal; review constructor usage if you relied on earlier covariance/precision forms.

New & Enhanced Features

  • SparseMultivariateNormal: now supports Cholesky (LL^T) for covariance AND precision.
  • SparseMultivariateNormalNative: demonstration of direct torch.sparse.mm with unbatched CSR.
  • make_spd_sparse: generate sparse symmetric positive definite matrices for tests & benchmarks.
  • rand_sparse / rand_sparse_tri: conditioning controls + non‑strict triangular option; improved numerical stability knobs.
  • Added well-conditioning (well_conditioned, min_diag_value) to random sparse generators.
  • Added non-strict (diagonal-including) triangular sparse matrix generators (COO + CSR).
  • Keyword passthrough (**kwargs) for CuPy & JAX solver wrappers.
  • Int32 COO / CSR index support (COO auto-upcasts in some PyTorch code paths—documented limitation).
  • JAX backend: enabled float64 execution.
  • Official PyTorch 2.5+ support (tests up to nightly 2.9).

Linear Algebra & Solvers

  • Expanded solver ecosystem: sparse_generic_solve with CG, MINRES, BICGSTAB; CuPy wrappers (CG, CGS, MINRES, GMRES, spsolve, spsolve_triangular); JAX wrappers (CG, BICGSTAB).
  • Issue #51 resolved across sparse_generic_solve, sparse_solve_c4t, sparse_solve_j4t.
  • Updated CuPy bindings (t2c_csr, t2c_coo) to handle shape correctly.
  • Added multi‑RHS handling for BICGSTAB (column-wise solve loop).
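As background for the Krylov methods and the column-wise multi-RHS loop mentioned above, a self-contained NumPy sketch (not the library's implementation; names are illustrative):

```python
import numpy as np


def cg(A, b, tol=1e-10, max_iter=1000):
    """Minimal conjugate gradient for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x                # initial residual
    p = r.copy()                 # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x


def solve_multi_rhs(solve_single, A, B):
    """Column-wise loop for multi-RHS solves, as described for BICGSTAB."""
    cols = [solve_single(A, B[:, j]) for j in range(B.shape[1])]
    return np.stack(cols, axis=1)
```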

Performance

  • ~15% memory footprint reduction for sparse_mm.
  • Additional refinements tied to issue #31 for improved efficiency.

Benchmarking Suite

  • Benchmarks cover:
    • sparse_mm vs torch.sparse.mm vs dense torch.mm (batched & unbatched)
    • sparse_triangular_solve vs torch.triangular_solve vs cupy.spsolve_triangular
    • sparse_generic_solve vs CuPy (multiple Krylov solvers) vs JAX vs torch.linalg.solve
  • Includes SuiteSparse matrices (e.g. Rothberg/cfd2) + random generators.
  • Automated result artifacts + visualization scripts.

Probabilistic & Statistical Validation

  • Distribution sampling validated via One-sample Hotelling T² (mean) and Nagao covariance tests.
  • Mean/covariance statistical utilities factored into utils.
  • Integration tests for gradient stability (documents CSR backward edge cases with PairwiseEncoder).
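For reference, the one-sample Hotelling T² statistic used for this kind of mean validation can be sketched as follows (a generic NumPy illustration, not the library's test code):

```python
import numpy as np


def hotelling_t2(X, mu0):
    """One-sample Hotelling T^2 for an (n, p) matrix of observations X."""
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)          # unbiased sample covariance
    d = xbar - mu0
    t2 = n * d @ np.linalg.solve(S, d)   # n (xbar - mu0)^T S^-1 (xbar - mu0)
    # Under H0, (n - p) / (p * (n - 1)) * T^2 follows F(p, n - p).
    f_stat = (n - p) / (p * (n - 1)) * t2
    return t2, f_stat
```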

Documentation

  • Full NumPy-style docstring unification with runnable examples.
  • Read the Docs configuration added (.readthedocs.yaml).
  • Doctest coverage for examples (test_doctests).
  • Expanded README with feature matrix, benchmarks, usage, known issues, citation.
  • JOSS paper draft + final formatting passes.

Testing & Tooling

  • Migration to PyTest (issue #43) + expanded parametrized tests & integration coverage.
  • Doctest execution integrated.
  • isort adoption for import normalization.
  • Devcontainer overhaul (stable + nightly variants, CUDA 12.8 toolchain, pre-installed extras, linting & formatting).
  • Added reproducible random sparse generation helpers (make_spd_sparse, conditioning toggles).

Quality & Stability

  • Enhanced gradient flow / memory stress tests for sparse backprop patterns.
  • Diagnostic utilities & documented limitations (index dtype behavior, CSR + PairwiseEncoder memory).
  • More explicit parameter validation in random sparse matrix factory functions.

Known Limitations (See README for details)

  • CSR backward with PairwiseEncoder may exhibit elevated memory usage in some integration scenarios.
  • COO int32 indices may be upcast to int64 by PyTorch internals (expected behavior for now).
  • LL^T precision parameterization can cause large gradients; LDL^T recommended.

Upgrade Guidance

  1. Replace any usage of PairwiseVoxelEncoder with PairwiseEncoder (check spatial dimension ordering & parameters).
  2. Revisit SparseMultivariateNormal calls if relying on prior parameter naming—new flexibility may alter validation paths.
  3. For solver wrappers, you may now pass backend-specific kwargs (tolerances, max iterations, etc.).
  4. If you depended on implicit float32-only JAX runs, ensure downstream code accommodates possible float64 now.
  5. Regenerate environments to pick up added docs + benchmarking extras.

Internal / Housekeeping

  • Version bump to 0.2.0.
  • Readme overhaul & citation block added.
  • Consolidated statistical test utilities.
  • Refined sparse generator API (conditioning + triangular variants).

Changelog Classification

  • Added: New parameterizations, native distribution variant, SPD generator, conditioning flags, non-strict triangular generation, float64 JAX, kwargs propagation, benchmarks, devcontainer variants
  • Changed: Memory optimization for sparse_mm, solver interfaces, docstring style, benchmarking harness
  • Deprecated: PairwiseVoxelEncoder
  • Fixed: Shape handling in CuPy bindings, issue #51, consistency in random generation
  • Docs: RTD config, JOSS paper, README expansion, doctests
  • Tooling: PyTest migration, isort, formatting, environment provisioning

References / Issues

  • PR: #60 (primary aggregation)
  • Issues: #31 (sparse_mm), #43 (PyTest migration), #51 (solver behavior), #61 (docstring standardization)


v0.1.3

22 Oct 08:55
139df7b


What's Changed


Full Changelog: v0.1.2...v0.1.3

v0.1.2

15 Aug 15:14


🐛 Fixes:

  1. SparseMultivariateNormal Initialisation Warning: Addressed a user warning emerging from the SparseMultivariateNormal class initialisation due to arg_constraints not being defined.
  2. Enhancements to PairwiseVoxelEncoder:
    • Module Inheritance: Adjusted PairwiseVoxelEncoder to inherit from torch.nn.Module, aligning it more closely with PyTorch's expected behavior for neural network modules.
    • Device Management: Implemented the _apply function in PairwiseVoxelEncoder. This change facilitates the movement of indices created during class initialization to a designated device using methods such as .to(device), .cpu(), or .cuda().
    • Device Property: Introduced the .device attribute for PairwiseVoxelEncoder. This property provides users with insights regarding the device to which the encoder's indices are assigned.
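A framework-free sketch of the _apply pattern described above (illustrative only; the real encoder hooks torch.nn.Module._apply so that .to(device), .cpu(), and .cuda() also move its cached indices):

```python
class EncoderSketch:
    """Minimal stand-in showing why overriding _apply moves cached state."""

    def __init__(self, indices):
        # Stand-in for the COO index tensor built at initialisation.
        self.indices = indices

    def _apply(self, fn):
        # In torch.nn.Module, fn is the transform that .to()/.cpu()/.cuda()
        # propagate to every parameter and buffer; applying it here keeps
        # the cached indices in sync with the rest of the module.
        self.indices = fn(self.indices)
        return self
```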

Note: v0.1.1 was abandoned due to a typo in the code (arg_contraints instead of arg_constraints)

v0.1.0

02 Aug 17:46
4bbddbb


Release Notes:

We are excited to announce the first release of TorchSparseGradUtils, a suite of efficient utilities that extend the functionality of PyTorch sparse tensor operations to support sparse gradient back-propagation from sparse input tensors.

Here are the key features included in this release:

PyTorch Matrix operations with sparse gradients support:

  1. Sparse-Dense matrix multiplication with batch support sparse_matmul.
  2. Sparse-Dense triangular linear solver with batch support sparse_triangular_solve
  3. Sparse-Dense generic linear solver sparse_generic_solve

Sparse Gaussian Distribution:

  1. SparseMultivariateNormal Distribution parameterised by either a sparse lower triangular covariance or precision matrix with reparameterised sampling

Sparse Encoder:

  1. PairwiseVoxelEncoder to encode relationships between pairs of voxels in local neighbourhoods of 3D volumetric images as sparse matrices.

Sparse utilities:

  1. Convert COO indices and values to CSR indices and values with convert_coo_to_csr_indices_values, with batch support.
  2. Convert COO sparse tensors to CSR sparse tensors with convert_coo_to_csr, with batch support.
  3. Equivalent of torch.block_diag() for sparse COO and CSR matrices, sparse_block_diag, plus a function to perform the reverse, sparse_block_diag_split.
  4. Equivalent of torch.eye for sparse COO and CSR matrices, sparse_eye.
  5. Equivalent of torch.stack() for sparse CSR tensors, stack_csr.
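The block-diagonal utility can be illustrated with a short NumPy sketch of the COO index bookkeeping involved (not the library's implementation; the interface here is invented for the example):

```python
import numpy as np


def coo_block_diag(blocks):
    """Concatenate square COO blocks along the diagonal.

    Each block is a tuple (idx, vals, n): a (2, nnz) index array,
    an (nnz,) value array, and the block's side length n.
    """
    rows, cols, vals, offset = [], [], [], 0
    for idx, v, n in blocks:
        # Shift each block's indices by the running diagonal offset.
        rows.append(idx[0] + offset)
        cols.append(idx[1] + offset)
        vals.append(v)
        offset += n
    indices = np.stack([np.concatenate(rows), np.concatenate(cols)])
    return indices, np.concatenate(vals), offset
```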

Generating Random Sparse Matrices:

  1. Equivalent of torch.rand() for sparse COO and CSR matrices rand_sparse
  2. rand_sparse_tri for generating random strictly triangular sparse matrices in either COO or CSR format.

Additional backbone solvers implemented in pure PyTorch:

  1. BICGSTAB (ported from pykrylov) bicgstab
  2. CG (ported from cornellius-gp/linear_operator) linear_cg
  3. LSMR (ported from pytorch-minimize) lsmr
  4. MINRES (ported from cornellius-gp/linear_operator) minres

CuPy and JAX solvers:

We also provide wrappers around CuPy sparse solvers and JAX sparse solvers, allowing linear systems involving PyTorch sparse matrices to be solved using a CuPy or JAX back-end:

  1. Sparse-Dense linear solver with CuPy back-end sparse_solve_c4t
  2. Sparse-Dense linear solver with JAX back-end sparse_solve_j4t

Installation:

This version can be installed using:

pip install torchsparsegradutils==0.1.0

We welcome any feedback, suggestions, and contributions via our issues page.

For more details about this release, you can refer to the Full Changelog.