🌵 This has now been incorporated into cotengra — see the high-level functionality, called with `implementation='cotengra'`, and the actual two-term implementation here. There is also a PR for `numpy.einsum` with the `optimize` kwarg here 🌵
This repository provides an `einsum` (and `tensordot`) function implemented via batched matrix multiplication.
- This can be much faster than the raw `numpy.einsum` function, especially for large and high-dimensional contractions.
- It can also be used to enable `einsum` for any backend that provides only `transpose`, `reshape` and `matmul`.
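For intuition, here is a minimal numpy sketch (not the repository's code) showing how a contraction over one shared index can be rewritten as a single `matmul` over reshaped operands, which is the core idea:

```python
import numpy as np

# 'abc,cd->abd' contracts index c; flattening the kept indices of A into
# one dimension lets a plain matmul (the backend's optimized GEMM) do the work.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 3, 4))
B = rng.normal(size=(4, 5))

via_einsum = np.einsum("abc,cd->abd", A, B)
via_matmul = (A.reshape(6, 4) @ B).reshape(2, 3, 5)

assert np.allclose(via_einsum, via_matmul)
```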
The implementation is achieved by grouping indices according to the following classification:
- Summed indices (those appearing in only one input and not the output) are trivially removed.
- A and B are then transposed and reshaped for a batched matrix multiplication.
- The output is reshaped and transposed back into the requested index order.
Each of these steps only occurs if necessary, and there are slight specializations for both pure multiplication and contractions with no batch indices.
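The steps above can be sketched as follows. This is a minimal, unoptimized illustration (not the repository's implementation), assuming no repeated and no summed-only indices; `einsum2_bmm` is a hypothetical helper name:

```python
import numpy as np

def einsum2_bmm(eq, a, b):
    """Sketch of a two-term einsum via transpose, reshape and batched matmul.

    Assumes no repeated indices within a term and no summed-only indices
    (every input index appears in the other term or in the output).
    """
    lhs, out = eq.split("->")
    ta, tb = lhs.split(",")
    # Classify the indices of the two terms.
    batch = [i for i in ta if i in tb and i in out]      # shared, kept
    con = [i for i in ta if i in tb and i not in out]    # contracted away
    keep_a = [i for i in ta if i not in tb]              # unique to a
    keep_b = [i for i in tb if i not in ta]              # unique to b
    size = {i: s for t, x in ((ta, a), (tb, b)) for i, s in zip(t, x.shape)}
    prod = lambda ix: int(np.prod([size[i] for i in ix], initial=1))
    # Transpose and reshape both operands into 3D (batch, left, right) form.
    A = a.transpose([ta.index(i) for i in batch + keep_a + con])
    A = A.reshape(prod(batch), prod(keep_a), prod(con))
    B = b.transpose([tb.index(i) for i in batch + con + keep_b])
    B = B.reshape(prod(batch), prod(con), prod(keep_b))
    # Batched matmul, then unflatten and reorder into the output indices.
    C = (A @ B).reshape([size[i] for i in batch + keep_a + keep_b])
    cur = batch + keep_a + keep_b
    return C.transpose([cur.index(i) for i in out])
```

For example, `einsum2_bmm("abc,acd->abd", a, b)` treats `a` as a batch index, `c` as contracted, and `b`, `d` as kept, matching `np.einsum` on the same inputs.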
Notes:
- It currently only supports 1 or 2 terms; a library such as `opt_einsum` or `cotengra` should be used to dispatch many-term contractions to a pairwise ordering in conjunction with this `einsum_bmm`.
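As a sketch of that dispatch (plain numpy here, with a hand-chosen pairwise order rather than one found by `opt_einsum` or `cotengra`), a three-term contraction splits into two two-term contractions, each of which a batch-matmul einsum can evaluate:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
y = rng.normal(size=(4, 5))
z = rng.normal(size=(5, 6))

# One three-term contraction...
direct = np.einsum("ij,jk,kl->il", x, y, z)

# ...dispatched as a sequence of pairwise contractions.
step1 = np.einsum("ij,jk->ik", x, y)
pairwise = np.einsum("ik,kl->il", step1, z)

assert np.allclose(direct, pairwise)
```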
