Here is a roadmap for removing the `TensorStorage` types (`EmptyStorage`, `Dense`, `Diag`, `BlockSparse`, `DiagBlockSparse`, `Combiner`) in favor of more traditional `AbstractArray` types (`UnallocatedZeros`, `Array`, `DiagonalArray`, `BlockSparseArray`, `CombinerArray`), as well as removing `Tensor` in favor of `NamedDimsArray`.
NDTensors reorganization
Follow-up to the `BlockSparseArrays` rewrite in #1272:
- Move some functionality to `SparseArrayInterface`, such as `TensorAlgebra.contract`.
- Clean up the tensor algebra code in `BlockSparseArray`, making use of the broadcasting and mapping functionality defined in `SparseArrayInterface`.
Follow-up to `SparseArrayInterface`/`SparseArrayDOKs` defined in #1270:
- `TensorAlgebra` overloads for `SparseArrayInterface`/`SparseArrayDOK`, such as `contract`.
- Use `SparseArrayDOK` as a backend for `BlockSparseArray` (maybe call it `BlockSparseArrayDOK`?). A minimal dictionary-of-keys sketch follows this list.
- Consider making a `BlockSparseArrayInterface` package to define an interface and generic functionality for block sparse arrays, analogous to `SparseArrayInterface`. (EDIT: Currently lives inside the `BlockSparseArray` library.)
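For reference, here is a minimal sketch of the dictionary-of-keys storage idea, using a hypothetical `DOKArray` type (not the actual `SparseArrayDOK` implementation from #1270): stored entries live in a `Dict` keyed by Cartesian index, and everything else reads as zero.

```julia
# Hypothetical DOKArray, illustrating the dictionary-of-keys storage format:
# stored entries live in a Dict keyed by CartesianIndex, all other entries
# read as zero.
struct DOKArray{T,N} <: AbstractArray{T,N}
  data::Dict{CartesianIndex{N},T}
  dims::NTuple{N,Int}
end
DOKArray{T}(dims::Vararg{Int,N}) where {T,N} =
  DOKArray{T,N}(Dict{CartesianIndex{N},T}(), dims)

Base.size(a::DOKArray) = a.dims
Base.getindex(a::DOKArray{T,N}, I::Vararg{Int,N}) where {T,N} =
  get(a.data, CartesianIndex(I), zero(T))
function Base.setindex!(a::DOKArray{T,N}, v, I::Vararg{Int,N}) where {T,N}
  a.data[CartesianIndex(I)] = v
  return a
end

a = DOKArray{Float64}(4, 4)
a[2, 3] = 1.5              # only this entry is stored
@show a[2, 3], a[1, 1]     # (1.5, 0.0)
```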
Follow-up to the reorganization started in #1268:
- Move the low-rank `qr`, `eigen`, and `svd` definitions to the `NDTensors.RankFactorization` module. Currently they are defined in `NamedDimsArrays.NamedDimsArraysTensorAlgebraExt`; those should be wrappers around the ones in `NDTensors.RankFactorization`.
- Split off the `SparseArray` type into an `NDTensors.SparseArrays` module (maybe come up with a different name, like `NDSparseArrays`, `GenericSparseArrays`, `AbstractSparseArrays`, etc.). Currently it is in `NDTensors.BlockSparseArrays`. Also rename it to `SparseArrayDOK` (for dictionary-of-keys) to distinguish it from other formats.
- Clean up `NDTensors/src/TensorAlgebra/src/fusedims.jl`.
- Remove `NDTensors.TensorAlgebra.BipartitionedPermutation` and figure out how to disambiguate between the partitioned permutation and named dimension interfaces. How much dimension name logic should go in `NDTensors.TensorAlgebra` vs. `NDTensors.NamedDimsArrays`?
- Create an `NDTensors.CombinerArrays` module. Move the `Combiner` and `CombinerArray` type definitions there.
- Create an `NDTensors.CombinerArrays.CombinerArraysTensorAlgebraExt` extension. Move the `Combiner` `contract` definition from `ITensorsNamedDimsArraysExt/src/combiner.jl` to `CombinerArraysTensorAlgebraExt` (which is just a simple wrapper around `TensorAlgebra.fusedims` and `TensorAlgebra.splitdims`).
- Dispatch the ITensors.jl definitions of `qr`, `eigen`, `svd`, `factorize`, `nullspace`, etc. on `typeof(tensor(::ITensor))`, so that for an `ITensor` wrapping a `NamedDimsArray` we can fully rewrite those functions using `NamedDimsArrays` and `TensorAlgebra`, where the matricization logic can be handled more elegantly with `fusedims` (see the sketch after this list).
- Get all of the same functionality working for an `ITensor` wrapping a `NamedDimsArray` wrapping a `BlockSparseArray`.
- Make sure all `NamedDimsArrays`-based code works on GPU.
- Make `Index` a subtype of `AbstractNamedInt` (or maybe `AbstractNamedUnitRange`?).
- Make `ITensor` a subtype of `AbstractNamedDimsArray`.
- Deprecate from `NDTensors.RankFactorization`: `Spectrum`, `eigs`, `entropy`, `truncerror`.
- Decide if `size` and `axes` of `AbstractNamedDimsArray` (including the `ITensor` type) should output named sizes and ranges.
- Define an `ImmutableArrays` submodule and have the `ITensor` type default to wrapping `ImmutableArray` data, with copy-on-write semantics. Also come up with an abstraction for arrays that can manage their own memory, such as `AbstractCOWArray` (for copy-on-write) or `AbstractMemoryManagedArray`, as well as `NamedDimsArray` versions, and make `ITensor` a subtype of `AbstractMemoryManagedNamedDimsArray` or something like that (perhaps a good use case for an `isnamed` trait to opt in to automatic permutation semantics for indexing, contraction, etc.).
- Use StaticPermutations.jl for the dimension permutation logic in `TensorAlgebra` and `NamedDimsArrays`.
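As a reference point for the matricization-based factorizations mentioned above, here is a sketch of the pattern that `fusedims`/`splitdims` are meant to encapsulate, written with plain `reshape` and `LinearAlgebra.svd` for the simple case where the fused dimensions are already adjacent and in order; `matricized_svd` is a hypothetical name, not part of `TensorAlgebra`.

```julia
using LinearAlgebra

# Hypothetical matricized_svd: fuse the first `nleft` dimensions into a row
# index and the rest into a column index, run a matrix SVD, then split the
# fused dimensions back. fusedims/splitdims are meant to handle the general
# (permuted, block sparse) case; plain reshape suffices for this ordered
# dense sketch.
function matricized_svd(a::AbstractArray, nleft::Int)
  left = size(a)[1:nleft]
  right = size(a)[nleft+1:end]
  m = reshape(a, prod(left), prod(right))   # "fusedims"
  F = svd(m)
  k = length(F.S)
  U = reshape(F.U, left..., k)              # "splitdims" on the left factor
  V = reshape(F.Vt, k, right...)            # and on the right factor
  return U, F.S, V
end

a = randn(2, 3, 4)
U, S, V = matricized_svd(a, 1)    # treat dimension 1 as rows, (2, 3) as columns
@show size(U), size(S), size(V)   # ((2, 2), (2,), (2, 3, 4))
```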
Testing
- Unit tests for `ITensors.ITensorsNamedDimsArraysExt`.
- Run the `ITensorsNamedDimsArraysExt` examples in the tests.
- Unit tests for the `NDTensors.RankFactorization` module.
- Unit tests for `NamedDimsArrays.NamedDimsArraysTensorAlgebraExt`: `fusedims`, `qr`, `eigen`, `svd`.
- Unit tests for `NDTensors.CombinerArrays` and `NDTensors.CombinerArrays.CombinerArraysTensorAlgebraExt`.
EmptyStorage
- Define `UnallocatedZeros` (in progress in [NDTensors] `UnallocatedArrays` and `UnspecifiedTypes` #1213); a minimal sketch of the idea follows this list.
- Use `UnallocatedZeros` as the default data type instead of `EmptyStorage` in ITensor constructors.
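A minimal sketch of the idea behind `UnallocatedZeros`, using a hypothetical `LazyZeros` type (this is an illustration of the concept, not the implementation from #1213): the array knows its element type and shape but allocates no data, so "empty" ITensor storage stays free until it is materialized.

```julia
# Hypothetical LazyZeros: behaves like zeros(T, dims...) for reading, but
# stores only the shape, so empty storage costs no allocation until it is
# materialized into a dense Array.
struct LazyZeros{T,N} <: AbstractArray{T,N}
  dims::NTuple{N,Int}
end
LazyZeros{T}(dims::Vararg{Int,N}) where {T,N} = LazyZeros{T,N}(dims)

Base.size(z::LazyZeros) = z.dims
Base.getindex(z::LazyZeros{T,N}, I::Vararg{Int,N}) where {T,N} = zero(T)

z = LazyZeros{Float64}(2, 3)
@show z[1, 2], sum(z)      # (0.0, 0.0), with no 2×3 buffer allocated
materialized = Array(z)    # allocate only when dense storage is really needed
```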
Diag
- Define `DiagonalArray` (sketched below).
- Tensor contraction, addition, QR, eigendecomposition, SVD.
- Use `DiagonalArray` as the default data type instead of `Diag` in ITensor constructors.
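A minimal sketch of the `DiagonalArray` idea, with a hypothetical `SimpleDiagonalArray` standing in for the real type: only the diagonal is stored, and off-diagonal reads return zero. The actual type would additionally need the specialized contraction, addition, and factorization rules listed above.

```julia
# Minimal N-dimensional diagonal array sketch: only the diagonal entries are
# stored; off-diagonal reads return zero.
struct SimpleDiagonalArray{T,N} <: AbstractArray{T,N}
  diag::Vector{T}
  dims::NTuple{N,Int}
end

Base.size(a::SimpleDiagonalArray) = a.dims
function Base.getindex(a::SimpleDiagonalArray{T,N}, I::Vararg{Int,N}) where {T,N}
  # On the diagonal all indices agree; anywhere else the entry is zero.
  return all(==(I[1]), I) ? a.diag[I[1]] : zero(T)
end

a = SimpleDiagonalArray([1.0, 2.0], (2, 2, 2))
@show a[1, 1, 1], a[2, 2, 2], a[1, 2, 1]   # (1.0, 2.0, 0.0)
```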
UniformDiag
- Replace with a `DiagonalArray` wrapping an `UnallocatedZeros` type.
BlockSparse
- Define `BlockSparseArray` (the basic storage layout is sketched below).
- Tensor contraction, addition, QR, eigendecomposition, SVD.
- Use `BlockSparseArray` as the default data type instead of `BlockSparse` in ITensor QN constructors.
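A sketch of the basic block sparse storage layout, with a hypothetical `SimpleBlockSparse` type (not the `BlockSparseArray` from #1272): a map from block positions to dense blocks, plus the block sizes along each dimension, so that blocks absent from the map are implicitly zero.

```julia
# Block sparse layout sketch (hypothetical, dense blocks stored in a Dict):
# axesblocks[d] gives the block sizes along dimension d, and only the blocks
# present in `blocks` are stored; missing blocks are implicitly zero.
struct SimpleBlockSparse{T,N}
  blocks::Dict{NTuple{N,Int},Array{T,N}}   # block position => dense block
  axesblocks::NTuple{N,Vector{Int}}        # block sizes per dimension
end

# A 5×5 matrix split into blocks of sizes (2, 3) × (2, 3), with only the two
# "diagonal" blocks stored (the typical QN-conserving structure):
b = SimpleBlockSparse(
  Dict((1, 1) => ones(2, 2), (2, 2) => 2 .* ones(3, 3)),
  ([2, 3], [2, 3]),
)
@show keys(b.blocks)   # blocks (1, 2) and (2, 1) are not stored
```

For the `DiagBlockSparse` replacement below, the stored blocks would be `DiagonalArray`s rather than dense `Array`s.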
DiagBlockSparse
- Use `BlockSparseArray` with blocks storing `DiagonalArray`s and make sure all tensor operations work.
- Replace `DiagBlockSparse` in ITensor QN constructors.
Combiner
- Not sure what to do with this, but a lot of the functionality will be replaced by the new `fusedims`/`matricize` functionality in `TensorAlgebra`/`BlockSparseArrays` and also by the new `FusionTensor` type. It will likely be superseded by `CombinerArray`, `FusionTree`, or something like that.
Simplify ITensor and Tensor constructors
- Make the ITensor constructors more uniform by using the style `tensor(storage::AbstractArray, inds::Tuple)`; avoid constructors like `DenseTensor`, `DiagTensor`, `BlockSparseTensor`, etc. (A sketch of this style follows this list.)
- Use `rand(i, j, k)`, `randn(i, j, k)`, `zeros(i, j, k)`, `fill(1.2, i, j, k)`, `diagonal(i, j, k)`, etc. instead of `randomITensor(i, j, k)`, `ITensor(i, j, k)`, `ITensor(1.2, i, j, k)`, `diagITensor(i, j, k)`. Maybe make these lazy/unallocated by default where appropriate, i.e. use `UnallocatedZeros` for `zeros` and `UnallocatedFill` for `fill`.
- Consider `randn(2, 2)(i, j)` as a shorthand for creating an ITensor with indices `(i, j)` wrapping an array. Alternatively, `setinds(randn(2, 2), i, j)` could be used.
- Remove the automatic conversion to floating point in the ITensor constructor.
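A sketch of the proposed constructor style, using hypothetical placeholder names (`SimpleTensor`, `tensor`, `setinds`, and `Symbol`s standing in for `Index` objects); the point is a single generic `tensor(storage, inds)` entry point instead of storage-specific constructors.

```julia
# Hypothetical SimpleTensor and tensor(storage, inds): one generic constructor
# pairing any AbstractArray storage with a tuple of indices, instead of
# storage-specific constructors like DenseTensor or BlockSparseTensor.
struct SimpleTensor{S<:AbstractArray,I<:Tuple}
  storage::S
  inds::I
end
tensor(storage::AbstractArray, inds::Tuple) = SimpleTensor(storage, inds)

# Hypothetical stand-in for the proposed setinds(randn(2, 2), i, j) style:
setinds(a::AbstractArray, inds...) = tensor(a, inds)

i, j = :i, :j                       # Symbols standing in for ITensor Index objects
t = tensor(randn(2, 2), (i, j))     # dense storage
z = tensor(zeros(2, 2), (i, j))     # could become lazy UnallocatedZeros storage
u = setinds(randn(2, 2), i, j)
@show t.inds, size(u.storage)       # ((:i, :j), (2, 2))
```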
Define TensorAlgebra submodule
- A `TensorAlgebra` submodule which defines `contract[!][!]`, `mul[!][!]`, `add[!][!]`, `permutedims[!][!]`, `fusedims`/`matricize`, `contract(::Algorithm"matricize", ...)`, truncated QR, eigendecomposition, SVD, etc., with generic fallback implementations for `AbstractArray` and maybe some specialized implementations for `Array`. (Started in [NDTensors] Start `TensorAlgebra` module #1265 and [TensorAlgebra] Matricized QR tensor decomposition #1266.) A sketch of a matricize-based contraction fallback follows this list.
- Use ErrorTypes.jl for catching errors and calling fallbacks in failed matrix decompositions.
- Move most of the matrix factorization logic from ITensors.jl into `TensorAlgebra`.
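A sketch of what a generic matricize-based contraction fallback for `AbstractArray` could look like; the function name and the "contract the trailing dimensions of `a` with the leading dimensions of `b`" convention are assumptions for illustration, not the planned `TensorAlgebra.contract` interface, which would also handle labeled dimensions, permutations, and specialized backends.

```julia
# Hypothetical matricize-based contraction fallback: contract the trailing
# `ncontract` dimensions of `a` with the leading `ncontract` dimensions of `b`
# by reshaping both into matrices and multiplying.
function contract_matricize(a::AbstractArray, b::AbstractArray, ncontract::Int)
  afree = size(a)[1:end-ncontract]
  shared = size(a)[end-ncontract+1:end]
  bfree = size(b)[ncontract+1:end]
  @assert shared == size(b)[1:ncontract]
  m = reshape(a, prod(afree), prod(shared)) * reshape(b, prod(shared), prod(bfree))
  return reshape(m, afree..., bfree...)
end

a = randn(2, 3, 4)
b = randn(4, 5)
c = contract_matricize(a, b, 1)   # contract the size-4 dimension
@show size(c)                     # (2, 3, 5)
```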
New Tensor semantics
- Make `Tensor` fully into a wrapper array type with named dimensions, with "smart indices" for contraction and addition similar to what the `ITensor` type has right now. Rename it to `NamedDimsArray`. (Started in [NDTensors] `NamedDimsArrays` module #1267.)
- Use `struct NamedAxis{Axis,Name} axis::Axis; name::Name; end` as a more generic version of `Index`, where `Index` has a `name` that stores the ID, tags, and prime level. (Started in [NDTensors] `NamedDimsArrays` module #1267.) A sketch of name-based dimension alignment follows this list.
- Replace `ITensors.val` for named indexing with dictionaries attached to dimensions/axes, like in AxisKeys.jl, DimensionalData.jl, NamedArrays.jl, etc.
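The `NamedAxis` struct quoted above, together with a toy named-dims wrapper illustrating the "smart indices" behavior (operations align dimensions by name rather than by position); the `SimpleNamedDimsArray` and `named_add` names are hypothetical and this is not the #1267 implementation.

```julia
# NamedAxis as proposed above, plus a toy named-dims wrapper showing the
# "smart index" behavior: operations align dimensions by name, not position.
struct NamedAxis{Axis,Name}
  axis::Axis
  name::Name
end
ax = NamedAxis(Base.OneTo(2), :i)   # a more generic version of Index

struct SimpleNamedDimsArray{A<:AbstractArray,N}
  parent::A
  names::NTuple{N,Symbol}
end

# Addition aligns b to the dimension-name order of a before adding.
function named_add(a::SimpleNamedDimsArray, b::SimpleNamedDimsArray)
  perm = map(n -> findfirst(==(n), collect(b.names)), a.names)
  return SimpleNamedDimsArray(a.parent + permutedims(b.parent, perm), a.names)
end

x = SimpleNamedDimsArray(randn(2, 3), (:i, :j))
y = SimpleNamedDimsArray(randn(3, 2), (:j, :i))   # same names, different order
z = named_add(x, y)                               # permutes y to (:i, :j) first
@show z.names, size(z.parent)                     # ((:i, :j), (2, 3))
```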