permute_ destroys data structure of UniTensor with shared blocks #724

Description

@manuschneider

If two `UniTensor`s share the same blocks and I use `permute_` on one of them, then the data structure of the other `UniTensor` is no longer consistent with its metadata.

Here is an example that looks innocent enough:

uT = cytnx.UniTensor.arange(6).reshape_(2,3).set_name("uT").set_rowrank(1)
print(uT) #shows Tensor with shape (2,3)
uT2 = uT.relabel(["a","b"]).permute_([1,0]).set_name("uT2")
uT2[0,1] = 9
uT.print_diagram() #has shape (2,3)
print(uT) #shows Tensor with shape (3,2)!!!
uT2.print_diagram() #has shape (3,2)
print(uT2) #shows Tensor with shape (3,2)

Output:

-------- start of print ---------
Tensor name: uT
Tensor type: Dense
is_diag    : False
contiguous : True

Total elem: 6
type  : Double (Float64)
cytnx device: CPU
Shape : (2,3)
[[0.00000e+00 1.00000e+00 2.00000e+00 ]
 [3.00000e+00 4.00000e+00 5.00000e+00 ]]




-----------------------
tensor Name : uT
tensor Rank : 2
block_form  : False
is_diag     : False
on device   : cytnx device: CPU
          ---------     
         /         \    
   0 ____| 2     3 |____ 1
         \         /
          ---------     
-------- start of print ---------
Tensor name: uT
Tensor type: Dense
is_diag    : False
contiguous : False

Total elem: 6
type  : Double (Float64)
cytnx device: CPU
Shape : (3,2)
[[0.00000e+00 9.00000e+00 ]
 [1.00000e+00 4.00000e+00 ]
 [2.00000e+00 5.00000e+00 ]]




-----------------------
tensor Name : uT2
tensor Rank : 2
block_form  : False
is_diag     : False
on device   : cytnx device: CPU
          ---------     
         /         \    
   b ____| 3     2 |____ a
         \         /
          ---------     
-------- start of print ---------
Tensor name: uT2
Tensor type: Dense
is_diag    : False
contiguous : False

Total elem: 6
type  : Double (Float64)
cytnx device: CPU
Shape : (3,2)
[[0.00000e+00 9.00000e+00 ]
 [1.00000e+00 4.00000e+00 ]
 [2.00000e+00 5.00000e+00 ]]

This leads to errors that are difficult to find.
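The failure mode can be sketched in plain Python (not cytnx; `Block` and `Wrapper` are hypothetical stand-ins): two wrappers share one block object, and an in-place permute mutates the shared block's layout metadata, so the wrapper that did *not* call `permute_` now disagrees with its own block.

```python
# Plain-Python sketch (not cytnx) of the failure mode: the wrapper
# that did NOT call permute_ ends up with a block whose shape no
# longer matches the wrapper's own metadata.
class Block:
    def __init__(self, data, shape):
        self.data = data      # flat storage, shared
        self.shape = shape    # layout metadata, also shared (the bug)

class Wrapper:
    """Stands in for a UniTensor: per-object labels, shared block."""
    def __init__(self, block, labels):
        self.block = block
        self.labels = labels

    def permute_(self, order):
        # In-place permute mutates the SHARED block's metadata.
        self.block.shape = tuple(self.block.shape[i] for i in order)
        self.labels = [self.labels[i] for i in order]
        return self

shared = Block(list(range(6)), (2, 3))
uT = Wrapper(shared, ["0", "1"])
uT2 = Wrapper(shared, ["a", "b"])

uT2.permute_([1, 0])

# uT's labels still describe a (2,3) layout, but its block says (3,2):
print(uT.labels, uT.block.shape)
```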

I suggest that `UniTensor.permute_()` call `Tensor.permute()` internally on each block. The Tensor's metadata is copied in that case while the data in memory stays untouched, so the overhead is small.

Labels: convo, need further discussion, suggestion