2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "ITensorNetworks"
uuid = "2919e153-833c-4bdc-8836-1ea460a35fc7"
authors = ["Matthew Fishman <[email protected]>, Joseph Tindall <[email protected]> and contributors"]
version = "0.13.9"
version = "0.13.10"

[deps]
AbstractTrees = "1520ce14-60c1-5f80-bbc7-55ef81b5835c"
3 changes: 2 additions & 1 deletion src/formnetworks/bilinearformnetwork.jl
@@ -1,4 +1,5 @@
using ITensors: ITensor, Op, prime, sim
using ITensors.NDTensors: denseblocks

default_dual_site_index_map = prime
default_dual_link_index_map = sim
@@ -54,7 +55,7 @@ end

function itensor_identity_map(i_pairs::Vector)
return prod(i_pairs; init=ITensor(one(Bool))) do i_pair
return delta(Bool, dag(first(i_pair)), last(i_pair))
return denseblocks(delta(last(i_pair), dag(first(i_pair))))
mtfishman (Member) commented:
Actually, I merged prematurely. I think this should be:

Suggested change:

    -return denseblocks(delta(last(i_pair), dag(first(i_pair))))
    +return denseblocks(delta(Bool, last(i_pair), dag(first(i_pair))))

Otherwise, delta will construct a tensor with element type Float64, which will then promote the other tensors it contracts with to double precision. That is a problem on GPU, where it is generally better to use single precision. @JoeyT1994 can you make a new PR adding Bool back, or did you see some issue with that?
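The promotion concern can be sketched with plain Julia arrays standing in for ITensors (a hypothetical analogue, not the library's actual contraction path): multiplying single-precision data by a Float64 identity promotes the result to double precision, while a Bool identity leaves the element type alone.

```julia
using LinearAlgebra

# Float32 data, as one would use on GPU.
a = ones(Float32, 2, 2)

id64 = Matrix{Float64}(I, 2, 2)   # analogue of delta(i, j): element type defaults to Float64
idbool = Matrix{Bool}(I, 2, 2)    # analogue of delta(Bool, i, j)

eltype(a * id64)    # Float64: the data was promoted to double precision
eltype(a * idbool)  # Float32: single precision preserved
```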

JoeyT1994 (Collaborator, author) replied:
@mtfishman This introduces bugs that I don't understand. For instance

    g = named_grid((4,))
    s1, s2 = siteinds("S=1/2", g), siteinds("S=1/2", g)

    sind1, sind2 = only(s1[(1,)]), only(s2[(1,)])
    ITensorNetworks.itensor_identity_map([sind1 => prime(sind1), sind2 => prime(sind2)])

returns

[image: screenshot of the resulting tensor]

whilst the QN-conserving version

    # Same construction with conserved quantum numbers
    g = named_grid((4,))
    s1, s2 = siteinds("S=1/2", g; conserve_qns=true), siteinds("S=1/2", g; conserve_qns=true)

    sind1, sind2 = only(s1[(1,)]), only(s2[(1,)])
    ITensorNetworks.itensor_identity_map([sind1 => prime(sind1), sind2 => prime(sind2)])

returns

[image: screenshot of the resulting tensor]

JoeyT1994 (Collaborator, author) replied:

If there's a way to avoid these bugs while also avoiding the Float64 element type, I can make that fix in a new PR.

mtfishman (Member) replied:
I see, interesting. An alternative design could be:

function itensor_identity_map(elt::Type, i_pairs::Vector)
  return prod(i_pairs; init=ITensor(one(elt))) do i_pair
    return denseblocks(delta(elt, last(i_pair), dag(first(i_pair))))
  end
end

itensor_identity_map(i_pairs::Vector) = itensor_identity_map(Float64, i_pairs)

Then we can set the element type as needed.
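The two-method pattern proposed above (a typed entry point plus a Float64 default) can be illustrated in standalone Julia. This is a hypothetical analogue using Kronecker products of dense identity matrices in place of the ITensor delta contractions; `identity_map` and its arguments are invented for illustration.

```julia
using LinearAlgebra

# Typed entry point: the caller chooses the element type
# (e.g. Float32 on GPU), mirroring itensor_identity_map(elt, i_pairs).
identity_map(elt::Type, dims::Vector{Int}) =
    reduce(kron, (Matrix{elt}(I, d, d) for d in dims); init=fill(one(elt), 1, 1))

# Convenience method: defaults to Float64, mirroring the proposed fallback.
identity_map(dims::Vector{Int}) = identity_map(Float64, dims)

eltype(identity_map([2, 2]))           # Float64 by default
eltype(identity_map(Float32, [2, 2]))  # Float32 when requested
```

The point of the design is that downstream code can thread an element type through without touching the default behavior.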

mtfishman (Member) added:
(Though it would be nice to fix that bug anyway.)

end
end
