Vectorize fusiontree manipulations #261
Conversation
I think a first general design remark is that now all the manipulations on the

```julia
trees_src = fusiontrees(fs_src)
trees_dst = fusiontrees(fs_dst)
indexmap = Dict(f => ind for (ind, f) in enumerate(trees_dst))
```

If you now start composing elementary operations to make more complex ones (e.g.

It might thus be a better idea to store the blocksectors and associated fusion tree pairs (and their index mapping) within the

Of course, this way, the

Happy to discuss when you have time.
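To make the composition point concrete, here is a tiny stand-alone sketch (plain Julia with SparseArrays, not TensorKit's actual API; the names `step1`/`step2` and the toy size are made up): once every elementary manipulation is stored as a coefficient matrix over a fixed ordering of the trees, composing manipulations is just a matrix product, and the index map only has to be built once per ordering.

```julia
using SparseArrays

n = 4                      # toy number of fusion trees in a block
step1 = sprand(n, n, 0.5)  # coefficients of a first elementary manipulation
step2 = sprand(n, n, 0.5)  # coefficients of a second elementary manipulation

composed = step2 * step1   # composing the two steps = multiplying their matrices
coeffs = rand(n)           # coefficients of a source tensor over the same trees
@assert composed * coeffs ≈ step2 * (step1 * coeffs)
```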
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##           master     #261      +/-   ##
==========================================
- Coverage   82.85%   82.73%   -0.13%
==========================================
  Files          44       45       +1
  Lines        5757     6139     +382
==========================================
+ Hits         4770     5079     +309
- Misses        987     1060      +73
```

☔ View full report in Codecov by Sentry.
So I compared the performance with TensorKit v0.14.9
As a small update here, I think I further optimized the implementations slightly, however there definitely is still some room for improvement, in particular the

I'll try and run some profilers next week to see what is now actually the bottleneck, to verify what needs further improvements. In particular, I can try and reduce some allocations by trying to re-use a bunch of the unitary matrices, or we can try and cache the
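For the caching idea, a minimal sketch of what this could look like, using LRUCache.jl (which the package already uses for its global caches, see the stack traces further down); the cache name, key, and placeholder computation are invented purely for illustration:

```julia
using LRUCache
using LinearAlgebra  # for the identity matrix used as a placeholder below

# toy cache: in practice the key would encode the sectors/trees the unitary belongs to
const UNITARY_CACHE = LRU{Any, Matrix{Float64}}(maxsize = 100)

function cached_unitary(key, n::Int)
    return get!(UNITARY_CACHE, key) do
        # placeholder for the actual (expensive) construction of the unitary
        Matrix{Float64}(I, n, n)
    end
end

cached_unitary((:example, 3), 3)  # computed once; later calls with the same key hit the cache
```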
BTW does @ogauthe have an MWE of what's causing the slowdown that could be shared, perhaps to run as part of a perf regression testsuite 👀?
```julia
using TensorKit
using TensorKit: treepermuter
using Profile  # Profile.clear(), Profile.init and @profile are used below
Vsrc = (Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1) ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1) ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1)' ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1)' ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (1, 1) => 1)) ← (Rep[ℤ₂ × SU₂]((0, 0) => 1, (1, 1) => 1) ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1) ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1) ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1)' ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1)')
Vdst = (Rep[ℤ₂ × SU₂]((0, 0) => 1, (1, 1) => 1) ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (1, 1) => 1)') ← (Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1)' ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1) ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1)' ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1) ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1) ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1)' ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1) ⊗ Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1)')
p = ((5, 6), (1, 7, 2, 8, 3, 9, 4, 10))
TensorKit.empty_globalcaches!()
@time treepermuter(Vdst, Vsrc, p);
Profile.clear()
TensorKit.empty_globalcaches!()
Profile.init(n = 10^7, delay = 0.01)
@profile treepermuter(Vdst, Vsrc, p);
```

Here's a relevant case I took from the code at some point. It's really just a matter of taking one of the PEPS contractions and looking for the dominant treepermuter cost, which you can typically identify by adding
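One way this snippet could be folded into a perf-regression check (a sketch only, reusing `Vdst`, `Vsrc` and `p` from above; the use of BenchmarkTools and the cold/warm split are a suggestion, not part of this PR):

```julia
using BenchmarkTools
using TensorKit
using TensorKit: treepermuter

TensorKit.empty_globalcaches!()                    # measure the cold, first-run cost
t_cold = @elapsed treepermuter(Vdst, Vsrc, p)
t_warm = @belapsed treepermuter($Vdst, $Vsrc, $p)  # cached / warm cost
@show t_cold t_warm
```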
@kshyatt here is a slightly larger example. @lkdvos' minimal case should appear at some point. The timing was done using this branch as of August 15th.

```julia
using TensorOperations: @tensor
using TensorKit
function proj_braket(rD)
fs = map(s -> sign(frobeniusschur(s)), sectors(rD))
!(all(fs .== 1) || all(fs .== -1)) && return error("Cannot handle quaternionic irreps")
rD2 = fuse(rD, rD')
isoD2 = isomorphism(rD2, rD ⊗ rD')
permuted_iso = flip(permute(isoD2', ((3,), (2, 1))), (1,))
proj2_even_fullspace = isoD2 + permuted_iso
proj2_odd_fullspace = isoD2 - permuted_iso
proj2_even = rightnull(proj2_odd_fullspace; alg=SVD(), atol=1e-12)
proj2_odd = rightnull(proj2_even_fullspace; alg=SVD(), atol=1e-12)
return proj2_even, proj2_odd
end
function project_Z2(rd, rD)
tket = randn(rd ← rD ⊗ rD ⊗ rD' ⊗ rD')
pd_even_oddd = proj_braket(rd)
projN = map(t -> permute(t, ((2, 3), (1,))), proj_braket(rD))
projE = map(t -> permute(t, ((2, 3), (1,))), proj_braket(rD))
projS = map(t -> permute(t, ((2, 3), (1,))), proj_braket(rD'))
projW = map(t -> permute(t, ((2, 3), (1,))), proj_braket(rD'))
t_double = permute(tket' ⊗ tket, ((5, 6), (1, 7, 2, 8, 3, 9, 4, 10)))
for i_d in 1:2
projectedd = permute(pd_even_oddd[i_d] * t_double, ((1, 4, 5, 6, 7, 8, 9), (2, 3)))
for iN in 1:2
@tensor projectedN[d2, D2N, ketW, braW, ketS, braS; ketE, braE] :=
projectedd[d2, ketE, braE, ketS, braS, ketW, braW; ketN, braN] *
projN[iN][ketN, braN, D2N]
for iE in 1:2
@tensor projectedNE[d2, D2N, D2E, ketW, braW; ketS, braS] :=
projectedN[d2, D2N, ketS, braS, ketW, braW; ketE, braE] *
projE[iE][ketE, braE, D2E]
for iS in 1:2
iW = mod1(i_d + iN + iE + iS + 1, 2)
@tensor projected[d2, D2N, D2E, D2S, D2W] :=
projectedNE[d2, D2N, D2E, ketW, braW; ketS, braS] *
projS[iS][ketS, braS, D2S] *
projW[iW][ketW, braW, D2W]
end
end
end
end
return 1
end
rd = Rep[ℤ₂ × SU₂]((0, 0) => 1, (1, 1) => 1)
rD7 = Rep[ℤ₂ × SU₂]((0, 0)=>1, (0, 1)=>1, (1, 1)=>1)
rD11 = Rep[ℤ₂ × SU₂]((0, 0) => 2, (0, 1) => 1, (1, 1) => 2)
rD16 = Rep[ℤ₂ × SU₂]((0, 0) => 2, (0, 1) => 1, (1, 1) => 2, (0, 2) => 1)
@time project_Z2(rd, rD7)
@time project_Z2(rd, rD7)
@time project_Z2(rd, rD11)
@time project_Z2(rd, rD11)
# 1st run D = 7
@time project_Z2(rd, rD7)
# 400.344116 seconds (1.39 G allocations: 465.458 GiB, 7.96% gc time, 24.47% compilation time)
# 2nd run D = 7
@time project_Z2(rd, rD7)
#5.408577 seconds (20.24 M allocations: 2.738 GiB, 5.07% gc time, 5.34% compilation time)
# 1st run D = 11 (*after D = 7*)
@time project_Z2(rd, rD11)
# 319.394331 seconds (1.36 G allocations: 482.086 GiB, 8.63% gc time, 0.00% compilation time)
# 2nd run D = 11
@time project_Z2(rd, rD11)
# 7.989900 seconds (20.06 M allocations: 4.710 GiB, 4.31% gc time)
```
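A side note on interpreting these numbers: the first D = 7 run includes ~24% compilation time, but the first D = 11 run has no compilation at all and is still two orders of magnitude slower than the second, so the dominant first-run cost is building (and caching) the tree transformers. To re-measure that construction cost without restarting Julia, the global caches can be cleared between timings (sketch, reusing `project_Z2`, `rd` and `rD7` from the script above):

```julia
@time project_Z2(rd, rD7)          # warm-up: compile and fill the caches
TensorKit.empty_globalcaches!()    # drop the cached tree transformers, keep compiled code
@time project_Z2(rd, rD7)          # cost of rebuilding the transformers
@time project_Z2(rd, rD7)          # fully cached run
```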
I have been using this branch for expensive computations with

```julia
using TensorKit
using SUNRepresentations: SU3Irrep
vd = Vect[SU3Irrep]((1, 0, 0) => 1)
h = TensorMap([0.0, 2.0], vd ⊗ vd ← vd ⊗ vd)
permute(h, (1, 2, 3), (4,))
```

```
ERROR: LoadError: KeyError: key (FusionTree{Irrep[SU₃]}(((1, 0, 0), (1, 0, 0), (1, 1, 0)), (1, 0, 0), (false, false, true), ((1, 1, 0),), (1, 1)), FusionTree{Irrep[SU₃]}(((1, 0, 0),), (1, 0, 0), (false,), (), ())) not found
Stacktrace:
[1] getindex(h::Dict{Tuple{Tuple{SU3Irrep, SU3Irrep}, Tuple{Int64, Int64}}, Int64}, key::Tuple{FusionTree{SU3Irrep, 3, 1, 2}, FusionTree{SU3Irrep, 1, 0, 0}})
@ Base ./dict.jl:498
[2] bendright(src::TensorKit.FusionTreeBlock{SU3Irrep, 4, 0, Tuple{FusionTree{SU3Irrep, 4, 2, 3}, FusionTree{SU3Irrep, 0, 0, 0}}})
@ TensorKit ~/.julia/packages/TensorKit/Y7XJ1/src/fusiontrees/fusiontreeblocks.jl:138
[3] macro expansion
@ ~/.julia/packages/TensorKit/Y7XJ1/src/fusiontrees/fusiontreeblocks.jl:397 [inlined]
[4] repartition
@ ~/.julia/packages/TensorKit/Y7XJ1/src/fusiontrees/fusiontreeblocks.jl:379 [inlined]
[5] repartition
@ ~/.julia/packages/TensorKit/Y7XJ1/src/fusiontrees/fusiontreeblocks.jl:364 [inlined]
[6] __fsbraid(key::Tuple{TensorKit.FusionTreeBlock{SU3Irrep, 2, 2, Tuple{FusionTree{SU3Irrep, 2, 0, 1}, FusionTree{SU3Irrep, 2, 0, 1}}}, Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}, Tuple{Tuple{Int64, Int64}, Tuple{Int64, Int64}}})
@ TensorKit ~/.julia/packages/TensorKit/Y7XJ1/src/fusiontrees/fusiontreeblocks.jl:658
[7] #54
@ ~/.julia/packages/TensorKit/Y7XJ1/src/auxiliary/caches.jl:108 [inlined]
[8] get!(default::TensorKit.var"#54#59"{Tuple{TensorKit.FusionTreeBlock{SU3Irrep, 2, 2, Tuple{FusionTree{SU3Irrep, 2, 0, 1}, FusionTree{SU3Irrep, 2, 0, 1}}}, Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}, Tuple{Tuple{Int64, Int64}, Tuple{Int64, Int64}}}}, lru::LRUCache.LRU{Any, Any}, key::Tuple{TensorKit.FusionTreeBlock{SU3Irrep, 2, 2, Tuple{FusionTree{SU3Irrep, 2, 0, 1}, FusionTree{SU3Irrep, 2, 0, 1}}}, Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}, Tuple{Tuple{Int64, Int64}, Tuple{Int64, Int64}}})
@ LRUCache ~/.julia/packages/LRUCache/ZH7qB/src/LRUCache.jl:169
[9] _fsbraid(key::Tuple{TensorKit.FusionTreeBlock{SU3Irrep, 2, 2, Tuple{FusionTree{SU3Irrep, 2, 0, 1}, FusionTree{SU3Irrep, 2, 0, 1}}}, Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}, Tuple{Tuple{Int64, Int64}, Tuple{Int64, Int64}}}, ::TensorKit.GlobalLRUCache)
@ TensorKit ./reduce.jl:0
[10] _fsbraid
@ ./none:0 [inlined]
[11] braid
@ ~/.julia/packages/TensorKit/Y7XJ1/src/fusiontrees/fusiontreeblocks.jl:618 [inlined]
[12] permute
@ ~/.julia/packages/TensorKit/Y7XJ1/src/fusiontrees/fusiontreeblocks.jl:676 [inlined]
[13] fusiontreetransform
@ ~/.julia/packages/TensorKit/Y7XJ1/src/tensors/treetransformers.jl:228 [inlined]
[14] TensorKit.GenericTreeTransformer(transform::TensorKit.var"#fusiontreetransform#259"{Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}}, p::Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}, Vdst::TensorMapSpace{GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 3, 1}, Vsrc::TensorMapSpace{GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 2, 2})
@ TensorKit ~/.julia/packages/TensorKit/Y7XJ1/src/tensors/treetransformers.jl:130
[15] TensorKit.TreeTransformer(transform::Function, p::Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}, Vdst::TensorMapSpace{GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 3, 1}, Vsrc::TensorMapSpace{GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 2, 2})
@ TensorKit ~/.julia/packages/TensorKit/Y7XJ1/src/tensors/treetransformers.jl:199
[16] _treepermuter
@ ~/.julia/packages/TensorKit/Y7XJ1/src/tensors/treetransformers.jl:229 [inlined]
[17] #257
@ ~/.julia/packages/TensorKit/Y7XJ1/src/auxiliary/caches.jl:108 [inlined]
[18] get!(default::TensorKit.var"#257#262"{TensorMapSpace{GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 3, 1}, TensorMapSpace{GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 2, 2}, Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}}, lru::LRUCache.LRU{Any, Any}, key::Tuple{TensorMapSpace{GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 3, 1}, TensorMapSpace{GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 2, 2}, Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}})
@ LRUCache ~/.julia/packages/LRUCache/ZH7qB/src/LRUCache.jl:169
[19] treepermuter(Vdst::TensorMapSpace{GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 3, 1}, Vsrc::TensorMapSpace{GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 2, 2}, p::Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}, ::TensorKit.GlobalLRUCache)
@ TensorKit ./reduce.jl:0
[20] treepermuter
@ ./none:0 [inlined]
[21] treepermuter
@ ~/.julia/packages/TensorKit/Y7XJ1/src/tensors/treetransformers.jl:224 [inlined]
[22] add_permute!
@ ~/.julia/packages/TensorKit/Y7XJ1/src/tensors/indexmanipulations.jl:405 [inlined]
[23] permute!
@ ~/.julia/packages/TensorKit/Y7XJ1/src/tensors/indexmanipulations.jl:38 [inlined]
[24] permute(t::TensorMap{Float64, GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 2, 2, Vector{Float64}}, ::Tuple{Tuple{Int64, Int64, Int64}, Tuple{Int64}}; copy::Bool)
@ TensorKit ~/.julia/packages/TensorKit/Y7XJ1/src/tensors/indexmanipulations.jl:77
[25] permute
@ ~/.julia/packages/TensorKit/Y7XJ1/src/tensors/indexmanipulations.jl:65 [inlined]
[26] permute(t::TensorMap{Float64, GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 2, 2, Vector{Float64}}, p1::Tuple{Int64, Int64, Int64}, p2::Tuple{Int64}; copy::Bool)
@ TensorKit ./deprecated.jl:105
[27] permute(t::TensorMap{Float64, GradedSpace{SU3Irrep, TensorKit.SortedVectorDict{SU3Irrep, Int64}}, 2, 2, Vector{Float64}}, p1::Tuple{Int64, Int64, Int64}, p2::Tuple{Int64})
@ TensorKit ./deprecated.jl:103
[28] top-level scope
@ ~/Documents/tensorkit/ThermalPEPS.jl/nogit.draft.jl:9
in expression starting at /home/ogauthe/Documents/tensorkit/ThermalPEPS.jl/nogit.draft.jl:9
```

There is no error with the same code with TensorKit release
Ah, these great self-dual sectors... Thanks for the report and MWE, I'll see if I can find something out, although it might not be for this week.
I think at least now this should be working, and it is back in line with the latest main branch, so all of the latest and greatest new features should be usable now :)
```julia
Base.convert(A::Type{<:AbstractArray}, f::FusionTree) = convert(A, fusiontensor(f))
# TODO: is this piracy?
Base.convert(A::Type{<:AbstractArray}, (f₁, f₂)::FusionTreePair) =
    convert(A, fusiontensor((f₁, f₂)))
```
I would say: yes this is type piracy
```julia
for f₁ in fusiontrees(uncoupled[1], c, isdual[1]),
    f₂ in fusiontrees(uncoupled[2], c, isdual[2])
```
This makes f₂ the fastest changing "index" if I am correct. Is that the order we want the trees to be in? Does it matter?
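For reference, a quick plain-Julia check of that claim (toy ranges standing in for the tree iterators): in a comma-separated `for`, the last iterator is the innermost loop, so `f₂` does indeed change fastest.

```julia
pairs = Tuple{Int, Int}[]
for f₁ in 1:2, f₂ in 1:3   # equivalent to nesting: for f₁ in 1:2; for f₂ in 1:3; ...
    push!(pairs, (f₁, f₂))
end
pairs  # [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)] — f₂ varies fastest
```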
```julia
trees_src = fusiontrees(fs_src)
isempty(trees_src) || push!(fblocks, fs_src)
```
Since `isempty` on `FusionBlock` was defined:

```diff
- trees_src = fusiontrees(fs_src)
- isempty(trees_src) || push!(fblocks, fs_src)
+ isempty(fs_src) || push!(fblocks, fs_src)
```
```julia
r = _braiding_factor(f₁, f₂, b.adjoint)
isnothing(r) || @inbounds for i in axes(data, 1), j in axes(data, 2)
    data[i, j, j, i] = r
end
```
I don't like the || short-circuit construction too much for such a complicated expression
```suggestion
end
if !isnothing(r)
    for @inbounds for i in axes(data, 1), j in axes(data, 2)
        data[i, j, j, i] = r
    end
end
```
Something went wrong with that suggestion.
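Presumably the intended suggestion was the short-circuit rewritten as an explicit `if`, along these lines (a sketch of the intended replacement, reusing `r` and `data` from the hunk above):

```julia
if !isnothing(r)
    @inbounds for i in axes(data, 1), j in axes(data, 2)
        data[i, j, j, i] = r
    end
end
```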
```julia
    throw(ArgumentError("invalid fusion vertex label $μ"))
end
f₀ = FusionTree{I}((f₁.coupled, f₂.coupled), c, (false, false), (), (μ,))
f, coeff = first(insertat(f₀, 1, f₁)) # takes fast path, single output
```
```diff
- f, coeff = first(insertat(f₀, 1, f₁)) # takes fast path, single output
+ f, coeff = only(insertat(f₀, 1, f₁)) # takes fast path, single output
```
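The practical difference this buys (plain Julia illustration, not code from the PR): `first` silently ignores any extra outputs, whereas `only` turns a violated single-output assumption into an error.

```julia
xs = [(1, 1.0)]
first(xs)  # (1, 1.0) — would also succeed, silently, if xs had more elements
only(xs)   # (1, 1.0) — identical here, but
# only([(1, 1.0), (2, 2.0)]) throws an ArgumentError instead of picking the first element
```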
```julia
end
f₀ = FusionTree{I}((f₁.coupled, f₂.coupled), c, (false, false), (), (μ,))
f, coeff = first(insertat(f₀, 1, f₁)) # takes fast path, single output
@assert coeff == one(coeff)
```
Not sure if we need to keep this.
```julia
    return fusiontreedict(I)(f₁ => Fsymbol(c, c, c, c, c, c)[1, 1, 1, 1])
end

# flip a duality flag of a fusion tree
```
`flip` really needs some comments or a docstring to remember the logic. I'll try to come up with something after rereading the PR that introduced it.
Also, I don't think this is a very basic manipulation, as it involves both duality and braiding (twists), so I'm not sure it really belongs here.
Ok, I am going to leave this as a TODO and think about it later. I forgot what the original constraints and design considerations were for flip.
```
╭ ⋯ ┴╮ | ╭ ⋯ ╯
╭─┴─╮ | | ╭─┴─╮
╭─┴─╮ | ╰─╯ ╭─┴─╮ |
```
That's a nice drawing.
```julia
uncoupled_dst = (
    TupleTools.front(src.uncoupled[1]),
    (src.uncoupled[2]..., dual(src.uncoupled[1][end])),
)
isdual_dst = (
    TupleTools.front(src.isdual[1]),
    (src.isdual[2]..., !(src.isdual[1][end])),
)
I = sectortype(src)
N₁ = numout(src)
N₂ = numin(src)
@assert N₁ > 0
```
```diff
- uncoupled_dst = (
-     TupleTools.front(src.uncoupled[1]),
-     (src.uncoupled[2]..., dual(src.uncoupled[1][end])),
- )
- isdual_dst = (
-     TupleTools.front(src.isdual[1]),
-     (src.isdual[2]..., !(src.isdual[1][end])),
- )
- I = sectortype(src)
- N₁ = numout(src)
- N₂ = numin(src)
- @assert N₁ > 0
+ I = sectortype(src)
+ N₁ = numout(src)
+ N₂ = numin(src)
+ @assert N₁ > 0
+ uncoupled_dst = (
+     TupleTools.front(src.uncoupled[1]),
+     (src.uncoupled[2]..., dual(src.uncoupled[1][N₁])),
+ )
+ isdual_dst = (
+     TupleTools.front(src.isdual[1]),
+     (src.isdual[2]..., !(src.isdual[1][N₁])),
+ )
```
```julia
fc = FusionTree((c1, c2), c, (!isduala, false), (), (μ,))
fr_coeffs = insertat(fc, 2, f₂)
for (fl′, coeff1) in insertat(fc, 2, f₁)
    N₁ > 1 && !isone(fl′.innerlines[1]) && continue
```
```diff
- N₁ > 1 && !isone(fl′.innerlines[1]) && continue
+ N₁ > 1 && !isunit(fl′.innerlines[1]) && continue
```
What's the reason why foldleft for the fusion tree block can't use the foldright results?
I don't think there is a fundamental reason why it couldn't, this more or less gradually came about.
I started these changes for performance reasons (when it wasn't vectorized yet), and found some cases where the bookkeeping that is required for mapping one to the other was not completely insignificant (but also not that bad), so I just changed it.
However, looking at it now, doing it on a full block anyways requires some non-trivial code since you either have to construct all the fusiontrees from scratch again, or swap the pairs and make sure they are correctly sorted, and then make sure the permutation is correctly passed on to the coefficient matrix.
TL;DR: this was easier in my head and I didn't think too hard about trying to reduce "code duplication", because it is also slightly faster.
Given the speed-up of the permutations, the bottleneck has now migrated to the actual computation of the treetransformers (mostly for tensors with many legs, again).
@ogauthe has reported cases where the first run takes ~2000 seconds, while the second run is ~5 seconds.
While I haven't really benchmarked or profiled myself yet (shame on me!), I immediately remembered that we are not really composing the fusion tree manipulations in a very optimal way: on the one hand we are effectively manually writing out the matrix multiplications, and on the other hand the recursive implementation means that we are recomputing a lot of transformations that we would otherwise already have encountered. This never really showed up, since these computations are somewhat fast and cached, but for a large number of fusion trees these loops become prohibitively expensive.
This PR aims to address these issues in a very similar fashion as the previous optimization run: we consider the set of all fusion trees with equal uncoupled charges, and act on these as a block. If my back-of-the-envelope calculations are correct, if there are $N$ such fusion trees and we are doing $L$ substeps in the manipulation, we were previously doing $\sim N^L$ operations due to the recursive nature, while now this is $\sim L N^3$.
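As a rough illustration of that scaling (the numbers here are chosen only for the sake of the example, not taken from an actual benchmark): for a block of $N = 100$ fusion trees and $L = 5$ elementary substeps, the recursive approach is of order $N^L = 100^5 = 10^{10}$, whereas the blocked approach is of order $L N^3 = 5 \cdot 100^3 = 5 \times 10^6$.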
This PR now contains the following changes:

- `braid` had some inconsistencies in its argument orders,
- (`Index2Tuple` etc)
- `fusiontensor` instead of `convert(Array, ::FusionTreePair)` to avoid type piracy
- `UniqueFusion` fusion styles to avoid the exponential scaling in number of chained manipulations
- `TensorMap` specializations, approaching the entire "indexmanipulations" kernels in a more unified way