
Conversation

lkdvos (Member) commented Jul 31, 2025

Given the speed-up of the permutations, the bottleneck has now migrated to the actual computation of the treetransformers (again mostly for tensors with many legs).
@ogauthe has reported cases where the first run takes ~2000 seconds, while the second run takes ~5 seconds.

While I haven't really benchmarked or profiled this myself yet (shame on me!), I immediately remembered that we are not composing the fusion tree manipulations in a very optimal way: on the one hand we are effectively writing out the matrix multiplications by hand, and on the other hand the recursive implementation means that we recompute many transformations we have already encountered. This never really showed up because these computations are fairly fast and cached, but for large numbers of fusion trees these loops become prohibitively expensive.

This PR aims to address these issues in much the same fashion as the previous optimization round: we consider the set of all fusion trees with equal uncoupled charges, and act on these as a block. If my back-of-the-envelope calculation is correct, for N such fusion trees and L substeps in the manipulation, the recursive approach previously required $\sim N^L$ operations, while the blockwise approach requires $\sim L N^3$.
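To make the counting concrete, here is a minimal sketch of the blockwise composition idea (an illustration only, not the actual implementation):

# Each substep acts on the block of N fusion trees with equal uncoupled
# charges as an N×N matrix; composing L substeps by matrix multiplication
# costs O(L * N^3), instead of expanding the recursion at every step.
function compose_substeps(substeps::Vector{Matrix{Float64}})
    total = substeps[1]
    for step in substeps[2:end]
        total = step * total   # one O(N^3) multiply per substep
    end
    return total
end

N, L = 100, 6
total = compose_substeps([randn(N, N) for _ in 1:L])  # L - 1 multiplies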

Before I take the time to properly finish this up, it might be good to have a proper review of some design choices (including, but not limited to, the names). For example, there is now a bunch of "scalar" fusion tree manipulation methods that are more or less obsolete, in the sense that they only really make sense in the UniqueFusion case. We could consider removing them, but that becomes a somewhat larger change.

Additionally, I took the time to canonicalize the calling signatures of many of these functions, consistently making use of Index2Tuple etc. I also fixed a TODO where the order of the braid arguments for fusion trees was swapped compared to the one for tensors. (I changed the fusion tree one, since I think this is the least invasive option.)
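(For reference, Index2Tuple is the TensorOperations alias for a pair of integer tuples splitting the indices over codomain and domain, e.g.

p = ((5, 6), (1, 7, 2, 8, 3, 9, 4, 10))  # 2 codomain legs, 8 domain legs

so all of these functions now take their index specifications in that form.)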

Obviously this still requires more tests etc. as well.

@@ -199,8 +201,7 @@ function Base.convert(A::Type{<:AbstractArray}, f::FusionTree{I,N}) where {I,N}
end

# TODO: is this piracy?

Yes I think this is bordering on type piracy, so we should probably just give this method a dedicated name and then simply call that one.
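For illustration, a generic sketch of that pattern (hypothetical names, not the actual TensorKit code):

# The conversion logic lives under a dedicated name that we own, and
# Base.convert merely forwards to it; since at least one argument type
# (MyTree) is ours, the convert method is no longer piracy.
struct MyTree
    data::Vector{Float64}
end

to_array(t::MyTree) = copy(t.data)
Base.convert(::Type{Vector{Float64}}, t::MyTree) = to_array(t)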

Jutho (Member) commented Jul 31, 2025

I think a first general design remark is that all the manipulations on the OuterTreeIterator (which is not an iterator 😄) now contain the lines:

    trees_src = fusiontrees(fs_src)
    trees_dst = fusiontrees(fs_dst)
    indexmap = Dict(f => ind for (ind, f) in enumerate(trees_dst))

If you now start composing elementary operations to make more complex ones (e.g. bendleft/bendright to implement repartition), there will be at least two fusiontrees calls on the same OuterTreeIterator object: in every step, the previous destination is the new source of the following operation in the composition list. The fusiontrees method goes through the blocksectors and calls fusiontrees(uncoupled_sectors, blocksector), which should be pretty efficient, but nonetheless has some overhead.

It might thus be a better idea to store the blocksectors and associated fusion tree pairs (and their index mapping) within the OuterTreeIterator object. The initial data can be extracted from the data the tensor has, and then, throughout the sequence of operations, the old destination becoming the new source would be able to reuse the same information.
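A rough sketch of what that could look like (hypothetical names and fields, just to fix the idea):

# Cache the fusion trees and their index map per blocksector inside the
# object, so the destination of one elementary operation can serve as the
# source of the next without recomputing fusiontrees and the Dict.
struct CachedTreeBlock{I, F}
    blocksector::I          # coupled charge of this block
    trees::Vector{F}        # fusion trees with equal uncoupled charges
    indexmap::Dict{F, Int}  # tree => position, built once and reused
end

function CachedTreeBlock(blocksector, trees::Vector)
    indexmap = Dict(f => ind for (ind, f) in enumerate(trees))
    return CachedTreeBlock(blocksector, trees, indexmap)
end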

Of course, this way the OuterTreeIterator would become a lightweight version of FusionBlockStructure, so maybe there is some way to recycle some functionality or even redesign FusionBlockStructure.

Happy to discuss when you have time.

lkdvos force-pushed the ld-outer branch 8 times, most recently from 4e40fc3 to 50db028 on August 1, 2025 at 11:47.
codecov bot commented Aug 1, 2025

Codecov Report

❌ Patch coverage is 83.50731% with 79 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.73%. Comparing base (cc18e95) to head (54b7abc).

Files with missing lines              Patch %   Lines missing
src/fusiontrees/fusiontreeblocks.jl   84.18%    59 ⚠️
src/tensors/treetransformers.jl       66.66%    12 ⚠️
src/fusiontrees/manipulations.jl      91.52%     5 ⚠️
src/fusiontrees/fusiontrees.jl        33.33%     2 ⚠️
src/tensors/braidingtensor.jl         50.00%     1 ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #261      +/-   ##
==========================================
- Coverage   82.85%   82.73%   -0.13%     
==========================================
  Files          44       45       +1     
  Lines        5757     6139     +382     
==========================================
+ Hits         4770     5079     +309     
- Misses        987     1060      +73     


ogauthe (Contributor) commented Aug 1, 2025

I compared the performance with master. I perform some permutations on a bilayer PEPS tensor of size D^8, with V_D = Rep[SU₂](0 => 2, 1 => 3, 2 => 1). I first compile the code with a fast pass at D = 3, so compilation time is negligible.

TensorKit v0.14.9:
1992.983477 seconds (15.41 G allocations: 2.188 TiB, 10.51% gc time, 1.87% compilation time)

ld-outer:
46.581346 seconds (207.48 M allocations: 51.391 GiB, 12.97% gc time, 0.93% compilation time)

This is a factor ~43 speed-up. As expected, the second run takes the same time in both cases (~3.5 s). Very impressive @lkdvos!

lkdvos (Member, Author) commented Aug 15, 2025

As a small update: I think I have further optimized the implementations slightly, but there is definitely still some room for improvement. In particular, the insertat functions still use a recursive approach, which I think scales badly in the number of steps and therefore hurts the performance of foldright by quite a bit, depending on the number of legs.

I'll try to run some profilers next week to see what the actual bottleneck now is and verify what needs further improvement. In particular, I can try to reduce allocations by re-using a bunch of the unitary matrices, or we can cache the foldright unitaries directly if these turn out to be the limiting factor. Finally, we should probably split the computation of the unitaries of product sectors into their parts, which would avoid computing some of that twice. (Although this might actually be non-trivial in combination with the caching and the multithreading...)
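As a side note on that last point: for product sectors the transformation unitaries factorize over the factors, so each factor's unitary could be computed and cached separately. A toy illustration with placeholder matrices (not actual F-symbol data):

using LinearAlgebra

U_z2 = Matrix{Float64}(I, 2, 2)  # placeholder unitary for the ℤ₂ part
U_su2 = [0.6 0.8; 0.8 -0.6]      # placeholder unitary for the SU₂ part
U_product = kron(U_z2, U_su2)    # unitary for the combined product sector
@assert U_product' * U_product ≈ I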

kshyatt (Member) commented Aug 21, 2025

BTW, does @ogauthe have an MWE of what's causing the slowdown that could be shared, perhaps to run as part of a perf regression test suite? 👀

lkdvos (Member, Author) commented Aug 21, 2025

using TensorKit
using TensorKit: treepermuter
using Profile  # Profile.clear, Profile.init and @profile live here

# shorthand for the two vector spaces appearing below
V3 = Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1)
V2 = Rep[ℤ₂ × SU₂]((0, 0) => 1, (1, 1) => 1)

Vsrc = (V3 ⊗ V3 ⊗ V3' ⊗ V3' ⊗ V2) ← (V2 ⊗ V3 ⊗ V3 ⊗ V3' ⊗ V3')
Vdst = (V2 ⊗ V2') ← (V3' ⊗ V3 ⊗ V3' ⊗ V3 ⊗ V3 ⊗ V3' ⊗ V3 ⊗ V3')
p = ((5, 6), (1, 7, 2, 8, 3, 9, 4, 10))

TensorKit.empty_globalcaches!()
@time treepermuter(Vdst, Vsrc, p);

Profile.clear()
TensorKit.empty_globalcaches!()
Profile.init(n = 10^7, delay = 0.01)
@profile treepermuter(Vdst, Vsrc, p);

Here's a relevant case I took from the code at some point. It's really just a matter of taking one of the PEPS contractions and looking for the dominant treepermuter cost, which you can typically identify by setting ENV["JULIA_DEBUG"] = "TensorKit".

ogauthe (Contributor) commented Sep 2, 2025

@kshyatt here is a slightly larger example. @lkdvos should appear at one point. The timing was done using this branch as of August 15th.

using TensorOperations: @tensor
using TensorKit

function proj_braket(rD)
    fs = map(s -> sign(frobeniusschur(s)), sectors(rD))
    !(all(fs .== 1) || all(fs .== -1)) && return error("Cannot handle quaternionic irreps")

    rD2 = fuse(rD, rD')
isoD2 = isomorphism(rD2, rD ⊗ rD')
    permuted_iso = flip(permute(isoD2', ((3,), (2, 1))), (1,))
    proj2_even_fullspace = isoD2 + permuted_iso
    proj2_odd_fullspace = isoD2 - permuted_iso

    proj2_even = rightnull(proj2_odd_fullspace; alg=SVD(), atol=1e-12)
    proj2_odd = rightnull(proj2_even_fullspace; alg=SVD(), atol=1e-12)
    return proj2_even, proj2_odd
end

function project_Z2(rd, rD)
tket = randn(rd ⊗ rD ⊗ rD ⊗ rD' ⊗ rD')
    pd_even_oddd = proj_braket(rd)
    projN = map(t -> permute(t, ((2, 3), (1,))), proj_braket(rD))
    projE = map(t -> permute(t, ((2, 3), (1,))), proj_braket(rD))
    projS = map(t -> permute(t, ((2, 3), (1,))), proj_braket(rD'))
    projW = map(t -> permute(t, ((2, 3), (1,))), proj_braket(rD'))

t_double = permute(tket' ⊗ tket, ((5, 6), (1, 7, 2, 8, 3, 9, 4, 10)))

    for i_d in 1:2
        projectedd = permute(pd_even_oddd[i_d] * t_double, ((1, 4, 5, 6, 7, 8, 9), (2, 3)))
        for iN in 1:2
            @tensor projectedN[d2, D2N, ketW, braW, ketS, braS; ketE, braE] :=
                projectedd[d2, ketE, braE, ketS, braS, ketW, braW; ketN, braN] *
                projN[iN][ketN, braN, D2N]
            for iE in 1:2
                @tensor projectedNE[d2, D2N, D2E, ketW, braW; ketS, braS] :=
                    projectedN[d2, D2N, ketS, braS, ketW, braW; ketE, braE] *
                    projE[iE][ketE, braE, D2E]
                for iS in 1:2
                    iW = mod1(i_d + iN + iE + iS + 1, 2)
                    @tensor projected[d2, D2N, D2E, D2S, D2W] :=
                        projectedNE[d2, D2N, D2E, ketW, braW; ketS, braS] *
                        projS[iS][ketS, braS, D2S] *
                        projW[iW][ketW, braW, D2W]
                end
            end
        end
    end
    return 1
end


rd = Rep[ℤ₂ × SU₂]((0, 0) => 1, (1, 1) => 1)
rD7 = Rep[ℤ₂ × SU₂]((0, 0) => 1, (0, 1) => 1, (1, 1) => 1)
rD11 = Rep[ℤ₂ × SU₂]((0, 0) => 2, (0, 1) => 1, (1, 1) => 2)
rD16 = Rep[ℤ₂ × SU₂]((0, 0) => 2, (0, 1) => 1, (1, 1) => 2, (0, 2) => 1)

@time project_Z2(rd, rD7)
@time project_Z2(rd, rD7)
@time project_Z2(rd, rD11)
@time project_Z2(rd, rD11)


# 1st run D = 7
@time project_Z2(rd, rD7)
# 400.344116 seconds (1.39 G allocations: 465.458 GiB, 7.96% gc time, 24.47% compilation time)

# 2nd run D = 7
@time project_Z2(rd, rD7)
# 5.408577 seconds (20.24 M allocations: 2.738 GiB, 5.07% gc time, 5.34% compilation time)

# 1st run D = 11  (*after D = 7*)
@time project_Z2(rd, rD11)
# 319.394331 seconds (1.36 G allocations: 482.086 GiB, 8.63% gc time, 0.00% compilation time)

# 2nd run D = 11
@time project_Z2(rd, rD11)
# 7.989900 seconds (20.06 M allocations: 4.710 GiB, 4.31% gc time)
