
error in InfiniteMPS with CUDA backend #301

@shencebebetterme

Description

I'd like to run the VUMPS algorithm on the GPU, so I tried to construct an InfiniteMPS and an InfiniteMPO backed by CUDA arrays.

The following code naively replaces `Array` with `CuArray`:

using CUDA
using TensorOperations
using TensorKit, MPSKit
using cuTENSOR

B = CuArray(randn(ComplexF64, 5, 5, 5, 5))
Btm = TensorMap(B, ℂ^5⊗ℂ^5 ← ℂ^5⊗ℂ^5)
InfiniteMPO([Btm])

Constructing this InfiniteMPO succeeds:

single site InfiniteMPO{TensorMap{ComplexF64, ComplexSpace, 2, 2, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}}:
╷  ⋮
┼ O[1]: TensorMap((ℂ^5 ⊗ ℂ^5) ← (ℂ^5 ⊗ ℂ^5))
╵  ⋮

However, when I do the same for an InfiniteMPS,

A = CuArray(randn(ComplexF64, 5, 2, 5))
Atm = TensorMap(A, ℂ^5 ⊗ ℂ^2 ← ℂ^5)
state = InfiniteMPS([Atm])

it fails with the following error. How can this be fixed?

Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.

If you want to allow scalar iteration, use `allowscalar` or `@allowscalar`
to enable scalar iteration globally or for the operations in question.

Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] errorscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:151
  [3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:124
  [4] assertscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:112
  [5] getindex
    @ ~/.julia/packages/GPUArrays/uiVyU/src/host/indexing.jl:50 [inlined]
  [6] iterate
    @ ./abstractarray.jl:1209 [inlined]
  [7] iterate
    @ ./abstractarray.jl:1207 [inlined]
  [8] generic_norm2(x::CuArray{ComplexF64, 2, CUDA.DeviceMemory})
    @ LinearAlgebra ~/.julia/juliaup/julia-1.11.3+0.x64.linux.gnu/share/julia/stdlib/v1.11/LinearAlgebra/src/generic.jl:471
  [9] norm2
    @ ~/.julia/juliaup/julia-1.11.3+0.x64.linux.gnu/share/julia/stdlib/v1.11/LinearAlgebra/src/generic.jl:535 [inlined]
 [10] #181
    @ ~/.julia/packages/TensorKit/hkxhv/src/tensors/linalg.jl:271 [inlined]
 [11] MappingRF
    @ ./reduce.jl:100 [inlined]
 [12] _foldl_impl(op::Base.MappingRF{TensorKit.var"#181#185"{Float64}, Base.BottomRF{typeof(+)}}, init::Float64, itr::TensorKit.BlockIterator{TensorMap{ComplexF64, ComplexSpace, 1, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}, TensorKit.SortedVectorDict{Trivial, Tuple{Tuple{Int64, Int64}, UnitRange{Int64}}}})
    @ Base ./reduce.jl:58
 [13] foldl_impl
    @ ./reduce.jl:48 [inlined]
 [14] mapfoldl_impl
    @ ./reduce.jl:44 [inlined]
 [15] mapfoldl
    @ ./reduce.jl:175 [inlined]
 [16] mapreduce
    @ ./reduce.jl:307 [inlined]
 [17] _norm(blockiter::TensorKit.BlockIterator{TensorMap{ComplexF64, ComplexSpace, 1, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}, TensorKit.SortedVectorDict{Trivial, Tuple{Tuple{Int64, Int64}, UnitRange{Int64}}}}, p::Int64, init::Float64)
    @ TensorKit ~/.julia/packages/TensorKit/hkxhv/src/tensors/linalg.jl:270
 [18] norm
    @ ~/.julia/packages/TensorKit/hkxhv/src/tensors/linalg.jl:262 [inlined]
 [19] normalize! (repeats 2 times)
    @ ~/.julia/packages/TensorKit/hkxhv/src/tensors/linalg.jl:18 [inlined]
 [20] uniform_leftorth!(::Tuple{PeriodicVector{TensorMap{ComplexF64, ComplexSpace, 2, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}}, PeriodicVector{TensorMap{ComplexF64, ComplexSpace, 1, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}}}, A::PeriodicVector{TensorMap{ComplexF64, ComplexSpace, 2, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}}, C₀::TensorMap{ComplexF64, ComplexSpace, 1, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}, alg::MPSKit.LeftCanonical)
    @ MPSKit ~/.julia/packages/MPSKit/XpTWn/src/states/ortho.jl:190
 [21] gaugefix!
    @ ~/.julia/packages/MPSKit/XpTWn/src/states/ortho.jl:135 [inlined]
 [22] gaugefix!(ψ::InfiniteMPS{TensorMap{ComplexF64, ComplexSpace, 2, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}, TensorMap{ComplexF64, ComplexSpace, 1, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}}, A::PeriodicVector{TensorMap{ComplexF64, ComplexSpace, 2, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}}, C₀::TensorMap{ComplexF64, ComplexSpace, 1, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}, alg::MPSKit.MixedCanonical)
    @ MPSKit ~/.julia/packages/MPSKit/XpTWn/src/states/ortho.jl:124
 [23] #gaugefix!#90
    @ ~/.julia/packages/MPSKit/XpTWn/src/states/ortho.jl:118 [inlined]
 [24] gaugefix!
    @ ~/.julia/packages/MPSKit/XpTWn/src/states/ortho.jl:107 [inlined]
 [25] InfiniteMPS(A::Vector{TensorMap{ComplexF64, ComplexSpace, 2, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}}; kwargs::@Kwargs{})
    @ MPSKit ~/.julia/packages/MPSKit/XpTWn/src/states/infinitemps.jl:170
 [26] InfiniteMPS(A::Vector{TensorMap{ComplexF64, ComplexSpace, 2, 1, CuArray{ComplexF64, 1, CUDA.DeviceMemory}}})
    @ MPSKit ~/.julia/packages/MPSKit/XpTWn/src/states/infinitemps.jl:139
 [27] top-level scope
    @ ~/dev/TensorNetworkNew/GPU/jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_X24sdnNjb2RlLXJlbW90ZQ==.jl:4
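As the error message itself suggests, one (slow) workaround is to wrap the construction in `CUDA.@allowscalar`, which permits scalar indexing of GPU arrays for that call. This is a sketch, not a real fix: judging from the stacktrace, `normalize!` inside the gauge-fixing step hits LinearAlgebra's `generic_norm2`, which iterates the `CuArray` element by element, so the wrapped call will run that part very slowly on the CPU.

```julia
using CUDA, cuTENSOR, TensorKit, MPSKit

A = CuArray(randn(ComplexF64, 5, 2, 5))
Atm = TensorMap(A, ℂ^5 ⊗ ℂ^2 ← ℂ^5)

# Workaround, not a fix: allow scalar GPU indexing just for this call.
# The generic norm fallback will iterate the CuArray entry-by-entry,
# which is slow; a proper fix would dispatch norm to a GPU-friendly method.
state = CUDA.@allowscalar InfiniteMPS([Atm])
```

A proper fix would presumably need the block-wise norm in TensorKit (or the orthogonalization routine in MPSKit) to call a `norm` implementation that stays on the device instead of falling back to scalar iteration.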
