
fixed blow up in memory for jacobian structure #19

Merged
CalebDerrickson merged 1 commit into master from memory-jacobian-structure on Feb 20, 2025

Conversation

@CalebDerrickson
Collaborator

No description provided.

@CalebDerrickson
Collaborator Author

yay.

@CalebDerrickson CalebDerrickson merged commit 7461cbc into master Feb 20, 2025
9 checks passed
@CalebDerrickson CalebDerrickson deleted the memory-jacobian-structure branch February 20, 2025 17:43
@amontoison
Member

You broke the GPU support, Caleb:

nohup: ignoring input
ERROR: LoadError: Scalar indexing is disallowed.
Invocation of setindex! resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.

If you want to allow scalar iteration, use `allowscalar` or `@allowscalar`
to enable scalar iteration globally or for the operations in question.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] errorscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:151
  [3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:124
  [4] assertscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/aNaXo/src/GPUArraysCore.jl:112
  [5] setindex!
    @ ~/.julia/packages/GPUArrays/uiVyU/src/host/indexing.jl:58 [inlined]
  [6] copyto!(dest::CuArray{Int64, 1, CUDA.DeviceMemory}, dstart::Int64, src::UnitRange{Int64}, sstart::Int64, n::Int64)
    @ Base ./abstractarray.jl:1128
  [7] jac_structure!(nlp::VecchiaMLE.VecchiaModel{Float64, CuArray{Float64, 1, CUDA.DeviceMemory}, CuArray{Int64, 1, CUDA.DeviceMemory}, CuArray{Float64, 2, CUDA.DeviceMemory}}, jrows::CuArray{Int64, 1, CUDA.DeviceMemory}, jcols::CuArray{Int64, 1, CUDA.DeviceMemory})
    @ VecchiaMLE ~/Argonne/VecchiaMLE.jl/src/models/VecchiaMLE_NLPModel.jl:219
  [8] create_callback(::Type{MadNLP.SparseCallback}, nlp::VecchiaMLE.VecchiaModel{Float64, CuArray{Float64, 1, CUDA.DeviceMemory}, CuArray{Int64, 1, CUDA.DeviceMemory}, CuArray{Float64, 2, CUDA.DeviceMemory}}; fixed_variable_treatment::Type, equality_treatment::Type{MadNLP.RelaxEquality})
    @ MadNLP ~/.julia/packages/MadNLP/6x6Eg/src/nlpmodels.jl:359
  [9] MadNLP.MadNLPSolver(nlp::VecchiaMLE.VecchiaModel{Float64, CuArray{Float64, 1, CUDA.DeviceMemory}, CuArray{Int64, 1, CUDA.DeviceMemory}, CuArray{Float64, 2, CUDA.DeviceMemory}}; kwargs::@Kwargs{print_level::MadNLP.LogLevels})
    @ MadNLP ~/.julia/packages/MadNLP/6x6Eg/src/IPM/IPM.jl:124
 [10] MadNLPSolver
    @ ~/.julia/packages/MadNLP/6x6Eg/src/IPM/IPM.jl:115 [inlined]
 [11] madnlp(model::VecchiaMLE.VecchiaModel{Float64, CuArray{Float64, 1, CUDA.DeviceMemory}, CuArray{Int64, 1, CUDA.DeviceMemory}, CuArray{Float64, 2, CUDA.DeviceMemory}}; kwargs::@Kwargs{print_level::MadNLP.LogLevels})
    @ MadNLP ~/.julia/packages/MadNLP/6x6Eg/src/IPM/solver.jl:10
 [12] macro expansion
    @ ~/Argonne/VecchiaMLE.jl/src/VecchiaMLE_input.jl:70 [inlined]
 [13] macro expansion
    @ ./timing.jl:421 [inlined]
 [14] ExecuteModel!(iVecchiaMLE::VecchiaMLEInput, pres_chol::Matrix{Float64}, diags::VecchiaMLE.Diagnostics)
    @ VecchiaMLE ~/Argonne/VecchiaMLE.jl/src/VecchiaMLE_input.jl:69
 [15] VecchiaMLE_Run_Analysis!(iVecchiaMLE::VecchiaMLEInput, pres_chol::Matrix{Float64}, diagnostics::VecchiaMLE.Diagnostics)
    @ VecchiaMLE ~/Argonne/VecchiaMLE.jl/src/VecchiaMLE_input.jl:30
 [16] VecchiaMLE_Run(iVecchiaMLE::VecchiaMLEInput)
    @ VecchiaMLE ~/Argonne/VecchiaMLE.jl/src/VecchiaMLE_input.jl:21
 [17] top-level scope
    @ ~/Argonne/VecchiaMLE.jl/benchmarks/benchmarks.jl:31
in expression starting at /home/montalex/Argonne/VecchiaMLE.jl/benchmarks/benchmarks.jl:13
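For reference, frame [6] shows the failure mode: `copyto!` is copying a `UnitRange{Int64}` into a `CuArray{Int64}` element by element, and each element write is a scalar `setindex!` on the device array. The usual workaround is to assemble the sparsity pattern in host memory and transfer it to the GPU in one bulk copy. A minimal sketch of that pattern (the function name `fill_structure!` and the host-buffer layout are hypothetical, not the actual VecchiaMLE code):

```julia
using CUDA

# Hypothetical sketch: build the Jacobian sparsity pattern on the CPU,
# then move it to the device in one bulk transfer per array, instead of
# writing CuArray elements one at a time (which triggers the
# "Scalar indexing is disallowed" error).
function fill_structure!(jrows::CuArray{Int}, jcols::CuArray{Int})
    # Host-side buffers: filling these element by element is cheap.
    hrows = Vector{Int}(undef, length(jrows))
    hcols = Vector{Int}(undef, length(jcols))

    # ... populate hrows / hcols with the sparsity pattern ...

    # One host-to-device copy each, instead of length(jrows) scalar writes:
    copyto!(jrows, hrows)
    copyto!(jcols, hcols)
    return jrows, jcols
end
```

`CUDA.@allowscalar` would also silence the error, as the message suggests, but it keeps the per-element writes and is slow; the bulk copy is the idiomatic fix.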

@CalebDerrickson
Collaborator Author

All the checks passed, so I thought it was fine @amontoison
