Switch VecOfArrays for RecursiveArrayTools's VectorOfArray
#2491
Conversation
Review checklist

This checklist is meant to assist creators of PRs (to let them know what reviewers will typically look for) and reviewers (to guide them in a structured review process). Items do not need to be checked explicitly for a PR to be eligible for merging.

- Purpose and scope
- Code quality
- Documentation
- Testing
- Performance
- Verification

Created with ❤️ by the Trixi.jl community.
mpi_neighbor_interfaces::VectorOfArray{Int, 2, Vector{Vector{Int}}}
mpi_neighbor_mortars::VectorOfArray{Int, 2, Vector{Vector{Int}}}
mpi_send_buffers::VectorOfArray{uEltype, 2, Vector{Vector{uEltype}}}
mpi_recv_buffers::VectorOfArray{uEltype, 2, Vector{Vector{uEltype}}}
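For context, a minimal sketch (hypothetical values, not code from this PR) of what these field types hold: VectorOfArray wraps a plain Vector of inner arrays, so ragged data such as the neighbor lists keeps its Vector{Vector{Int}} storage while gaining a two-dimensional indexing interface. Exact printed types may vary by package version.

julia> using RecursiveArrayTools

julia> neighbor_lists = VectorOfArray([[1, 2], [3], [4, 5, 6]])  # hypothetical ragged data
VectorOfArray{Int64,2}:
3-element Vector{Vector{Int64}}:
 [1, 2]
 [3]
 [4, 5, 6]

julia> neighbor_lists[2, 3]  # second entry of the third inner vector
5

julia> neighbor_lists.u  # the underlying Vector{Vector{Int64}} is still accessible
3-element Vector{Vector{Int64}}:
 [1, 2]
 [3]
 [4, 5, 6]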
This seems counter to what I expect. For GPU support, the send and receive buffers likely need to be CuArrays.
Shouldn't they get Adapt.jl-ed as well?
julia> mpicache = Trixi.P4estMPICache(Float32)
julia> typeof(mpicache.mpi_send_buffers)
RecursiveArrayTools.VectorOfArray{Float32, 2, Vector{Vector{Float32}}}
julia> mpicache.mpi_send_buffers = VectorOfArray(Vector{Vector{Float32}}(undef, 10))
VectorOfArray{Float32,2}:
10-element Vector{Vector{Float32}}:
#undef
#undef
#undef
#undef
#undef
#undef
#undef
#undef
#undef
#undef
julia> for index in 1:length(mpicache.mpi_send_buffers)
mpicache.mpi_send_buffers.u[index] = Vector{Float32}(undef, 10)
end
julia> Adapt.adapt_structure(CuArray, mpicache.mpi_send_buffers)
VectorOfArray{Float32,2}:
10-element Vector{CuArray{Float32, 1, CUDA.DeviceMemory}}:
Likely! Are we missing a rule?
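For reference, "a rule" here means an Adapt.adapt_structure method along these lines. This is an illustrative sketch only: recent RecursiveArrayTools releases ship an equivalent method upstream, and defining it yourself would be type piracy.

using Adapt, RecursiveArrayTools

# Sketch: adapt each inner array (e.g. Vector{Float32} -> CuArray{Float32, 1})
# and rewrap the result in a VectorOfArray. Illustration only; an equivalent
# rule now lives in RecursiveArrayTools itself.
Adapt.adapt_structure(to, voa::VectorOfArray) =
    VectorOfArray(map(x -> Adapt.adapt(to, x), voa.u))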
I think this is now doing what we expect:
julia> using Trixi, CUDA
julia> trixi_include("../examples/p4est_3d_dgsem/elixir_advection_basic.jl")
julia> semi_adapted = Trixi.trixi_adapt(CuArray, Float32, semi)
julia> typeof(semi.cache.fstar_primary_threaded)
RecursiveArrayTools.VectorOfArray{Float64, 5, Vector{Array{Float64, 4}}}
julia> typeof(semi_adapted.cache.fstar_primary_threaded)
RecursiveArrayTools.VectorOfArray{Float32, 5, Vector{CuArray{Float32, 4, CUDA.DeviceMemory}}}

Should we try to merge, @vchuravy?
Codecov Report

✅ All modified and coverable lines are covered by tests.

@@           Coverage Diff            @@
##            main    #2491     +/-   ##
==========================================
+ Coverage   96.68%   96.70%   +0.02%
==========================================
  Files         512      511       -1
  Lines       42302    42284      -18
==========================================
- Hits        40899    40890       -9
+ Misses       1403     1394       -9
Recent updates made RecursiveArrayTools's VectorOfArray do the right thing when used with Adapt.jl. Hence, we can avoid maintaining our own VecOfArrays.
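A minimal usage sketch (hypothetical values; exact printed types may vary by package version): a VectorOfArray behaves as a drop-in replacement for the old wrapper and now round-trips through Adapt.jl, demonstrated here with a CPU target standing in for CuArray.

julia> using RecursiveArrayTools, Adapt

julia> buffers = VectorOfArray([zeros(Float32, 4) for _ in 1:3]);

julia> buffers[2, 1] = 1.0f0;  # (entry, buffer) indexing on the 2D view

julia> buffers.u[1][2]  # the write is visible in the underlying Vector{Vector{Float32}}
1.0f0

julia> typeof(adapt(Array{Float64}, buffers))  # stand-in for adapt(CuArray, buffers)
VectorOfArray{Float64, 2, Vector{Vector{Float64}}}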