Replies: 1 comment
How does this differ from the
Don't be so sure about that. If your mesh is already replicated, trying to do work remotely and communicate it is often slower than just doing the work locally, because the latter runs out of fast local memory while the former has to go over a slow network. If your mesh is already distributed, then doing some of the work in parallel is asymptotically faster, but it's also inherently necessary.
Then in your code above, process 0 will create element id 0 for i=0, and then for i=1 it will try to create the exact same element id that process 1 is trying to create for i=0, and so on from there. You ought to be running this new code in
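To make that overlap concrete, here is a small standalone illustration (not libMesh code), assuming the offset described in the original post is added directly, i.e. globalElemInd equals the processor id; the second column shows one way to keep the id ranges disjoint, offsetting by the per-rank element count instead:

```cpp
#include <cstdio>

int main()
{
  const unsigned int n_local = 3; // elements created per rank in this toy example

  for (unsigned int rank = 0; rank < 2; ++rank)
    for (unsigned int i = 0; i < n_local; ++i)
      {
        const unsigned int clashing_id = i + rank;           // rank 0: 0,1,2   rank 1: 1,2,3 (overlap)
        const unsigned int disjoint_id = i + rank * n_local; // rank 0: 0,1,2   rank 1: 3,4,5
        std::printf("rank %u  i %u  clashing id %u  disjoint id %u\n",
                    rank, i, clashing_id, disjoint_id);
      }

  return 0;
}
```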
I am currently working on a function to extract the surface or "skin" from a libMesh::Mesh object, either from the whole mesh or from any arbitrary set of elements within a mesh. This works fine in serial, but for bigger meshes it would be nice to do some of the work in parallel. Say I am building a mesh of 100 elements: it would be nice to create 50 elements on processor 0 and 50 on processor 1. I am currently building all the nodes on all processors, so for now I am focusing only on building the elements in parallel. I am using a loop like the one shown below.
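A minimal sketch of the kind of loop being described, assuming HEX8 elements; the connectivity layout and the exact id computation here are assumptions, not the code from the original post:

```cpp
#include <cstddef>
#include <vector>

#include "libmesh/elem.h"
#include "libmesh/enum_elem_type.h"
#include "libmesh/id_types.h"
#include "libmesh/mesh.h"

// Sketch only: element type, connectivity layout, and id computation are assumptions.
void build_local_elems(libMesh::Mesh & mesh,
                       const std::vector<std::vector<libMesh::dof_id_type>> & connectivity,
                       const libMesh::dof_id_type globalElemInd)
{
  for (std::size_t i = 0; i < connectivity.size(); ++i)
    {
      // Build a new element, give it an id based on the per-processor offset,
      // and mark it as owned by this processor.
      auto uelem = libMesh::Elem::build(libMesh::HEX8);
      uelem->set_id(i + globalElemInd);
      uelem->processor_id() = mesh.processor_id();

      libMesh::Elem * elem = mesh.add_elem(uelem.release());

      // Hook the element up to its nodes, which were already created
      // (replicated) on every processor.
      for (unsigned int n = 0; n < elem->n_nodes(); ++n)
        elem->set_node(n) = mesh.node_ptr(connectivity[i][n]);
    }
}
```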
The connectivity for all the elements to be built locally is stored in the connectivity vector. The globalElemInd is an integer offset used to make sure that two elements don't have the same ID. It is based on the processor ID, so in the previous example of creating a mesh with 100 elems, processes 0 and 1 will set elements with IDs 0-49 and 50-99 respectively. Therefore globalElemInd = 0 on process 0, and globalElemInd = 1 on process 1.
My question is: is this approach even valid/reasonable/sane? In serial I can build the mesh just fine, but when I try to run in parallel the only elements that are output are those built on process 0. I have tried using update_parallel_id_counts(), but parallel_max_elem_id() still only picks up the elements made on process 0. I have also tried set_distributed() and redistribute() to see if they make any difference, but with no success. Maybe this approach is just fundamentally wrong, or maybe I need to run some kind of sync method like those that exist in libMesh::BoundaryMesh. I'm just a bit stuck at the moment, so any input or help would be really appreciated :) Thanks!