mpirun with OpenCL #12813

@TheFloatingBrain

Description

Background information

I am trying to run Meep on my GPU; I have both an Nvidia card and an integrated Radeon card. Meep has a feature called Parallel Meep, where scripts written using Meep are launched through mpirun. I have successfully done this on the CPU. However, I would really like to speed things up using the GPU, and for now I would like to avoid the proprietary Nvidia driver on Linux, since it is high-maintenance and taints my kernel. Instead, I would like to try running Meep on my Nvidia GPU using the open-source Mesa OpenCL driver. I saw that Open MPI does seem to support OpenCL [0], [1], [2], [3]. I would like to avoid edits to the actual Meep code; Parallel Meep seems to be chunked, which, from what I have read, is a necessity for running the code on a GPU?
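For context, this is roughly how I launch Parallel Meep on the CPU today (the process count and script name here are just placeholders for my actual setup):

```shell
# Launch a Python script that uses Meep across 4 MPI processes.
# "my_simulation.py" is a placeholder name; any Parallel Meep script
# built with MPI support can be launched this way.
mpirun -np 4 python my_simulation.py
```

This works fine on the CPU; what I cannot figure out is how to get the same launch to target the GPU through OpenCL.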

It does seem possible to interface with the GPU through mpirun; conda gave me this message:

On Linux, Open MPI is built with UCX support but it is disabled by default.
To enable it, first install UCX (conda install -c conda-forge ucx).
Afterwards, set the environment variables
OMPI_MCA_pml=ucx OMPI_MCA_osc=ucx
before launching your MPI processes.
Equivalently, you can set the MCA parameters in the command line:
mpiexec --mca pml ucx --mca osc ucx ...


On Linux, Open MPI is built with CUDA awareness but it is disabled by default.
To enable it, please set the environment variable
OMPI_MCA_opal_cuda_support=true
before launching your MPI processes.
Equivalently, you can set the MCA parameter in the command line:
mpiexec --mca opal_cuda_support 1 ...
Note that you might also need to set UCX_MEMTYPE_CACHE=n for CUDA awareness via
UCX. Please consult UCX documentation for further details.
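Putting the two hints from that message together, I assume a launch would look something like the following (the environment variables and MCA flags are taken verbatim from the conda message; the process count and script name are placeholders):

```shell
# Enable UCX and CUDA awareness per the conda message.
export OMPI_MCA_pml=ucx
export OMPI_MCA_osc=ucx
export OMPI_MCA_opal_cuda_support=true
# The message says this may be needed for CUDA awareness via UCX.
export UCX_MEMTYPE_CACHE=n

# Then launch the Parallel Meep script ("my_simulation.py" is a placeholder).
mpirun -np 4 python my_simulation.py
```

But as far as I can tell this is all CUDA-specific, which is exactly what I am trying to avoid; I do not see an equivalent set of switches for OpenCL.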

What version of Open MPI are you using? (e.g., v4.1.6, v5.0.1, git branch name and hash, etc.)

Describe how Open MPI was installed (e.g., from a source/distribution tarball, from a git clone, from an operating system distribution package, etc.)

Using dnf on Fedora... and conda? (See above.)

Please describe the system on which you are running

  • Operating system/version: Fedora 40
  • Computer hardware: Laptop with integrated AMD Radeon (Ryzen 7 Series APU), and Nvidia RTX 3000 Series GPU
  • Network type: Local

Details of the problem

I'm sorry, this is a bit of a noob question. I described the background information above; I am simply having difficulty figuring out how to use mpirun with OpenCL. Does it have such an interface? Do I need to recompile Meep? If so, what should I do specifically? Is what I am trying to do possible?

P.S. Getting both GPUs and the CPU in the game would be great as well, but that might be a separate question.
