This repository was archived by the owner on Sep 30, 2022. It is now read-only.

Commit 87a79f5

Merge pull request #1126 from hppritcha/topic/readme_multi_threaded
README: update MPI_THREAD_MULTIPLE support
2 parents 440f73f + 5a43a78 commit 87a79f5

File tree

1 file changed: +15 additions, -38 deletions


README

Lines changed: 15 additions & 38 deletions
@@ -59,7 +59,7 @@ Much, much more information is also available in the Open MPI FAQ:
 ===========================================================================
 
 The following abbreviated list of release notes applies to this code
-base as of this writing (April 2015):
+base as of this writing (June 2016):
 
 General notes
 -------------
@@ -85,11 +85,6 @@ General notes
   experience growing pains typical of any new software package.
   End-user feedback is greatly appreciated.
 
-  This implementation will currently most likely provide optimal
-  performance on Mellanox hardware and software stacks.  Overall
-  performance is expected to improve as other network vendors and/or
-  institutions contribute platform specific optimizations.
-
   See below for details on how to enable the OpenSHMEM implementation.
 
 - Open MPI includes support for a wide variety of supplemental
@@ -287,6 +282,9 @@ Compiler Notes
   still using GCC 3.x).  Contact Pathscale support if you continue to
   have problems with Open MPI's C++ bindings.
 
+  Note the MPI C++ bindings have been deprecated by the MPI Forum and
+  may not be supported in future releases.
+
 - Using the Absoft compiler to build the MPI Fortran bindings on Suse
   9.3 is known to fail due to a Libtool compatibility issue.

@@ -450,22 +448,20 @@ MPI Functionality and Features
   deprecated_example.c:4: warning: 'MPI_Type_struct' is deprecated (declared at /opt/openmpi/include/mpi.h:1522)
   shell$
 
-- MPI_THREAD_MULTIPLE support is included, but is only lightly tested.
-  It likely does not work for thread-intensive applications.  Note
-  that *only* the MPI point-to-point communication functions for the
-  BTL's listed here are considered thread safe.  Other support
-  functions (e.g., MPI attributes) have not been certified as safe
-  when simultaneously used by multiple threads.
+- MPI_THREAD_MULTIPLE is supported.  Note that Open MPI must be
+  configured with --enable-mpi-thread-multiple to get this
+  level of thread safety support.
+
+  The following BTLs support MPI_THREAD_MULTIPLE:
   - tcp
-  - sm
+  - openib
+  - vader (shared memory)
+  - ugni
   - self
 
-  Note that Open MPI's thread support is in a fairly early stage; the
-  above devices may *work*, but the latency is likely to be fairly
-  high.  Specifically, efforts so far have concentrated on
-  *correctness*, not *performance* (yet).
-
-  YMMV.
+  The following MTLs and PMLs support MPI_THREAD_MULTIPLE:
+  - MXM
+  - portals4
 
 - MPI_REAL16 and MPI_COMPLEX32 are only supported on platforms where a
   portable C datatype can be found that matches the Fortran type
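The hunk above says MPI_THREAD_MULTIPLE is only available when Open MPI is built with the matching configure flag. A minimal sketch of such a build, following the README's own flag (the install prefix here is a hypothetical example, not a required location):

```shell
# Build Open MPI with MPI_THREAD_MULTIPLE support enabled.
# --prefix is an illustrative path; substitute your own.
./configure --enable-mpi-thread-multiple --prefix=/opt/openmpi
make all install
```

An application then requests the full thread level at startup with MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided) and should check that the returned provided level really is MPI_THREAD_MULTIPLE before calling MPI from multiple threads, since a build without this flag will report a lower level.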
@@ -659,25 +655,6 @@ Network Support
   Mellanox InfiniBand plugin driver is created.  The problem is fixed
   OFED v1.1 (and later).
 
-- Better memory management support is available for OFED-based
-  transports using the "ummunotify" Linux kernel module.  OFED memory
-  managers are necessary for better bandwidth when re-using the same
-  buffers for large messages (e.g., benchmarks and some applications).
-
-  Unfortunately, the ummunotify module was not accepted by the Linux
-  kernel community (and is still not distributed by OFED).  But it
-  still remains the best memory management solution for MPI
-  applications that used the OFED network transports.  If Open MPI is
-  able to find the <linux/ummunotify.h> header file, it will build
-  support for ummunotify and include it by default.  If MPI processes
-  then find the ummunotify kernel module loaded and active, then their
-  memory managers (which have been shown to be problematic in some
-  cases) will be disabled and ummunotify will be used.  Otherwise, the
-  same memory managers from prior versions of Open MPI will be used.
-  The ummunotify Linux kernel module can be downloaded from:
-
-    http://lwn.net/Articles/343351/
-
 - The use of fork() with OpenFabrics-based networks (i.e., the openib
   BTL) is only partially supported, and only on Linux kernels >=
   v2.6.15 with libibverbs v1.1 or later (first released as part of
