@@ -59,7 +59,7 @@ Much, much more information is also available in the Open MPI FAQ:
 ===========================================================================
 
 The following abbreviated list of release notes applies to this code
-base as of this writing (June 2016):
+base as of this writing (July 2016):
 
 General notes
 -------------
@@ -448,27 +448,33 @@ MPI Functionality and Features
   deprecated_example.c:4: warning: 'MPI_Type_struct' is deprecated (declared at /opt/openmpi/include/mpi.h:1522)
   shell$
 
-- MPI_THREAD_MULTIPLE is supported with some exceptions. Note that Open MPI
-  must be configured with --enable-mpi-thread-multiple to get this
-  level of thread safety support.
+- MPI_THREAD_MULTIPLE is supported with some exceptions. Note that
+  Open MPI must be configured with --enable-mpi-thread-multiple to get
+  this level of thread safety support.
 
-  The following BTLs support MPI_THREAD_MULTIPLE:
-  - tcp
-  - openib
-  - vader (shared memory)
-  - ugni
-  - self
+  The following PMLs support MPI_THREAD_MULTIPLE:
+  - cm (see list (1) of supported MTLs, below)
+  - ob1 (see list (2) of supported BTLs, below)
+  - ucx
+  - yalla
 
-  The following MTLs and PMLs support MPI_THREAD_MULTIPLE:
-  - MXM
-  - portals4
+  (1) The cm PML and the following MTLs support MPI_THREAD_MULTIPLE:
+      - MXM
+      - portals4
 
-  Currently MPI File operations are not thread safe even if
-  MPI is initialized for MPI_THREAD_MULTIPLE support.
+  (2) The ob1 PML and the following BTLs support MPI_THREAD_MULTIPLE:
+      - openib (see exception below)
+      - self
+      - tcp
+      - ugni
+      - vader (shared memory)
 
-  The OpenIB BTL's RDMACM based connection setup mechanism is also
-  not thread safe. The default UDCM method should be used for applications
-  requiring MPI_THREAD_MULTIPLE support.
+  The openib BTL's RDMACM based connection setup mechanism is also not
+  thread safe. The default UDCM method should be used for
+  applications requiring MPI_THREAD_MULTIPLE support.
+
+  Currently, MPI File operations are not thread safe even if MPI is
+  initialized for MPI_THREAD_MULTIPLE support.
 
 - MPI_REAL16 and MPI_COMPLEX32 are only supported on platforms where a
   portable C datatype can be found that matches the Fortran type
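The hunk above says MPI_THREAD_MULTIPLE is only available when Open MPI is configured with --enable-mpi-thread-multiple. As a hedged sketch of what that looks like in practice (the install prefix and parallel job count are placeholders, not taken from this diff):

```shell
# Build Open MPI with MPI_THREAD_MULTIPLE support; the prefix is a placeholder.
./configure --enable-mpi-thread-multiple --prefix=/opt/openmpi
make -j 8 all install

# ompi_info reports the thread-support level the installation was built with,
# so this line confirms whether MPI_THREAD_MULTIPLE is actually available.
/opt/openmpi/bin/ompi_info | grep -i thread
```

This is a configure-time setting, so the check via ompi_info has to be done against the installed build, not at application run time.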
@@ -605,12 +611,12 @@ Network Support
     - SMCUDA
     - Cisco usNIC
     - uGNI (Cray Gemini, Aries)
-    - vader (XPMEM, Linux CMA, Linux KNEM, and general shared memory)
+    - vader (XPMEM, Linux CMA, Linux KNEM, and copy-in/copy-out shared memory)
 
   - "cm" supports a smaller number of networks (and they cannot be
     used together), but may provide better overall MPI performance:
 
-    - QLogic InfiniPath / Intel True Scale PSM
+    - Intel True Scale PSM (QLogic InfiniPath)
     - Intel Omni-Path PSM2
     - Mellanox MXM
     - Portals4
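The PML, MTL, and BTL components named in this hunk can be selected explicitly at run time with MCA parameters, which is useful when verifying which transport a job actually uses. A hypothetical usage sketch (the application name and process count are placeholders):

```shell
# Force the cm PML with the PSM2 MTL (Intel Omni-Path); ./my_mpi_app is a placeholder.
mpirun --mca pml cm --mca mtl psm2 -np 4 ./my_mpi_app

# Force the ob1 PML with the TCP, shared-memory (vader), and self BTLs.
mpirun --mca pml ob1 --mca btl tcp,vader,self -np 4 ./my_mpi_app
```

Omitting these parameters lets Open MPI's component selection pick the highest-priority transport available on the machine.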