@@ -29,8 +29,42 @@ Open MPI version v6.0.0
 delivered through the Open MPI internal "OMPIO" implementation
 (which has been the default for quite a while, anyway).

-- Added MPI-4.1 ``MPI_Status_*`` functions.
+- Added support for MPI-4.1 functions to access and update ``MPI_Status``
+  fields (see the first sketch below).

 - MPI-4.1 has deprecated the use of the Fortran ``mpif.h`` include
   file. Open MPI will now issue a warning when the file is included
   and the Fortran compiler supports the ``#warning`` directive.
+
+- Added support for the MPI-4.1 memory allocation kind info object and
+  values introduced in the MPI Memory Allocation Kinds side document
+  (illustrated in a sketch below).
+
+- Added support for Intel Ponte Vecchio GPUs.
+
+- Extended the functionality of the accelerator framework to support
+  intra-node device-to-device transfers for AMD and NVIDIA GPUs
+  (independent of UCX or libfabric).
+
+- Added support for MPI sessions when using UCX (see the sessions
+  sketch below).
+
+- Added support for the MPI-4.1 ``MPI_Request_get_status_all/any/some``
+  functions (example below).
+
+- Improvements to collective operations:
+
+  - Added a new ``xhc`` collective component that optimizes shared-memory
+    collective operations using XPMEM.
+
+  - Added a new ``acoll`` collective component that optimizes single-node
+    collective operations on AMD ``Zen``-based processors.
+
+  - Added new algorithms to optimize Alltoall and Alltoallv in the
+    ``han`` component when XPMEM is available.
+
+  - Introduced new algorithms and parameterizations for Reduce, Allgather,
+    and Allreduce in the base collective component, and adjusted the
+    ``tuned`` component to better utilize these new algorithms.
+
+  - Added a new JSON file format for tuning the ``tuned`` collective
+    component.
+
+  - Extended the ``accelerator`` collective component to support more
+    collective operations on device buffers.
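
A minimal sketch of the MPI-4.1 status accessors named above, assuming
the C bindings ``MPI_Status_get_source`` and ``MPI_Status_get_tag`` (the
self send/receive only keeps the example runnable with one process)::

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int out = 42, in = 0, source, tag;
        MPI_Status status;

        MPI_Init(&argc, &argv);

        /* Self send/receive so a single process suffices. */
        MPI_Sendrecv(&out, 1, MPI_INT, 0, 7,
                     &in,  1, MPI_INT, 0, 7,
                     MPI_COMM_SELF, &status);

        /* MPI-4.1 accessors instead of reading status.MPI_SOURCE and
           status.MPI_TAG directly. */
        MPI_Status_get_source(&status, &source);
        MPI_Status_get_tag(&status, &tag);
        printf("received %d from rank %d (tag %d)\n", in, source, tag);

        MPI_Finalize();
        return 0;
    }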
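
For the sessions item, a minimal sketch using the standard MPI-4.0
sessions calls and the built-in ``mpi://WORLD`` process set; whether UCX
carries the traffic underneath is a runtime selection (for example
``mpirun --mca pml ucx``)::

    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        MPI_Session session;
        MPI_Group group;
        MPI_Comm comm;
        int rank;

        MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_ARE_FATAL, &session);

        /* Derive a communicator from the built-in world process set. */
        MPI_Group_from_session_pset(session, "mpi://WORLD", &group);
        MPI_Comm_create_from_group(group, "example.tag", MPI_INFO_NULL,
                                   MPI_ERRORS_ARE_FATAL, &comm);
        MPI_Group_free(&group);

        MPI_Comm_rank(comm, &rank);
        printf("hello from rank %d\n", rank);

        MPI_Comm_free(&comm);
        MPI_Session_finalize(&session);
        return 0;
    }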
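
For the memory allocation kinds item, a sketch of requesting kinds at
session creation and then querying what the implementation supports;
the ``mpi_memory_alloc_kinds`` info key comes from the side document,
while the requested value ``system,mpi`` is only an illustration::

    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        MPI_Info info, used;
        MPI_Session session;
        char value[256];
        int buflen = sizeof(value), flag = 0;

        /* Declare which allocation kinds this program intends to use. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "mpi_memory_alloc_kinds", "system,mpi");

        MPI_Session_init(info, MPI_ERRORS_ARE_FATAL, &session);

        /* Ask the implementation which kinds it actually supports. */
        MPI_Session_get_info(session, &used);
        MPI_Info_get_string(used, "mpi_memory_alloc_kinds",
                            &buflen, value, &flag);
        if (flag) {
            printf("supported allocation kinds: %s\n", value);
        }

        MPI_Info_free(&used);
        MPI_Info_free(&info);
        MPI_Session_finalize(&session);
        return 0;
    }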
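
Finally, a sketch of the new nonblocking status polling, assuming the
MPI-4.1 binding ``MPI_Request_get_status_all(count, reqs, &flag,
statuses)``; unlike ``MPI_Testall``, it reports completion without
freeing or resetting the requests::

    #include <mpi.h>

    enum { N = 4 };

    int main(int argc, char **argv)
    {
        int out[N], in[N], flag = 0;
        MPI_Request reqs[2 * N];
        MPI_Status statuses[2 * N];

        MPI_Init(&argc, &argv);

        /* Post matching nonblocking self receives and sends. */
        for (int i = 0; i < N; ++i) {
            out[i] = i;
            MPI_Irecv(&in[i], 1, MPI_INT, 0, i, MPI_COMM_SELF, &reqs[i]);
            MPI_Isend(&out[i], 1, MPI_INT, 0, i, MPI_COMM_SELF,
                      &reqs[N + i]);
        }

        /* Poll: flag becomes true once every request has completed,
           but the requests themselves stay valid. */
        while (!flag) {
            MPI_Request_get_status_all(2 * N, reqs, &flag, statuses);
        }

        /* The requests must still be completed and freed as usual. */
        MPI_Waitall(2 * N, reqs, MPI_STATUSES_IGNORE);

        MPI_Finalize();
        return 0;
    }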