News: Open-MX maintenance ending. (2015/12)
News: Clustermonkey reports on Open-MX use in the Limulus Project for switchless 10G experiments with a bonded loop. (2013/04/11)
News: The Research, Computing & Engineering website hosts a podcast interview with the Open-MX project leader. (2009/03/15)
News: Linux Magazine talks about Open-MX in an article about good old Ethernet. (2009/02/25)
News: Clustermonkey published an article about Open-MX and links to videos of an Open-MX talk recently given at the STFC Daresbury Laboratory in the UK. (2009/01/15)
Open-MX is a high-performance implementation of the Myrinet Express message-passing stack over generic Ethernet networks. It provides both application-level and wire-protocol compatibility with the native MXoE (Myrinet Express over Ethernet) stack.
The following middleware are known to work flawlessly on Open-MX using their native MX backend thanks to the ABI and API compatibility: Open MPI, Argonne's MPICH2/Nemesis, Myricom's MPICH-MX and MPICH2-MX, PVFS2, Intel MPI (using the new TMI interface), Platform MPI (formerly known as HP-MPI), NewMadeleine, and NetPIPE.
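As an illustration of this application-level compatibility, here is a minimal sketch of the sender side of an MX program. It is not taken from the Open-MX documentation; it assumes the usual `myriexpress.h` interface (`mx_init`, `mx_open_endpoint`, `mx_hostname_to_nic_id`, `mx_connect`, `mx_isend`, `mx_wait`), and the peer hostname, endpoint id, key, and match value are arbitrary example values. The same source is expected to build and run unchanged on top of Open-MX's MX-compatible library.

```c
/* Hypothetical sender sketch: the peer hostname comes from the command line,
 * the endpoint key 0x1234 and match info 0x42 are arbitrary example values,
 * and error checking is omitted for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <myriexpress.h>

int main(int argc, char *argv[])
{
    mx_endpoint_t ep;
    mx_endpoint_addr_t peer;
    mx_segment_t seg;
    mx_request_t req;
    mx_status_t status;
    uint64_t peer_nic_id;
    uint32_t result;
    char buffer[] = "hello over Open-MX";

    if (argc < 2) {
        fprintf(stderr, "usage: %s <peer-hostname>\n", argv[0]);
        return EXIT_FAILURE;
    }

    mx_init();                                         /* initialize the MX (or Open-MX) library */
    mx_open_endpoint(MX_ANY_NIC, MX_ANY_ENDPOINT,      /* open a local endpoint on any board */
                     0x1234, NULL, 0, &ep);

    mx_hostname_to_nic_id(argv[1], &peer_nic_id);      /* resolve the peer board from its hostname */
    mx_connect(ep, peer_nic_id, 0, 0x1234,             /* connect to endpoint 0 on that board */
               MX_INFINITE, &peer);

    seg.segment_ptr = buffer;
    seg.segment_length = sizeof(buffer);
    mx_isend(ep, &seg, 1, peer, 0x42, NULL, &req);     /* post a send with match info 0x42 */
    mx_wait(ep, &req, MX_INFINITE, &status, &result);  /* block until the send completes */

    mx_close_endpoint(ep);
    mx_finalize();
    return 0;
}
```

A matching receiver would open endpoint 0 with the same key and post an `mx_irecv()` with the same match information before waiting on it.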
The design of Open-MX is described in several papers.
The FAQ contains answers to many questions about Open-MX usage, configuration, and so on.
Open-MX implements:
The following features will be available in the near future:
Requirements:
To get the latest Open-MX news, or for discussion regarding Open-MX development, you should subscribe to the open-mx mailing list. See also the news archive.
If you need help to tune your installation for Open-MX, please refer to the Performance tuning section of the FAQ.
Raw outputs of the Intel MPI Benchmarks (IMB) and NetPIPE, run with Open MPI or MPICH-MX on top of Open-MX, are available below. For comparison purposes, the performance of Open MPI's TCP BTL component is also given (using the exact same host and 10G interface configuration). For configuration details, see the headers of the corresponding output files. Direct access to all raw performance numbers is available here.
| Benchmark (latency - throughput) | MPICH-MX/Open-MX | Open MPI/Open-MX | Open MPI/TCP |
|---|---|---|---|
| NetPIPE (link width: 9491 Mbps) | 7.05 µs - 9367 Mbps | 7.22 µs - 9106 Mbps | 15.07 µs - 6462 Mbps |
| IMB (link width: 1186 MiB/s) | 7.02 µs - 1160 MiB/s | 7.21 µs - 1124 MiB/s | 14.54 µs - 825 MiB/s |
The Open-MX latency depends on the processor frequency. For instance, if you replace the 2.33 GHz "Clovertown" Xeons (E5345) used in the above tests with 3.16 GHz "Harpertown" Xeons (X5460), the latency drops to 6.18 µs.
IMB performance numbers on Gigabit Ethernet interfaces (Broadcom bnx2), with Open MPI/Open-MX and Open MPI/TCP, are also available.
Since Open-MX also provides an efficient shared-memory communication model, the IMB performance on top of MPICH-MX is also available for the following runs:
Answers to many questions about Open-MX usage and configuration can be found in the FAQ. Bug reports and questions should be posted as GitLab issues or on the open-mx mailing list. See the end of README.md in the source tree for details.
Open-MX was developed by the Inria Bordeaux Research Centre (formerly the Runtime team-project) in collaboration with Myricom, Inc. The main contributors are Brice Goglin, Nathalie Furmento, and Ludovic Stordeur.
Open-MX development resources are maintained on the Inria GitLab project.
Last updated on 2022/03/03.