
ExaMPI13 - Exascale MPI 2013

ExaMPI13 - Workshop on Exascale MPI at SC13

Workshop program:

8.30 – 8.40 Introduction and Goals of the Workshop – E. Laure/M. Parsons/S. Markidis

8.40 – 9.20 Keynote Talk: MPI beyond 3.0 and towards Larger-scale Computing – T. Hoefler

9.20 – 9.40 Large Scale Message-Passing Challenges in EPiGRAM – J. Träff

9.40 – 10.00 Message-Passing Challenges in DEEP’s Heterogeneous Cluster-Booster Architecture – N. Eicker

10.00 – 10.30 Coffee break

10.30 – 11.10 Invited Talk: Message Passing to GPU and Intel MIC – P. Balaji

11.10 – 11.50 Invited Talk: Interaction of PGAS and Message-Passing Programming Models at Exascale – C. Simmendinger

11.50 – 12.00 Concluding Remarks and Future of the Workshop – E. Laure/M. Parsons/S. Markidis

November 22, 2013, 9.00 – 13.00, Denver, Colorado, USA

Co-located with IEEE/ACM International Conference for High Performance Computing, Networking, Storage and Analysis (SC13)

The MPI design and its main implementations have proved surprisingly scalable. Certain issues that hampered scalability were addressed in the MPI 2.1 – 2.2 definition process, and this work continued into MPI 3.0. MPI has thus proved robust and able to evolve without fundamental changes to its model and specification. For this and many other reasons, MPI is currently the de facto standard for HPC systems and applications.
However, there is a need to re-examine the message-passing model for extreme-scale systems, which are characterized by asymptotically decreasing local memory and highly localized communication networks. Likewise, there is a need to explore new, innovative, and potentially disruptive concepts and algorithms, in part to explore roads other than those taken by the recently released MPI 3.0 standard.
The aim of the workshop is to bring together developers and researchers to present and discuss innovative algorithms and concepts in message-passing programming models, in particular related to MPI.

Topics of interest include (but are not limited to):

  • Development of scalable Message Passing collective operations.
  • Communication topology mapping interfaces and algorithms.
  • Innovative algorithms for scheduling/routing to avoid network congestion.
  • Integrated use of structured data layout descriptors.
  • One-sided communication models and RDMA-based MPI.
  • MPI and threading, and threading requirements on the OS.
  • Interoperability of Message Passing and PGAS models.
  • Integration of task-parallel models into Message Passing models.
  • Fault tolerance in MPI.
  • MPI I/O.


Paper Submission and Publication

Authors are invited to submit original manuscripts in English, structured as technical papers not exceeding 8 letter-size (8.5 x 11 in.) pages, including figures, tables, and references, using the IEEE format for conference proceedings. Reference style files are available at <>.

All manuscripts will be reviewed and judged on correctness, originality, technical strength, significance, quality of presentation, and interest and relevance to the workshop attendees. Submitted papers must represent original, unpublished research that is not currently under review for any other conference or journal. At least one author of an accepted paper must register for and attend the workshop. Papers should be submitted electronically at: <>.

Important Dates

  • Paper submission: September 14, 2013
  • Acceptance notification: October 13, 2013
  • Final papers due: November 10, 2013


Organizers
  • Erwin Laure, KTH Royal Institute of Technology, Sweden
  • Stefano Markidis, KTH Royal Institute of Technology, Sweden
  • Jesper Larsson Träff, Vienna University of Technology, Austria
  • Ewing Lusk, Argonne National Laboratory, IL, USA
  • Marc Snir, University of Illinois at Urbana Champaign, IL, USA
  • Yutaka Ishikawa, University of Tokyo, Japan
  • Mark Parsons, Edinburgh Parallel Computing Centre, UK
  • Lorna Smith, Edinburgh Parallel Computing Centre, UK
  • Stephen Booth, Edinburgh Parallel Computing Centre, UK
  • Daniel Holmes, Edinburgh Parallel Computing Centre, UK
  • Mirko Rahn, Fraunhofer Society, Germany


This Workshop is supported by the CRESTA and EPiGRAM EC projects.