Collective Asynchronous Remote Invocation (CARI): A High-Level and Efficient Communication API for Irregular Applications

The Message Passing Interface (MPI) standard continues to dominate the landscape of parallel computing as the de facto API for writing large-scale scientific applications. But critics argue that it is a low-level API that is harder to use than shared memory approaches. This paper addresses the...


Bibliographic Details
Main Authors: Ahmad, Wakeel (Author), Carpenter, Bryan (Author), Shafi, Aamir (Author), Shafi, Muhammad Aamir (Contributor)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor)
Format: Article
Language:English
Published: Elsevier, 2015-03-10T16:31:21Z.
Subjects:
Online Access: Get fulltext
LEADER 02798 am a22002053u 4500
001 95929
042 |a dc 
100 1 0 |a Ahmad, Wakeel  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
100 1 0 |a Shafi, Muhammad Aamir  |e contributor 
700 1 0 |a Carpenter, Bryan  |e author 
700 1 0 |a Shafi, Aamir  |e author 
700 1 0 |a Shafi, Muhammad Aamir  |e author 
245 0 0 |a Collective Asynchronous Remote Invocation (CARI): A High-Level and Efficient Communication API for Irregular Applications 
260 |b Elsevier,   |c 2015-03-10T16:31:21Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/95929 
520 |a The Message Passing Interface (MPI) standard continues to dominate the landscape of parallel computing as the de facto API for writing large-scale scientific applications. But critics argue that it is a low-level API that is harder to use than shared memory approaches. This paper addresses the issue of programming productivity by proposing a high-level, easy-to-use, and efficient programming API that hides and segregates complex low-level message passing code from the application-specific code. Our proposed API is inspired by communication patterns found in Gadget-2, an MPI-based parallel production code for cosmological N-body and hydrodynamic simulations. In this paper we analyze Gadget-2 with a view to understanding what high-level Single Program Multiple Data (SPMD) communication abstractions might be developed to replace the intricate use of MPI in such an irregular application, and to do so without compromising efficiency. Our analysis revealed that the use of low-level MPI primitives, bundled with the computation code, makes Gadget-2 difficult to understand and probably hard to maintain. In addition, we found that the original Gadget-2 code contains a small handful of complex and recurring patterns of message passing. We also noted that these complex patterns can be reorganized into a higher-level communication library with some modifications to the Gadget-2 code. We present the implementation and evaluation of one such message passing pattern (or schedule) that we term Collective Asynchronous Remote Invocation (CARI). As the name suggests, CARI is a collective variant of Remote Method Invocation (RMI), an attractive, high-level, and established paradigm in distributed systems programming. The CARI API might be implemented in several ways; we develop and evaluate two versions of this API on a compute cluster. 
The performance evaluation reveals that the CARI versions of the Gadget-2 code perform as well as the original Gadget-2 code while raising the level of abstraction considerably. 
546 |a en_US 
655 7 |a Article 
773 |t Procedia Computer Science