Leveraging task-parallelism in message-passing dense matrix factorizations using SMPSs

Alberto F. Martín, Ruymán Reyes, Rosa M. Badia, Enrique S. Quintana-Ortí*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

In this paper, we investigate how to exploit task-parallelism during the execution of the Cholesky factorization on clusters of multicore processors with the SMPSs programming model. Our analysis reveals that the major difficulties in adapting the ScaLAPACK code for this operation to SMPSs lie in algorithmic restrictions and the semantics of the SMPSs programming model, but also that both can be overcome with a limited programming effort. The experimental results report considerable gains in performance and scalability for the routine parallelized with SMPSs, compared both with conventional approaches to executing the original ScaLAPACK implementation in parallel and with two recent message-passing routines for this operation. In summary, our study opens the door to reusing message-passing legacy codes/libraries for linear algebra, introducing up-to-date techniques such as dynamic out-of-order scheduling that significantly upgrade their performance while avoiding a costly rewrite/reimplementation.
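The task-parallel structure the abstract refers to can be illustrated with a tiled right-looking Cholesky factorization. The sketch below is illustrative only, not the paper's ScaLAPACK/SMPSs code: it runs sequentially in pure Python, and the comments mark the `in`/`inout` data annotations from which an SMPSs-style runtime would build a task dependency graph and schedule the kernels (POTRF, TRSM, SYRK/GEMM) dynamically and out of order. All function and variable names here are hypothetical.

```python
# Illustrative tiled right-looking Cholesky (A = L * L^T, L lower triangular).
# Sequential pure-Python sketch; comments show the per-tile data directions
# an SMPSs-style runtime would use to infer dependences between tasks.

def potrf(a):
    """Dense Cholesky of one square tile; returns the lower-triangular factor."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = a[j][j] - sum(l[j][k] ** 2 for k in range(j))
        l[j][j] = s ** 0.5
        for i in range(j + 1, n):
            l[i][j] = (a[i][j] - sum(l[i][k] * l[j][k] for k in range(j))) / l[j][j]
    return l

def trsm(l, b):
    """Solve X * L^T = B for X, with L lower triangular (per-row substitution)."""
    n = len(l)
    x = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            x[i][j] = (b[i][j] - sum(x[i][k] * l[j][k] for k in range(j))) / l[j][j]
    return x

def gemm_sub(c, a, b):
    """C := C - A * B^T (covers the SYRK update as well when A is B)."""
    n = len(c)
    return [[c[i][j] - sum(a[i][k] * b[j][k] for k in range(n))
             for j in range(n)] for i in range(n)]

def tiled_cholesky(tiles):
    """tiles: t x t grid of b x b tiles of a symmetric positive definite matrix.
    Overwrites the lower-triangular tiles with those of the Cholesky factor."""
    t = len(tiles)
    for k in range(t):
        tiles[k][k] = potrf(tiles[k][k])                  # inout: A[k][k]
        for i in range(k + 1, t):
            tiles[i][k] = trsm(tiles[k][k], tiles[i][k])  # in: A[k][k], inout: A[i][k]
        for i in range(k + 1, t):
            for j in range(k + 1, i + 1):
                # in: A[i][k], A[j][k]; inout: A[i][j]
                tiles[i][j] = gemm_sub(tiles[i][j], tiles[i][k], tiles[j][k])
    return tiles
```

Because each kernel's reads and writes are declared per tile, the runtime can overlap, e.g., the TRSM tasks of step `k` with GEMM updates of step `k-1` whenever their tiles do not conflict, which is the dynamic out-of-order scheduling the abstract credits for the performance gains.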

Original language: English
Pages (from-to): 113-128
Number of pages: 16
Journal: Parallel Computing
Volume: 40
Issue number: 5-6
DOIs
Publication status: Published - May 2014
Externally published: Yes
