Parallel MCGLS and ICGLS methods for least squares problems on distributed memory architectures

Laurence Tianruo Yang*, Richard P. Brent

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

In this paper we mainly study the parallelization of the CGLS method, a basic iterative method for large and sparse least squares problems in which the conjugate gradient method is applied to solve the normal equations. On modern parallel architectures its parallel performance is always limited by the global communication required for inner products, the main bottleneck of parallel performance. In this paper, we describe a modified CGLS (MCGLS) method which improves parallel performance by assembling the results of a number of inner products collectively and by creating situations where communication can be overlapped with computation. More importantly, we also propose an improved CGLS (ICGLS) method which halves the number of global synchronization points required for inner products, thereby significantly improving parallel performance compared with both the standard CGLS method and the MCGLS method.
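For reference, a minimal serial sketch of the standard CGLS baseline is given below (this is the textbook algorithm the abstract starts from, not the authors' MCGLS/ICGLS variants; the NumPy code and variable names are illustrative assumptions). The comments mark the two inner products per iteration that, on a distributed memory machine, each become a global reduction and hence a synchronization point — the cost that MCGLS overlaps with computation and ICGLS halves.

```python
import numpy as np

def cgls(A, b, tol=1e-12, max_iter=100):
    """Standard CGLS: conjugate gradients applied to the normal
    equations A^T A x = A^T b, without forming A^T A explicitly."""
    x = np.zeros(A.shape[1])
    r = b.copy()           # residual b - A x
    s = A.T @ r            # normal-equations residual A^T r
    p = s.copy()           # search direction
    gamma = s @ s          # inner product #1: a global reduction in parallel
    for _ in range(max_iter):
        q = A @ p
        delta = q @ q      # inner product #2: a second global reduction
        alpha = gamma / delta
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s  # inner product #1 of the next iteration
        if gamma_new < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Small overdetermined example: fit a line through three points.
A = np.array([[1., 0.], [1., 1.], [1., 2.]])
b = np.array([1., 2., 4.])
x = cgls(A, b)
```

Because the two reductions occur at different points in the loop, a straightforward parallel implementation must synchronize all processors twice per iteration; rearranging the recurrences so both reductions can be combined into one collective operation is the essence of the ICGLS idea described above.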

Original language: English
Pages (from-to): 145-156
Number of pages: 12
Journal: Journal of Supercomputing
Volume: 29
Issue number: 2
DOIs
Publication status: Published - Aug 2004
Externally published: Yes
