
Monday, August 17, 2015

The inverse fast multipole method: using a fast approximate direct solver as a preconditioner for dense linear systems



If you know the influence the Fast Multipole Method has had on engineering and science over the past two decades, you know that any improvement is a big deal. It is nice to see one coming from a decomposition that is not unknown to us. From the paper:

The first step in obtaining such a solver is to exploit the matrix's H2-structure in order to convert the original dense system into an extended sparse system, as solving the latter is more efficient from a computational point of view. The sparsification procedure is based on recursively decomposing a dense matrix into a sparse matrix (representing the "near field" interactions) and a low-rank approximation (representing the "far field" interactions), combined with the introduction of auxiliary variables. This is done in a multilevel fashion. Similar ideas have been considered for HSS matrices in [13,31].
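To make that sparsification concrete, here is a minimal one-level sketch in Python/SciPy. This is our illustration rather than the authors' code: two well-separated 1D clusters provide the near/far split, a truncated SVD stands in for the interpolation-based compression used in H2/FMM codes, and the kernel, geometry and rank r are arbitrary choices.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, r = 100, 20                     # points per cluster, far-field rank (illustrative values)
x = np.concatenate([rng.uniform(0, 1, n), rng.uniform(3, 4, n)])  # two well-separated clusters
A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))                 # dense smooth-kernel matrix

# "Near field": interactions within each cluster, kept exactly
# (block diagonal here, hence sparse in pattern).
S = A.copy()
S[:n, n:] = 0.0
S[n:, :n] = 0.0

# "Far field": interactions between the clusters. These blocks are numerically
# low rank because the kernel is smooth for well-separated points; a truncated
# SVD provides the low-rank approximation A - S ~ U @ Vt.
U, sv, Vt = np.linalg.svd(A - S)
U, Vt = U[:, :r] * sv[:r], Vt[:r, :]

# Introduce the auxiliary variable y = Vt @ x, which turns (S + U Vt) x = b
# into the extended sparse system  [S  U; Vt  -I] [x; y] = [b; 0].
E = sp.bmat([[sp.csr_matrix(S), sp.csr_matrix(U)],
             [sp.csr_matrix(Vt), -sp.identity(r)]], format="csc")
b = rng.standard_normal(2 * n)
sol = spla.spsolve(E, np.concatenate([b, np.zeros(r)]))
print(np.linalg.norm(A @ sol[:2 * n] - b) / np.linalg.norm(b))  # small; set by the rank-r truncation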

The recursion in that decomposition is not new to us either: it parallels several works featured in Sunday Morning Insight: Matrix Factorizations and the Grammar of Life (of related interest, Sunday Morning Insight: Of Fluid Autoencoders and Data Tsunamis). Without further ado, here is The inverse fast multipole method: using a fast approximate direct solver as a preconditioner for dense linear systems by Pieter Coulier, Hadi Pouransari, and Eric Darve:

Although some preconditioners are available for solving dense linear systems, there are still many matrices for which preconditioners are lacking, in particular in cases where the size of the matrix N becomes very large. There remains hence a great need to develop general purpose preconditioners whose cost scales well with the matrix size N. In this paper, we propose a preconditioner with broad applicability and with cost O(N) for dense matrices, when the matrix is given by a smooth kernel. Extending the method using the same framework to general H2-matrices is relatively straightforward. These preconditioners have a controlled accuracy (machine accuracy can be achieved if needed) and scale linearly with N. They are based on an approximate direct solve of the system. The linear scaling of the algorithm is achieved by means of two key ideas. First, the H2-structure of the dense matrix is exploited to obtain an extended sparse system of equations. Second, fill-ins arising when performing the elimination are compressed as low-rank matrices if they correspond to well-separated interactions. This ensures that the sparsity pattern of the extended sparse matrix is preserved throughout the elimination, hence resulting in a very efficient algorithm with O(N log²(1/ε)) computational cost and O(N log(1/ε)) memory requirement, for an error tolerance 0 < ε < 1. The solver is inexact, although the error can be controlled and made as small as needed. These solvers are related to ILU in the sense that the fill-in is controlled. However, in ILU, most of the fill-in is simply discarded whereas here it is approximated using low-rank blocks, with a prescribed tolerance. Numerical examples are discussed to demonstrate the linear scaling of the method and to illustrate its effectiveness as a preconditioner.
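Continuing the sketch above (again ours, not the paper's IFMM implementation), one can see the idea announced in the title: a factorization of the extended sparse matrix E acts as a fast approximate direct solver for A, and wrapping it as a preconditioner lets GMRES converge in a handful of iterations. In the paper, the inexactness comes from compressing fill-in as low-rank blocks during the elimination itself; in this toy example E is small enough to factor exactly, so the only error is the rank-r truncation.

from scipy.sparse.linalg import LinearOperator, gmres, splu

lu = splu(E)   # direct factorization of the extended sparse system, done once

def apply_prec(v):
    # Apply (S + U Vt)^{-1} ~ A^{-1}: solve [S U; Vt -I][x; y] = [v; 0], return x.
    v = np.ravel(v)
    return lu.solve(np.concatenate([v, np.zeros(r)]))[:2 * n]

M = LinearOperator(A.shape, matvec=apply_prec)
x_it, info = gmres(A, b, M=M)   # converges in very few iterations with this M
print(info, np.linalg.norm(A @ x_it - b) / np.linalg.norm(b))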
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
