
Tuesday, March 04, 2014

Introduction to cross approximation / Fast multidimensional convolution in low-rank formats via cross approximation - implementation -

Ivan Oseledets just mentioned the release of a new algorithm within the TT-Toolbox that performs a matrix/tensor factorization he calls cross approximation. From the page:

Cross approximation: intro

Posted on: 2014-02-26

Skeleton decomposition

Cross approximation and skeleton decomposition are among the basic concepts in our research. It is all about low-rank matrices, which appear in many different applications: integral equations, tensor approximations and many others. Cross approximation is simple and elegant, but it is not as widely known as it should be. Suppose that an n×m matrix A has rank r. Then it can be exactly recovered from r columns and r rows as
$$A = C \hat{A}^{-1} R,$$
where C are r columns of A, R are r rows of A, and $\hat{A}$ is the r×r submatrix on their intersection. That means that only (n+m)r elements are required. What about the stability of this decomposition? Suppose A can be approximated by a low-rank matrix only approximately:
$$A = A_r + E,$$
with $\|E\| = \varepsilon$. How does one select a good submatrix? A good guide is the maximum volume: among all r×r submatrices, select the one with the largest volume, i.e. the largest absolute value of the determinant. Then the corresponding approximation satisfies
$$\|A - A_{\mathrm{maxvol}}\|_C \le (r+1)\,\sigma_{r+1},$$
where $\sigma_{r+1}$ is the (r+1)-th singular value of A and the error is measured in the Chebyshev norm (maximum absolute value).
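
As a quick illustration of the skeleton decomposition above, here is a minimal NumPy sketch (not TT-Toolbox code; the sizes, rank and index choices are arbitrary) that reconstructs an exactly rank-r matrix from r columns, r rows and their r×r intersection:

```python
import numpy as np

np.random.seed(0)
n, m, r = 200, 150, 5

# Build an exactly rank-r matrix A.
A = np.random.randn(n, r) @ np.random.randn(r, m)

# Pick r rows and r columns whose intersection is non-singular
# (the first r of each suffice for this random example).
rows = np.arange(r)
cols = np.arange(r)
C = A[:, cols]                   # n x r block of columns
R = A[rows, :]                   # r x m block of rows
A_hat = A[np.ix_(rows, cols)]    # r x r intersection submatrix

# Skeleton decomposition A = C * A_hat^{-1} * R,
# using only (n + m) * r entries of A.
A_rec = C @ np.linalg.solve(A_hat, R)

print(np.linalg.norm(A - A_rec) / np.linalg.norm(A))  # ~1e-15
```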

Cross approximation

A good question is how to compute (quasi-)optimal submatrices. One possibility is the cross approximation algorithm, which is equivalent to Gaussian elimination. The steps of the algorithm are simple:
  1. Find some pivot (i,j)
  2. Subtract a rank-1 cross from the matrix:
     $$a_{kl} := a_{kl} - \frac{a_{kj}\, a_{il}}{a_{ij}}.$$
  3. Cycle until the norm of the residual is small enough.
Different pivoting strategies are possible. Full pivoting chooses (i,j) to maximize the absolute value of the residual; this involves O(nm) work per step and is unacceptable. Partial pivoting can be done in different ways: from the simplest row pivoting scheme to rook schemes and random sampling.
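
For illustration, here is a hedged NumPy sketch of the cross approximation loop above with one simple partial-pivoting rule (pick the largest entry in the row of largest residual norm; this particular rule is my own choice, not one prescribed by the text or the TT-Toolbox). It keeps the full residual in memory for readability, whereas a practical implementation only evaluates the rows and columns it actually uses:

```python
import numpy as np

def cross_approximation(A, tol=1e-10, max_rank=50):
    """Gaussian-elimination-style cross approximation, returning C, R
    with A ~ C @ R.  Operates on the full residual for readability."""
    res = A.astype(float).copy()
    cols, rows = [], []
    for _ in range(max_rank):
        # pivoting: largest entry in the row with the largest residual norm
        i = np.argmax(np.linalg.norm(res, axis=1))
        j = np.argmax(np.abs(res[i, :]))
        pivot = res[i, j]
        if abs(pivot) < tol:
            break
        cols.append(res[:, j].copy())
        rows.append(res[i, :].copy() / pivot)
        # step 2: subtract the rank-1 cross from the residual
        res -= np.outer(cols[-1], rows[-1])
    return np.stack(cols, axis=1), np.stack(rows, axis=0)

# usage on a numerically low-rank (Cauchy-like) matrix
x = np.linspace(1.0, 2.0, 300)
A = 1.0 / (x[:, None] + x[None, :])
C, R = cross_approximation(A, tol=1e-8)
print(C.shape[1], np.linalg.norm(A - C @ R) / np.linalg.norm(A))
```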

Maximum-volume

The maximum-volume principle is the cornerstone of low-rank approximation algorithms. Although it is often claimed that a maximum-volume submatrix is hard to compute, there are very efficient algorithms for doing so. Perhaps the most prominent one is the algorithm that computes a maximum-volume submatrix in a tall n×r matrix. It is an iterative algorithm: it starts from a non-singular r×r submatrix and then substitutes one row at each step (a greedy approach). For this problem, convergence is very fast, and there are a few tricks to make it really efficient. Efficient implementations of the maxvol algorithm are available in both the MATLAB and Python versions of the TT-Toolbox.
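
Below is a minimal NumPy sketch of that greedy row-swapping procedure; it only illustrates the idea described above, it is not the TT-Toolbox maxvol routine, and it omits the efficiency tricks (such as rank-1 updates of the intermediate matrix):

```python
import numpy as np

def maxvol(A, tol=1.05, max_iter=100):
    """Greedy maxvol sketch: return r row indices of the tall n x r matrix A
    whose r x r submatrix has (quasi-)maximal volume."""
    n, r = A.shape
    # initial non-singular r x r submatrix via Gaussian elimination
    # with partial pivoting on a working copy of A
    W = A.astype(float).copy()
    idx = np.zeros(r, dtype=int)
    for k in range(r):
        i = int(np.argmax(np.abs(W[:, k])))
        idx[k] = i
        W -= np.outer(W[:, k] / W[i, k], W[i, :])
    # greedy swaps: with B = A * A[idx]^{-1}, replacing row idx[j] by row i
    # multiplies the volume |det A[idx]| by |B[i, j]|
    for _ in range(max_iter):
        B = A @ np.linalg.inv(A[idx])
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) <= tol:
            break
        idx[j] = i
    return idx

# usage
np.random.seed(1)
A = np.random.randn(1000, 10)
rows = maxvol(A)
print(sorted(rows.tolist()), abs(np.linalg.det(A[rows])))
```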
From the code page:
  - Tensor trains: the basic operations with tensors in the Tensor Train (TT) format have been implemented in the TT-Toolbox.
  - Block low-rank approximation techniques for large dense matrices.
The latest preprint on arXiv shows one use of it:

Fast multidimensional convolution in low-rank formats via cross approximation by M. V. Rakhuba, I. V. Oseledets

We propose a new cross-conv algorithm for approximate computation of convolution in different low-rank tensor formats (tensor train, Tucker, Hierarchical Tucker). It has better complexity with respect to the tensor rank than previous approaches. The new algorithm has a high potential impact in different applications. The key idea is based on applying cross approximation in the "frequency domain", where convolution becomes a simple elementwise product. We illustrate the efficiency of our algorithm by computing the three-dimensional Newton potential and by presenting preliminary results for solution of the Hartree-Fock equation on tensor-product grids.
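
The "elementwise product in the frequency domain" is the classical convolution theorem. A small 1D NumPy check of just that identity (not of the cross-conv algorithm itself or of the low-rank tensor formats it uses) might look like this:

```python
import numpy as np

np.random.seed(2)
n = 64
f = np.random.randn(n)
g = np.random.randn(n)

# direct circular convolution: h[k] = sum_j f[j] * g[(k - j) mod n]
h_direct = np.array([sum(f[j] * g[(k - j) % n] for j in range(n))
                     for k in range(n)])

# frequency domain: convolution becomes an elementwise product of FFTs
h_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.max(np.abs(h_direct - h_fft)))  # agrees to machine precision
```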



