Ivan Oseledets just mentioned the release of a new algorithm within the TT-Toolbox that performs a new matrix/tensor factorization, which he calls the cross approximation. From the page:

## Cross approximation: intro

Posted on: 2014-02-26

### Skeleton decomposition

*Cross approximation* and *skeleton decomposition* are among the basic concepts in our research. It is all about low-rank matrices, which appear in many different applications: integral equations, tensor approximations and many others. Cross approximation is simple and elegant, but it is not as widely known as it should be. Suppose that an n×m matrix A has rank r. Then it can be *exactly recovered* from r of its columns C and r of its rows R as A = C Â⁻¹ R, where Â is the r×r submatrix on the intersection of the chosen rows and columns. For a matrix that is only approximately of low rank, a good choice of Â is a submatrix of *maximum volume*: among all r×r submatrices, pick the one whose determinant has the largest absolute value.
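The exact-recovery property of the skeleton decomposition is easy to check numerically. The sketch below (names and the choice of leading rows/columns are illustrative, not TT-Toolbox API) builds a random rank-r matrix and reconstructs it from r rows and r columns:

```python
# Numerical check of the skeleton decomposition A = C @ inv(Ahat) @ R
# for a rank-r matrix. The leading r rows/columns are an arbitrary choice
# whose intersection happens to be non-singular for this random example.
import numpy as np

rng = np.random.default_rng(0)
r, n, m = 3, 8, 6
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))  # rank-r matrix

rows, cols = list(range(r)), list(range(r))   # chosen cross
C = A[:, cols]                 # n x r columns
R = A[rows, :]                 # r x m rows
Ahat = A[np.ix_(rows, cols)]   # r x r intersection submatrix

A_skel = C @ np.linalg.inv(Ahat) @ R
assert np.allclose(A_skel, A)  # exact recovery, up to round-off
```

For exactly rank-r matrices any cross with a non-singular Â works; the maximum-volume choice matters when A is only approximately low-rank.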

### Cross approximation

A good question is how to compute (quasi-)optimal submatrices. One possibility is the cross approximation algorithm, which is equivalent to Gaussian elimination. The steps of the algorithm are simple:

- Find some pivot (i′, j′).
- Subtract a rank-1 cross from the matrix: A_ij := A_ij − A_ij′ A_i′j / A_i′j′.
- Cycle until the norm of A is small enough.

Different pivoting strategies are possible. Full pivoting chooses (i′, j′) to maximize the absolute value of the residual; this costs O(nm) per step and is unacceptable. Partial pivoting schemes vary, from the simplest row pivoting to rook pivoting and random sampling.
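The steps above can be sketched in a few lines of numpy. This toy version uses the full pivoting the text warns about (a real implementation would use partial pivoting), and all names are my own, not the TT-Toolbox API:

```python
# Cross approximation by greedy rank-1 cross subtraction,
# equivalent to Gaussian elimination with full pivoting.
import numpy as np

def cross_approx(A, tol=1e-10, max_rank=None):
    """Return factors C (n x k) and R (k x m) with A ~= C @ R."""
    res = A.astype(float).copy()          # running residual
    n, m = res.shape
    k_max = max_rank or min(n, m)
    cols, rows = [], []
    for _ in range(k_max):
        # Full pivoting: take the residual entry of largest absolute value.
        i, j = np.unravel_index(np.argmax(np.abs(res)), res.shape)
        pivot = res[i, j]
        if abs(pivot) < tol:              # residual is small enough: stop
            break
        cols.append(res[:, j] / pivot)
        rows.append(res[i, :].copy())
        # Subtract the rank-1 cross: res_ab -= res_aj * res_ib / res_ij.
        res -= np.outer(cols[-1], rows[-1])
    return np.array(cols).T, np.array(rows)

# A rank-2 matrix is recovered exactly after two pivot steps.
A = np.outer([1., 2, 3, 4], [1., 0, 1]) + np.outer([0., 1, 0, 1], [2., 1, 0])
C, R = cross_approx(A)
assert np.allclose(C @ R, A)
```

Replacing the `argmax` over the whole residual with a search along a single row or column turns this into the cheap partial-pivoting variants mentioned above.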

### Maximum-volume

The maximum-volume principle is the cornerstone of low-rank approximation algorithms. Although it is often claimed that a maximum-volume submatrix is hard to compute, there are very efficient algorithms for doing that. Maybe the most prominent one is the algorithm that computes a maximum-volume submatrix in a tall n×r matrix. It is an iterative algorithm: it starts from a non-singular r×r submatrix and then substitutes one row at a step (a greedy approach). For this problem, convergence is very fast. There are a few tricks to make it really efficient. Efficient implementations of the **maxvol** algorithm are available in both the MATLAB and Python versions of the TT-Toolbox.
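A minimal sketch of that greedy row-substitution idea is below. For clarity it recomputes the coefficient matrix at every step, whereas the real maxvol algorithm uses cheap rank-1 updates; the function name and interface are illustrative, not the TT-Toolbox API:

```python
# Greedy search for a quasi-maximum-volume r x r submatrix of a tall matrix.
import numpy as np

def maxvol_sketch(A, tol=1.05, max_iters=100):
    """Return row indices of a quasi-max-volume r x r submatrix of the
    tall n x r matrix A. Assumes the first r rows are non-singular."""
    n, r = A.shape
    idx = list(range(r))                  # rows of the current submatrix
    for _ in range(max_iters):
        B = A @ np.linalg.inv(A[idx])     # every row of A in the basis A[idx]
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) <= tol:           # no swap grows |det| by more than tol
            break
        # Replacing row idx[j] by row i multiplies |det| by |B[i, j]|.
        idx[j] = i
    return sorted(idx)

# Rows 2 and 3 carry the largest volume (|det| = 6) in this 4 x 2 example.
A = np.array([[1., 0.], [0., 1.], [2., 0.], [0., 3.]])
assert maxvol_sketch(A) == [2, 3]
```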
From the code page:

- Tensor trains: the basic operations with tensors in the Tensor Train (TT) format have been implemented in the TT-Toolbox.
- Block low-rank approximation techniques for large dense matrices.

The latest arXiv preprint shows one use of it:

**Fast multidimensional convolution in low-rank formats via cross approximation** by M. V. Rakhuba, I. V. Oseledets

We propose a new cross-conv algorithm for approximate computation of convolution in different low-rank tensor formats (tensor train, Tucker, Hierarchical Tucker). It has better complexity with respect to the tensor rank than previous approaches. The new algorithm has a high potential impact in different applications. The key idea is based on applying cross approximation in the "frequency domain", where convolution becomes a simple elementwise product. We illustrate efficiency of our algorithm by computing the three-dimensional Newton potential and by presenting preliminary results for solution of the Hartree-Fock equation on tensor-product grids.
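The key idea quoted above can be illustrated without any low-rank formats: after a multidimensional FFT, circular convolution becomes a simple elementwise product. A minimal numpy check (this is just the frequency-domain identity, not the cross-conv algorithm itself):

```python
# Circular convolution of 3-D arrays via FFT: transform, multiply
# pointwise in the frequency domain, transform back.
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal((4, 4, 4))
g = rng.standard_normal((4, 4, 4))

conv_fft = np.fft.ifftn(np.fft.fftn(f) * np.fft.fftn(g)).real

# Direct circular convolution, for comparison: (f * g)[k] = sum_j f[j] g[k-j].
conv_direct = np.zeros_like(f)
for j in np.ndindex(f.shape):
    conv_direct += f[j] * np.roll(g, j, axis=(0, 1, 2))

assert np.allclose(conv_fft, conv_direct)
```

The cross-conv algorithm of the paper exploits exactly this elementwise structure, applying cross approximation to keep everything in low-rank tensor format.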

Join the CompressiveSensing subreddit or the Google+ Community and post there!

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
