Some of you may recall Non-Negative Matrix Factorization (NMF) as one of the techniques that can help in building dictionaries from training datasets: Gabriel Peyre, for instance, uses it to perform decompositions of textures, while Bob Plemmons applies it to the hyperspectral detection of space junk (explained here). These dictionaries are then used to reconstruct signals in Compressed Sensing, provided these signals have been acquired incoherently from said dictionaries. The NMF algorithm was developed starting with the work featured in the Nature article by Daniel Lee and Sebastian Seung entitled Learning the parts of objects by non-negative matrix factorization, and yet it never seemed to "look like" the other algorithms since, as Gabriel Peyre points out:
The problem of learning D and X requires to minimize the reconstruction error ||Y − DX|| subject to the constraints D, X ≥ 0. This minimization is not convex on both D and X, but following [22], an alternate minimization on each matrix can be carried using iterations of the following two steps: ... An explicit sparsity prior can be added into this process [23] to enforce the constraints ... required by equation (1). In practice, this did not result in a noticeable enhancement for texture modeling.
i.e., the problem is not convex, hence the use of specific iterative techniques is warranted, making the algorithm somewhat hard to generalize. Let us note (as we did earlier) that, by some sort of miracle and as noted by Gabriel, this technique also seems to converge toward the sparser solution. Well, a new paper entitled:
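For those who want to see what that alternating scheme looks like in practice, here is a minimal sketch of the classic Lee-Seung multiplicative updates (a sketch only; the function name, initialization, and iteration count are my choices, not something taken from the papers above):

```python
import numpy as np

def nmf_multiplicative(Y, r, n_iter=200, eps=1e-9, seed=0):
    """Minimal sketch of the Lee-Seung multiplicative updates for
    min ||Y - D X||_F^2  subject to  D, X >= 0.
    Alternates the two update steps; each step never increases the error."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    D = rng.random((m, r))                 # random non-negative initialization
    X = rng.random((r, n))
    for _ in range(n_iter):
        # update X with D fixed, then D with X fixed
        X *= (D.T @ Y) / (D.T @ D @ X + eps)
        D *= (Y @ X.T) / (D @ X @ X.T + eps)
    return D, X

# toy usage: a 20x30 non-negative matrix of rank 5
rng = np.random.default_rng(1)
Y = rng.random((20, 5)) @ rng.random((5, 30))
D, X = nmf_multiplicative(Y, r=5)
print(np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))
```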
Non-Negative Matrix Factorization, Convexity and Isometry by Nikolaos Vasiloglou, Alexander Gray, David Anderson features an NMF algorithm using Linear Programming techniques. This is fascinating given the importance of LP in the reconstruction stage of CS. The abstract reads:
In this paper we explore avenues for improving the reliability of dimensionality reduction methods such as Non-Negative Matrix Factorization (NMF) as interpretive exploratory data analysis tools. We first explore the difficulties of the optimization problem underlying NMF, showing for the first time that non-trivial NMF solutions always exist and that the optimization problem is actually convex, by using the theory of Completely Positive Factorization. We subsequently explore four novel approaches to finding globally-optimal NMF solutions using various ideas from convex optimization. We then develop a new method, isometric NMF (isoNMF), which preserves non-negativity while also providing an isometric embedding, simultaneously achieving two properties which are helpful for interpretation. Though it results in a more difficult optimization problem, we show experimentally that the resulting method is scalable and even achieves more compact spectra than standard NMF.
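To make the LP connection concrete: this is not the authors' algorithm, but if one replaces the Frobenius error by an ℓ1 error, each alternating step of NMF becomes a plain linear program. Here is a sketch of the update of X with D held fixed, written column by column (all names are mine):

```python
import numpy as np
from scipy.optimize import linprog

def nmf_lp_step(Y, D):
    """One alternating NMF step as a linear program (an illustration of the
    NMF/LP connection, not the isoNMF algorithm): with D held fixed, solve
        min ||Y - D X||_1   subject to   X >= 0
    column by column, via the standard split  -t <= y - D x <= t."""
    m, r = D.shape
    X = np.zeros((r, Y.shape[1]))
    c = np.concatenate([np.zeros(r), np.ones(m)])   # minimize the sum of slacks t
    I = np.eye(m)
    A_ub = np.block([[ D, -I],                      #  D x - t <=  y
                     [-D, -I]])                     # -D x - t <= -y
    for j, y in enumerate(Y.T):
        res = linprog(c, A_ub=A_ub, b_ub=np.concatenate([y, -y]))
        X[:, j] = res.x[:r]                         # default bounds give x, t >= 0
    return X
```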
When contacted on the availability of isoNMF, one of the authors, Nick, tells me that:
IsoNMF along with other machine learning algorithms will be available in the Fastlib/MLPACK library that has been pre-released, and it will have its first official release by December.
I'll keep an eye on that.
Here is a fascinating paper by Gabriel Peyre, Sebastien Bougleux, Laurent Cohen entitled Non-local Regularization of Inverse Problems. The abstract reads:
This article proposes a new framework to regularize linear inverse problems using the total variation on non-local graphs. This nonlocal graph allows to adapt the penalization to the geometry of the underlying function to recover. A fast algorithm computes iteratively both the solution of the regularization process and the non-local graph adapted to this solution. We show numerical applications of this method to the resolution of image processing inverse problems such as inpainting, super-resolution and compressive sampling.
This approach is likely to be applicable beyond image processing, to higher dimensional signals. It is fascinating for the following reasons:
- instead of using the sparsity of the unknown as a regularization tool, the authors use a functional based on the connections between pixels of the image. Using surrounding pixels to produce a higher dimensional space and then reducing that space is at the heart of three-dimensional cue extraction from monocular views. My point is that when using surrounding pixels, one is grabbing information about depth, i.e., a new dimension that can be inferred from a 2-D projection.
- it seems to be an algorithm that should be intrinsically well suited for manifolds, as diffusion methods have been shown to do well in a variety of dimensionality reduction tasks (see Ronald Coifman's Diffusion Geometries and Harmonic Analysis on Data Sets).
I look forward to some toolbox implementing this technique.
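Until such a toolbox appears, here is a back-of-the-envelope sketch of the key ingredient, the non-local graph: weights between pixels computed from patch similarity, which then weight a graph total variation penalty. This is my own illustration, not the authors' code, and the parameter names are assumptions:

```python
import numpy as np

def nonlocal_weights(img, patch=3, h=0.1):
    """Sketch: non-local graph weights w_ij = exp(-||p_i - p_j||^2 / h^2)
    between all pixel pairs, where p_i is the patch around pixel i.
    (Brute force, fine for tiny images; real code restricts the search window.)"""
    p = patch // 2
    padded = np.pad(img, p, mode='reflect')
    H, W = img.shape
    # stack the patch around every pixel as a row vector
    patches = np.array([padded[i:i + patch, j:j + patch].ravel()
                        for i in range(H) for j in range(W)])
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / h ** 2)

def graph_tv(u, w):
    """One common form of the graph total variation: sum_ij w_ij |u_i - u_j| / 2."""
    v = u.ravel()
    return (np.abs(v[:, None] - v[None, :]) * w).sum() / 2

# toy usage on a tiny random image
img = np.random.default_rng(0).random((8, 8))
print(graph_tv(img, nonlocal_weights(img)))
```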
Valerio Cambareri, mentioned yesterday, has made his presentation available in PDF and PPT formats.
And finally, Laurent Duval mentions to me that, next month, there is a conference at Ecole Polytechnique in Paris that features some people I have mentioned in this blog before. The conference is called Emerging Trends in Visual Computing (ETVC'08) and will take place on November 18th-20th. It is in the calendar. Registration is free until November 1st, 2008. The program lists the following talks of potential interest:
- Francis BACH (INRIA-ENS/Mines): Machine learning and kernel methods for computer vision
- Frederic BARBARESCO (Thales Air Systems, France): Applications of information geometry to radar signal processing (while this is not related to sparse sampling or CS, I mentioned him before because of his work on norm definition on a manifold)
- Stephane MALLAT (Ecole Polytechnique, Centre de Mathematiques Appliquees, CMAP, France): Sparse Geometric Super-Resolution
- Ramesh RASKAR (MIT Media Lab, USA): Computational Photography: Epsilon to Coded Imaging
- Martin VETTERLI (School of Computer and Communication Sciences & Dept. Elect. Eng. and Computer Sciences, EPFL, Switzerland): Sparse Sampling: Variations on a Theme by Shannon