Wednesday, May 27, 2009

CS: YALL1, RecPF, KSVD, Low Rate Sampling for Time Delay, Fast local algos for NMF, CI of Subwavelength Structures, GAMM meeting, GPUmat freeware


Yin Zhang just let me know that he and his team have released "...substantial enhancements... to RecPF v2.0 and YALL1 v.b4..."; in particular:

RecPF version 2.0 now has the following new features:
  • Switched to complex computation, fully compatible with complex data;
  • Replaced penalty parameter continuation with the alternating direction method of R. Glowinski and P. Le Tallec;
  • New parameter/data normalization was added to make the parameters largely independent of the image size, pixel intensity range, and number of CS measurements. To find the parameters aTV and aL1 for version 2.0 corresponding to those used in versions 1.1 or 1.0, see Lines 60, 66, and 67 of RecPF.m.

YALL1 version beta-4, meanwhile, no longer requires A*A' = I. I am happy to see that the counter on the YALL1 page shows more than 150 visits.
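
To make the A*A' = I remark concrete, here is a minimal alternating direction sketch for basis pursuit, min ||x||_1 subject to A*x = b. This is the generic textbook iteration, not the actual YALL1 or RecPF code; it simply shows why A*A' = I is convenient: the projection step reduces to a matrix-vector product.

% Minimal alternating direction (ADMM) sketch for basis pursuit:
%     min ||x||_1   subject to   A*x = b
% Generic textbook scheme -- NOT the actual YALL1/RecPF implementation.
rho = 1;                                        % penalty parameter
x = zeros(size(A,2), 1); z = x; u = x;
for k = 1:200
    v = z - u;
    x = v - A'*(A*v - b);                       % projection onto {x : A*x = b}, valid when A*A' = I
    z = sign(x+u) .* max(abs(x+u) - 1/rho, 0);  % soft-thresholding
    u = u + x - z;                              % multiplier update
end

Dropping the A*A' = I requirement, as YALL1 beta-4 now does, essentially amounts to replacing the projection line with a linear solve against A*A'.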

I'll add this information shortly to the reconstruction section of the big picture.


I just found a blog in Chinese on compressive sensing. It looks like the author is indeed reading the papers he mentions, which is great. In a different direction, some of the Chinese readers of this blog have difficulty getting to any blog hosted on blogger.com, including this one. One way around this is to subscribe to Nuit Blanche by e-mail (see the right side of this page).

Michael Elad just provided some context to his newly released paper that I mentioned earlier:
One of the most appealing results by Candes and Tao in their analysis of the Dantzig Selector (DS) is the proof that it is a log factor (and a constant) away from an oracle estimator that knows the original support. Can a similar claim be made with respect to Basis Pursuit (BP)? Together with Zvika Ben-Haim and Yonina Eldar we show that such a result is indeed attainable - see here. Furthermore, the constants obtained in our bounds are even better than those found in the DS analysis, suggesting (not claiming!!) that BP may be more accurate than DS.
The reference is: Zvika Ben-Haim, Yonina Eldar and Michael Elad, Near-Oracle Performance of Basis Pursuit Under Random Noise.
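
For reference, the two estimators solve the following standard programs (textbook definitions, not lifted from the paper): given noisy measurements b = A*x + w,

    Basis Pursuit (BPDN):    min ||x||_1   subject to   ||b - A*x||_2 <= epsilon
    Dantzig Selector (DS):   min ||x||_1   subject to   ||A'*(b - A*x)||_inf <= lambda

Both relax the combinatorial sparsity constraint to an l1 objective; they differ only in the norm used to bound the residual.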

In a different direction we also have: Dictionaries for Sparse Representation Modeling by Ron Rubinstein, Alfred Bruckstein, Michael Elad. The abstract reads:
Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be done using one of two ways: (i) building a sparsifying dictionary based on a mathematical model of the data; or (ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as Wavelets, Wavelet Packets, Contourlets, and Curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, as it is attached to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD and the K-SVD, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
Ron Rubinstein has just released an update to OMP-Box and KSVD-Box, from his page:
OMP-Box v9 Implementation of the Batch-OMP and OMP-Cholesky algorithms for quick sparse-coding of large sets of signals.

KSVD-Box v10 Implementation of the K-SVD and Approximate K-SVD dictionary training algorithms, and the K-SVD Denoising algorithm. Requires OMP-Box v9.

Latest update (May 21 2009): OMP-Box v9 and KSVD-Box v10 have been released! This is a major update which includes the complete source code for compiling the MEX files on any Matlab platform (packages were tested on Matlab 2007 and 2008 versions). Also included in this update are new demos, readme and faq files, and more!
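
Since Batch-OMP is at the heart of both packages, here is a bare-bones OMP loop for context. It is the textbook greedy algorithm, not the optimized Batch-OMP code in OMP-Box; D is a dictionary with unit-norm columns, y the signal, k the target sparsity.

% Textbook Orthogonal Matching Pursuit -- a plain sketch, not the
% optimized Batch-OMP of OMP-Box.
function x = omp_sketch(D, y, k)
r = y; S = [];
for i = 1:k
    [mx, j] = max(abs(D'*r));       % atom most correlated with residual
    S = [S, j];
    xs = D(:,S) \ y;                % least-squares fit on the chosen atoms
    r = y - D(:,S)*xs;              % update the residual
end
x = zeros(size(D,2), 1);
x(S) = xs;
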
I'll add this information to the dictionary section of the big picture.

Also found on the interwebs:

Low Rate Sampling Schemes for Time Delay Estimation by Kfir Gedalyahu, Yonina Eldar. The abstract reads:
Time delay estimation arises in many applications in which a multipath medium has to be identified from pulses transmitted through the channel. Various approaches have been proposed in the literature to identify time delays introduced by multipath environments. However, these methods either operate on the analog received signal, or require high sampling rates in order to achieve reasonable time resolution. In this paper, our goal is to develop a unified approach to time delay estimation from low rate samples of the output of a multipath channel. Our methods result in perfect recovery of the multipath delays from samples of the channel output at the lowest possible rate, even in the presence of overlapping transmitted pulses. This rate depends only on the number of multipath components and the transmission rate, but not on the bandwidth of the probing signal. In addition, our development allows for a variety of different sampling methods. By properly manipulating the low-rate samples, we show that the time delays can be recovered using the well-known ESPRIT algorithm. Combining results from sampling theory with those obtained in the context of direction of arrival estimation methods, we develop necessary and sufficient conditions on the transmitted pulse and the sampling functions in order to ensure perfect recovery of the channel parameters at the minimal possible rate.
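
As a toy illustration of the ESPRIT step alone (the delays show up as complex exponentials in the processed samples), here is a minimal sketch; the values are made up and the paper's actual low-rate sampling front end is omitted.

% Toy ESPRIT sketch: recover normalized delays t_k from noiseless
% samples x[n] = sum_k a_k * exp(-2i*pi*n*t_k). Illustrative only.
N = 64; K = 2;
t = [0.20; 0.55];                     % true delays (made-up values)
a = [1.0; 0.8];                       % made-up amplitudes
n = (0:N-1)';
x = exp(-2i*pi*n*t') * a;             % the samples
L = 32;
H = hankel(x(1:L), x(L:N));           % rank-K Hankel matrix
[U, S, V] = svd(H);
Us = U(:, 1:K);                       % signal subspace
Phi = Us(1:L-1, :) \ Us(2:L, :);      % shift (rotational) invariance
t_est = sort(mod(-angle(eig(Phi))/(2*pi), 1))  % recovered delays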

Fast local algorithms for large scale Nonnegative Matrix and Tensor Factorizations by A. Cichocki and A.-H. Phan. The abstract reads:
Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have become prominent techniques for blind source separation (BSS), analysis of image databases, data mining and other information retrieval and clustering applications. In this paper we propose a family of efficient algorithms for NMF/NTF, as well as sparse nonnegative coding and representation, that has many potential applications in computational neuroscience, multisensory processing, compressed sensing and multidimensional data analysis. We have developed a class of optimized local algorithms which are referred to as Hierarchical Alternating Least Squares (HALS) algorithms. For these purposes, we have performed sequential constrained minimization on a set of squared Euclidean distances. We then extend this approach to robust cost functions using the Alpha and Beta divergences and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based blind source separation (BSS) not only for the over-determined case but also for an under-determined (over-complete) case (i.e., for a system which has less sensors than sources) if data are sufficiently sparse. The NMF learning rules are extended and generalized for N-th order nonnegative tensor factorization (NTF). Moreover, these algorithms can be tuned to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the accuracy and computational performance of the developed algorithms, especially with the use of the multi-layer hierarchical NMF approach [3].
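
For context, the basic HALS iteration cycles through the factorization one rank-one term at a time. The following is a plain textbook sketch for X ≈ W*H with nonnegative factors, not the authors' optimized multi-layer code; X is assumed given and the rank r is made up.

% Plain HALS sketch for nonnegative factorization X ~ W*H.
% X: nonnegative m x n data matrix (assumed given); r: chosen rank.
[m, n] = size(X); r = 5;
W = rand(m, r); H = rand(r, n);
for it = 1:100
    for j = 1:r
        R = X - W*H + W(:,j)*H(j,:);    % data minus all other rank-one terms
        W(:,j) = max(R*H(j,:)', 0) / max(H(j,:)*H(j,:)', eps);
        H(j,:) = max(W(:,j)'*R, 0) / max(W(:,j)'*W(:,j), eps);
    end
end

Each column/row update is the exact nonnegative least-squares minimizer for that single factor, which is what makes the scheme local and cheap.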

This article is interesting on many levels: Compressive Imaging of Subwavelength Structures by Albert Fannjiang. The abstract reads:
The problem of imaging periodic structure (source or scatterer) is formulated in the framework of compressed sensing with special attention on subwavelength resolution. It is shown that in this formulation the subwavelength resolution in the presence of noise can not be achieved by compressed sensing techniques alone. Additional techniques such as near-field measurement or illumination are required and the resolution limit is derived, which says that the smallest stably resolvable scale is about either half the wavelength or the distance of the sensors to the target, whichever is smaller. Numerical simulations are consistent with the theoretical prediction.
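
In plain terms, the resolution limit stated in the abstract reads

    delta_min ≈ min(lambda/2, d),

where lambda is the wavelength and d is the distance from the sensors to the target: scales smaller than delta_min cannot be stably recovered from noisy data by the sparsity machinery alone.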

Some reconstruction schemes use graphics cards to accelerate the reconstruction process; here is some good news from GPGPU.org:


GPUmat, developed by the GP-You Group, allows Matlab code to benefit from the compute power of modern GPUs. It is built on top of NVIDIA CUDA. The acceleration is transparent to the user, only the declaration of variables needs to be changed using new GPU-specific keywords. Algorithms need not be changed. A wide range of standard Matlab functions have been implemented. GPUmat is available as freeware for Windows and Linux from the GP-You download page.
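
Based on that description, usage should look roughly like the sketch below. GPUstart and GPUsingle are the names I understand GPUmat to use; treat the exact API as an assumption and check the GP-You documentation.

% Rough GPUmat usage sketch -- the exact API is an assumption,
% see the GP-You documentation.
GPUstart                               % initialize the GPU
A = GPUsingle(rand(1000, 'single'));   % only the declarations change...
B = GPUsingle(rand(1000, 'single'));
C = A*B + sin(A);                      % ...the algorithm itself does not
Ch = single(C);                        % copy the result back to the host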

Finally, there is a meeting that I'll add later to the CS calendar:

GAMM-Seminar on Tensor Approximations, Leipzig, Germany, Feb 2010

The 26th GAMM Seminar in Leipzig will take place from Monday, February 22nd, until Wednesday, February 24th, 2010.

The topic of the workshop is "Tensor Approximations"
and related areas, e.g.,
* Low Rank Tensor Approximations
* Sparse Grids
* Compressed Sensing and Data-Sparse Recovery
* PDEs with Stochastic Parameters
* Numerical Methods for Quantum Chemistry
* Applications in High Dimensions

Invited speakers are
* Pierre Comon (Université Nice)
* Lieven De Lathauwer (KU Leuven)
* Lars Eldén (Linköping University)
* Markus Hegland (Australian National University)
* Christian Lubich (Universität Tübingen)
* Reinhold Schneider (TU Berlin)

See also the webpage
http://www.mis.mpg.de/scicomp/gamm26/index.html
Registration is open.
Credit: NASA/JPL/Space Science Institute. Saturn's rings as seen from Cassini two days ago.
