
Thursday, April 29, 2010

CS: Compressive Coded Apertures for High-Resolution Imaging and the Spring 2010 MATLAB Programming Contest featuring Compressive Sensing

In conjunction with yesterday's entry on the extraordinary Reinterpretable Imager of Amit Agrawal, Ashok Veeraraghavan and Ramesh Raskar, I caught a discussion between Zachary Harmany and Eric Trammel on Twitter that started with:
@fujikanaeda @igorcarron Much of CS requires very high SNRs to work well. We learned this when working on compressive coded aperture imaging
To which I replied by pointing to the excellent response from Gerry Skinner on the subject, as featured in this entry: A Short Discussion with Gerry Skinner, a Specialist in Coded Aperture Imaging. Of specific interest is this illuminating paper.

More specifically, I was interested in hearing Zachary tell me more about coded aperture, especially in the context of compressive sensing providing new tools to this old problem. As one can see from Gerry Skinner's interview, there is/was some distrust of nonlinear reconstruction methods because, it seems to me, they are/were not grounded in a good framework. Compressive sensing changes all that, and I am very keen on hearing how these new nonlinear techniques can go in directions people did not dare go before for lack of theoretical grounding. More specifically, when coded aperture started about 50 years ago, we did not have harmonic bases like wavelets, so can the whole dictionary business help us? The most sophisticated older methods used some type of greedy scheme: it looks like we now have better schemes, even some that aim at solving the l_0 problem instead of just the l_1. We also have some approaches to quantization and structured sparsity: how can we use those in new schemes that would make coded aperture a full-fledged imaging system? Finally, as one can see from yesterday's entry, we are now looking at 3-D data and compression in time, and while SNR is sometimes of paramount importance, at other times one is interested not in the best picture possible but rather in good-enough quality. How could the CS reconstruction methods highlighted above provide such good-enough results where linear methods like MURA could not? As it happens, Zachary pointed me to the paper they had recently presented in Europe. Yes, the one that got them stuck a little longer because of the ash cloud. So without further ado, here it is:

Compressive Coded Apertures for High-Resolution Imaging by Roummel Marcia, Zachary Harmany, and Rebecca Willett. The abstract reads:
Traditionally, optical sensors have been designed to collect the most directly interpretable and intuitive measurements possible. However, recent advances in the fields of image reconstruction, inverse problems, and compressed sensing indicate that substantial performance gains may be possible in many contexts via computational methods. In particular, by designing optical sensors to deliberately collect “incoherent” measurements of a scene, we can use sophisticated computational methods to infer more information about critical scene structure and content. In this paper, we explore the potential of physically realizable systems for acquiring such measurements. Specifically, we describe how given a fixed size focal plane array, compressive measurements using coded apertures combined with sophisticated optimization algorithms can significantly increase image quality and resolution.
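
To make the measurement model concrete, here is a minimal sketch of a coded-aperture observation as the 2-D convolution of a sparse scene with a random binary mask. This is just the textbook forward model, not the authors' optical setup or reconstruction algorithm, and all names and sizes in it are mine:

    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)

    # A random binary coded aperture and a sparse scene (a few point sources).
    mask = rng.integers(0, 2, size=(5, 5)).astype(float)
    scene = np.zeros((32, 32))
    scene[rng.integers(0, 32, 6), rng.integers(0, 32, 6)] = 1.0

    # Every detector pixel records a coded sum of several sources: the
    # measurements are multiplexed but still linear in the scene.
    obs = convolve2d(scene, mask, mode="same")

    # Recovery can then be posed as a sparse inverse problem,
    #     min ||x||_1  subject to  conv2(x, mask) ~ obs,
    # and handed to any l1 solver.
    print(obs.shape, np.count_nonzero(scene))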

The website for the Compressive Coded Aperture project at Duke is here.


I am glad that Roummel, Zachary, and Rebecca are on the right side of the force. To understand what the video shows, you need to go to the CCA site.

Thanks Zachary and Eric for the discussion.

Nicolas Cusseau just mentioned to me that MathWorks, the maker of MATLAB, is having a contest that features producing a reconstruction solver for a series of compressive sensing problems. The encoding uses measurement matrices with entries drawn from the set {0,1}. The rules are here:

Spring 2010 MATLAB Programming Contest, April 28-May 5 2010, Compressive Sensing is the 21st MATLAB Online Programming Contest.
The prize is a MATLAB license. Hurry up, it will be over next week on May 5th.
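
For those curious about what such a solver looks like, here is a minimal sketch of iterative soft-thresholding (ISTA) run against a {0,1} sensing matrix. The contest itself is in MATLAB and has its own input/output conventions, so treat this Python snippet purely as an illustration of the underlying problem; all parameter choices are mine:

    import numpy as np

    def ista(A, y, lam=0.05, n_iter=300):
        """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            r = x - A.T @ (A @ x - y) / L      # gradient step on the data term
            x = np.sign(r) * np.maximum(np.abs(r) - lam / L, 0.0)  # shrinkage
        return x

    rng = np.random.default_rng(1)
    n, m, k = 256, 100, 8
    A = rng.integers(0, 2, size=(m, n)).astype(float)   # {0,1} measurement matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true

    x_hat = ista(A, y)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))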

Thanks Nicolas

Wednesday, April 28, 2010

CS: Reinterpretable Imager: Towards Variable Post-Capture Space, Angle and Time Resolution in Photography

Ramesh Raskar just let me know of their new outstanding multiplexing imager, which is described in the following video:



We describe a novel multiplexing approach to achieve tradeoffs in space, angle and time resolution in photography. We explore the problem of mapping useful subsets of time-varying 4D lightfields in a single snapshot. Our design is based on using a dynamic mask in the aperture and a static mask close to the sensor. The key idea is to exploit scene-specific redundancy along spatial, angular and temporal dimensions and to provide a programmable or variable resolution tradeoff among these dimensions. This allows a user to reinterpret the single captured photo as either a high spatial resolution image, a refocusable image stack or a video for different parts of the scene in post-processing.

A lightfield camera or a video camera forces a-priori choice in space-angle-time resolution. We demonstrate a single prototype which provides flexible post-capture abilities not possible using either a single-shot lightfield camera or a multi-frame video camera. We show several novel results including digital refocusing on objects moving in depth and capturing multiple facial expressions in a single photo.


The webpage for the project is here.

Of interest is the implementation, illustrated on the project page.

I am looking forward to seeing the reconstruction stage, but it seems to me that, for now, they are not using compressive sensing reconstruction techniques that exploit the fact that the scene is sparse in some fashion. But let us not mistake ourselves here: this Reinterpretable Imager is already performing compressed sensing. The subtle issue is whether we can get more data out of this system using some of the powerful compressive sensing solvers that come with the assumption that the scene is sparse. I note the authors' mention of the Random Lens Imager. When I see the images produced by this camera, I cannot help but be reminded of Roummel Marcia and Rebecca Willett's slides (the paper is Compressive Coded Aperture Superresolution Image Reconstruction) or this more recent video (see here for more information on the Compressive Coded Aperture work at Duke/UC Merced), where it is shown that coded aperture can use compressive sensing techniques to provide better de-multiplexing than traditional inversion methods. In a previous entry, I noted that Ramesh and his team had made some headway in the world of compressive sensing (Coded Strobing Photography: Compressive Sensing of High-speed Periodic Events, "Things could be worse"). Once they do the same for this type of imager, one would hope to get more information rather than simply giving away spatial resolution for angle and time, as seems to be the case right now.

I am repeating myself, but Ramesh continues to advertise for people to join his group:

We are actively looking for Graduate Students, MEng, PostDocs and UROPs starting Spring and Fall 2010. Please see here if you are interested.


Forget your 9-to-5 job, go become a starving student at MIT, this is the Future!

CS: The long post of the week.


Let us start today's post with two presentations made at the latest NetSci workshop:
The following papers just appeared on arXiv (by the way, does anybody know how to use arXiv trackbacks with Blogger?)

Data Stream Algorithms for Codeword Testing by Atri Rudra, Steve Uurtamo. The abstract reads:
Motivated by applications in storage systems and property testing, we study data stream algorithms for local testing and tolerant testing of codes. Ideally, we would like to know whether there exist asymptotically good codes that can be local/tolerant tested with one-pass, poly-log space data stream algorithms. We show that for the error detection problem (and hence, the local testing problem), there exists a one-pass, log-space data stream algorithm for a broad class of asymptotically good codes, including the Reed-Solomon (RS) code and expander codes. In our technically more involved result, we give a one-pass, $O(e\log^2{n})$-space algorithm for RS (and related) codes with dimension $k$ and block length $n$ that can distinguish between the cases when the Hamming distance between the received word and the code is at most $e$ and at least $a\cdot e$ for some absolute constant $a>1$. For RS codes with random errors, we can obtain $e\le O(n/k)$. For folded RS codes, we obtain similar results for worst-case errors as long as $e\le (n/k)^{1-\epsilon}$ for any constant $\epsilon>0$. These results follow by reducing the tolerant testing problem to the error detection problem using results from group testing and the list decodability of the code. We also show that using our techniques, the space requirement and the upper bound of $e\le O(n/k)$ cannot be improved by more than logarithmic factors.

A Sparse Johnson--Lindenstrauss Transform by Anirban Dasgupta, Ravi Kumar, Tamás Sarlós. The abstract reads:
Dimension reduction is a key algorithmic tool with many applications including nearest-neighbor search, compressed sensing and linear algebra in the streaming model. In this work we obtain a {\em sparse} version of the fundamental tool in dimension reduction --- the Johnson--Lindenstrauss transform. Using hashing and local densification, we construct a sparse projection matrix with just $\tilde{O}(\frac{1}{\epsilon})$ non-zero entries per column. We also show a matching lower bound on the sparsity for a large class of projection matrices. Our bounds are somewhat surprising, given the known lower bounds of $\Omega(\frac{1}{\epsilon^2})$ both on the number of rows of any projection matrix and on the sparsity of projection matrices generated by natural constructions.
Using this, we achieve an $\tilde{O}(\frac{1}{\epsilon})$ update time per non-zero element for a $(1\pm\epsilon)$-approximate projection, thereby substantially outperforming the $\tilde{O}(\frac{1}{\epsilon^2})$ update time required by prior approaches. A variant of our method offers the same guarantees for sparse vectors, yet its $\tilde{O}(d)$ worst case running time matches the best approach of Ailon and Liberty.
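
To give a flavor of what a sparse projection looks like, here is a minimal sketch with a fixed number of random sign entries per column. This is a generic construction for illustration only, not the authors' hashing-and-local-densification scheme, and the sparsity parameter s is my choice:

    import numpy as np

    def sparse_projection(d, m, s, rng):
        """Random m x d projection with exactly s nonzero entries per column."""
        P = np.zeros((m, d))
        for j in range(d):
            rows = rng.choice(m, size=s, replace=False)
            P[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
        return P

    rng = np.random.default_rng(0)
    P = sparse_projection(d=1000, m=200, s=8, rng=rng)
    x, y = rng.standard_normal(1000), rng.standard_normal(1000)

    # Pairwise distances are approximately preserved after projection.
    print(np.linalg.norm(P @ (x - y)) / np.linalg.norm(x - y))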

Segmented compressed sampling for analog-to-information conversion: Method and performance analysis by Omid Taheri, Sergiy A. Vorobyov. The abstract reads:
A new segmented compressed sampling method for analog-to-information conversion (AIC) is proposed. An analog signal measured by a number of parallel branches of mixers and integrators (BMIs), each characterized by a specific random sampling waveform, is first segmented in time into $M$ segments. Then the sub-samples collected on different segments and different BMIs are reused so that a larger number of samples than the number of BMIs is collected. This technique is shown to be equivalent to extending the measurement matrix, which consists of the BMI sampling waveforms, by adding new rows without actually increasing the number of BMIs. We prove that the extended measurement matrix satisfies the restricted isometry property with overwhelming probability if the original measurement matrix of BMI sampling waveforms satisfies it. We also show that the signal recovery performance can be improved significantly if our segmented AIC is used for sampling instead of the conventional AIC. Simulation results verify the effectiveness of the proposed segmented compressed sampling method and the validity of our theoretical studies.

Pooling Design and Bias Correction in DNA Library Screening by Takafumi Kanamori and Hiroaki Uehara. The abstract reads:
We study the group test for DNA library screening based on probabilistic approach. Group test is a method of detecting a few positive items from among a large number of items, and has wide range of applications. In DNA library screening, positive item corresponds to the clone having a specified DNA segment, and it is necessary to identify and isolate the positive clones for compiling the libraries. In the group test, a group of items, called pool, is assayed in a lump in order to save the cost of testing, and positive items are detected based on the observation from each pool. It is known that the design of grouping, that is, pooling design is important to reduce the estimation bias and achieve accurate detection. In the probabilistic approach, positive clones are picked up based on the posterior probability. Naive methods of computing the posterior, however, involves exponentially many sums, and thus we need a device. Loopy belief propagation (loopy BP) algorithm is one of popular methods to obtain approximate posterior probability efficiently. There are some works investigating the relation between the accuracy of the loopy BP and the pooling design. Based on these works, we develop pooling design with small estimation bias of posterior probability, and we show that the balanced incomplete block design (BIBD) has nice property for our purpose. Some numerical experiments show that the bias correction under the BIBD is useful to improve the estimation accuracy.

Sparsity Pattern Recovery in Bernoulli-Gaussian Signal Model by Subhojit Som, Lee C Potter. The abstract reads:
In compressive sensing, sparse signals are recovered from underdetermined noisy linear observations. One of the interesting problems which attracted a lot of attention in recent times is the support recovery or sparsity pattern recovery problem. The aim is to identify the non-zero elements in the original sparse signal. In this article we consider the sparsity pattern recovery problem under a probabilistic signal model where the sparse support follows a Bernoulli distribution and the signal restricted to this support follows a Gaussian distribution. We show that the energy in the original signal restricted to the missed support of the MAP estimate is bounded above and this bound is of the order of energy in the projection of the noise signal to the subspace spanned by the active coefficients. We also derive sufficient conditions for no misdetection and no false alarm in support recovery.

This one paper is about compressed counting: On Practical Algorithms for Entropy Estimation and the Improved Sample Complexity of Compressed Counting by Ping Li. The abstract reads:
Estimating the p-th frequency moment of a data stream is a very heavily studied problem. The problem is actually trivial when p = 1, assuming the strict Turnstile model. The sample complexity of our proposed algorithm is essentially O(1) near p=1. This is a very large improvement over the previously believed O(1/eps^2) bound. The proposed algorithm makes the long-standing problem of entropy estimation an easy task, as verified by the experiments included in the appendix.
I also found the following on the interwebs: An improved approach in applying compressed sensing in parallel MR imaging by Wu, B., Watts, R., Millane, R., Bones, P. The abstract reads:
Both parallel MRI (pMRI, [1]) and compressed sensing (CS, [2]) allow image reconstruction from an under-sampled data set. The former exploits the redundancy in multiple receiver data sets whereas the latter exploits data redundancy in a sparse transform domain representation. Some success has already been reported in combining the two methods directly [3]. We report a new approach in which conventional pMRI and CS are cascaded to better exploit the individual strengths of the two methods.
and finally we have Jinwei Gu's thesis entitled Measurement, Modeling, and Synthesis of Time-Varying Appearance of Natural Phenomena. It is a 22 MB download. The abstract reads:

Many natural phenomena evolve over time — often coupled with a change in their reflectance and geometry — and give rise to dramatic effects in their visual appearance. In computer graphics, such time-varying appearance phenomena are critical for synthesizing photo-realistic images. In computer vision, understanding the formation of time-varying appearance is important for image enhancement and photometric-based reconstruction. This thesis studies time-varying appearance for a variety of natural phenomena — opaque surfaces, transparent surfaces, and participating media — using captured data. We have two main goals: (1) to design efficient measurement methods for acquiring time-varying appearance from the real world, and (2) to build compact models for synthesizing or reversing the appearance effects in a controllable way. We started with time-varying appearance for opaque surfaces. Using a computer controlled dome equipped with 16 cameras and 160 light sources, we acquired the first database (with 28 samples) of time-and-space-varying reflectance, including a variety of natural processes — burning, drying, decay and corrosion. We also proposed a space time appearance factorization model which disassembles the high-dimensional appearance phenomena into components that can be independently modified and controlled for rendering. We then focused on time-varying appearance of transparent objects. Real-world transparent objects are seldom clean — over time their surfaces will gradually be covered by a variety of contaminants, which produce the weathered appearance that is essential for photorealism. We derived a physically-based analytic reflectance model for recreating the weathered appearance in real time, and developed single-image based methods to measure contaminant texture patterns from real samples.
The understanding of the weathered appearance of transparent surfaces was also used
for removing image artifacts caused by dirty camera lenses. By incorporating priors on natural images, we developed two fully-automatic methods to remove the attenuation and scattering artifacts caused by dirty camera lenses. These image enhancement methods can be used for post-processing existing photographs and videos as well as for recovering clean images for automatic imaging systems such as outdoor security cameras.
Finally, we studied time-varying appearance of volumetric phenomena, such as smoke and liquid. For generating realistic animations of such phenomena, it is critical to obtain the time-varying volume densities, which requires either intensive modeling or extremely high speed cameras and projectors. By using structured light and exploring the sparsity of such natural phenomena, we developed an acquisition system to recover the time-varying volume densities, which is about 4 to 6 times more efficient than simple scanning. From the perspective of computer vision, our method provides a way to extend the applicable domain of structured light methods from 2D opaque surfaces to 3D volumetric phenomena.
I could not find a copy of the following articles but I wish I did (from here):

  • Zhu X, Bamler R (2010):Compressive Sensing for High Resolution Differential SAR Tomography.
  • Zhu X, Bamler R (2010): Super-resolution for 4-D SAR Tomography via Compressive Sensing.
  • Zhu X, Bamler R (2010): Tomographic SAR Inversion by L1 Norm Regularization - The Compressive Sensing Approach.




In this section of the ISMRM-ESMRMB meeting in Stockholm, we have the following abstracts:

k-T Group Sparse Reconstruction Method for Dynamic Compressed MRI
Muhammad Usman1, Claudia Prieto1, Tobias Schaeffter1, Philip G. Batchelor1
1King's College London, London, United Kingdom

Up to now, besides sparsity, the standard compressed sensing methods used in MR do not exploit any other prior information about the underlying signal. In general, the MR data in its sparse representation always exhibits some structure. As an example, for dynamic cardiac MR data, the signal support in its sparse representation (x-f space) is always in compact form. In this work, exploiting the structural properties of sparse representation, we propose a new formulation titled ‘k-t group sparse compressed sensing’. This formulation introduces a constraint that forces a group structure in sparse representation of the reconstructed signal. The k-t group sparse reconstruction achieves much higher temporal and spatial resolution than the standard L1 method at high acceleration factors (9-fold acceleration).
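
For readers unfamiliar with group sparsity, the constraint amounts to keeping or discarding whole groups of transform coefficients at once instead of individual entries. Here is a minimal sketch of the group soft-thresholding operation that typically enforces it; this is my generic illustration, not the authors' k-t reconstruction code:

    import numpy as np

    def group_soft_threshold(x, groups, lam):
        """Shrink each group of coefficients toward zero by its l2 norm."""
        out = np.zeros_like(x)
        for g in groups:
            norm = np.linalg.norm(x[g])
            if norm > lam:
                out[g] = (1.0 - lam / norm) * x[g]   # strong group survives, shrunk
        return out                                    # weak groups are zeroed

    x = np.array([3.0, 4.0, 0.1, -0.1, 1.0, -1.0])
    groups = [[0, 1], [2, 3], [4, 5]]
    print(group_soft_threshold(x, groups, lam=1.0))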

Parallel Imaging Technique Using Localized Gradients (PatLoc) Reconstruction Using Compressed Sensing (CS)
Fa-Hsuan Lin1, Panu Vesanen2, Thomas Witzel, Risto Ilmoniemi, Juergen Hennig3
1A. A. Martinos Center, Charlestown, MA, United States; 2Helsinki University of Technology, Helsinki, Finland; 3University Hospital Freiburg, Freiburg, Germany

The parallel imaging technique using localized gradients (PatLoc) system has the degree of freedom to encode spatial information using multiple surface gradient coils. Previous PatLoc reconstructions focused on acquisitions at moderate accelerations. Compressed sensing (CS) is the emerging theory to achieve imaging acceleration beyond the Nyquist limit if the image has a sparse representation and the data can be acquired randomly and reconstructed nonlinearly. Here we apply CS to PatLoc image reconstruction to achieve further accelerated image reconstruction. Specifically, we compare the reconstructions between PatLoc and traditional linear gradient systems at acceleration rates in an under-determined system.

Here is an announcement for a Workshop on Novel Reconstruction Strategies in NMR and MRI 2010
Göttingen, September 9-11, 2010
Aims and Scope

Several decades after the fundamental discoveries which opened the fields of nuclear magnetic resonance (NMR) spectroscopy and magnetic resonance imaging (MRI), both the underlying instrumentation and the respective applications have advanced to a fairly mature state, rendering NMR spectroscopy a key method for structural analysis of biomolecules and MRI one of the most important modalities for diagnostic imaging. For MRI, recent developments indicate that further progress can rather be expected from novel mathematical reconstruction techniques than from further advances in technology. At the same time, the application of new mathematical approaches facilitates the recording of NMR spectra with higher spectral resolution and in less measurement time, especially for multi-dimensional NMR spectroscopy. As NMR and MRI share the same physical principles, there are many similarities between the mathematical tools used for their analysis.

The aim of this workshop is to foster interaction of researchers working in NMR, MRI, and regularization of ill-posed problems, starting with tutorials in each of these fields.

Topics include parallel imaging, undersampling strategies, compressed sensing, non-quadratic regularization, nonlinear inversion methods, diffusion-weighted imaging and fiber tractography, parametric as well as nonparametric reconstruction of spectra, and non-uniform sampling in the indirect time domain.



On a different note, here is an offer for a post-doctoral position, originally posted in French and translated below:
Tags: job, phd, research position

Study of a compressive sensing scheme applicable to three-dimensional acquisition devices
Post-Doc
Deadline: 31/05/2010
christophe.stolz@u-bourgogne.fr
http://www.le2i.com

Context: Current commercial 3D acquisition systems bet on speed and resolution performance. The result is a 3D description that can turn out to be very redundant depending on the local or global characteristics of the object and on the desired resolution. A given planar surface of the object will be represented by thousands of points, which must then be reduced to a certain number depending on the mesh chosen a priori.

Objectives: In this project we propose to validate the so-called "compressive sensing" approach introduced by Candès [1] and Donoho [2], which is currently revolutionizing the world of sensors and in which the emphasis is placed on the "quality" of the representation rather than on its quantity. This quality is obtained using transform-based sensors: in our case, for example, it is no longer the "position of a point" that is measured but rather a contribution of several points weighted by a well-chosen function. The 3D information is therefore no longer obtained by a simple point-by-point 3D triangulation. Such a scheme finds a natural application, for example, in holography, known as a recording principle based on interference that makes it possible to "compress" 3D information into a 2D form, for which the Fourier basis [3] is a well-suited support.

We have several sensors intended for 3D acquisition: one based on polarimetric imaging, a second based on defocused light (3D reconstruction from a defocused light pattern), and a digital holography device.
As a first step, we would like to study, from the data produced by these sensors, to what extent those data can or cannot be represented in a sparse form so as to reconstruct the original signal. This work will be carried out on the images produced by these devices.
We will then consider adapting the principles of compressive sensing to some of these devices. In the case of the defocused-light prototype, for example, the difficulty lies in choosing the transform (realized with a particular light pattern) and in the 3D reconstruction from the acquired data, which will call for a nonlinear algorithm.

1. E. J. Candès and T. Tao, "Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?", IEEE Transactions on Information Theory 52 (12), 5406-5425 (2006).
2. D. L. Donoho, "Compressed Sensing", IEEE Transactions on Information Theory 52 (4), 1289-1306 (2006).
3. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki and S. Lim, "Compressive Holography", Opt. Express 17 (15), 13040-13049 (2009).

Advisor: Olivier LALIGANT
Co-advisors: Christophe STOLZ, Akram ALDROUBI (Vanderbilt University)
Funding: Région Bourgogne
Start of the post-doc: October 2010, for a duration of one year.

Tuesday, April 27, 2010

CS: Rank Awareness in Joint Sparse Recovery, CS-MUSIC, Dim reduction of covariance matrices, Blegs from around the blogs, Combinatorial testing.


The last entry, on Random Matrix Theory and Compressive Sensing and related subjects, triggered several reactions. First, Mike Davies just sent me the following:

Dear Igor

I would like to draw your attention to a new paper by myself and Yonina Eldar called "Rank Awareness in Joint Sparse Recovery".
This seems to follow a recent theme in Nuit Blanche regarding sparse Multiple Measurement Vector (MMV) recovery algorithms and the links with array signal processing [Igor's note: see CS: Random Matrix Theory and Compressive Sensing]. I have attached a copy of the paper, which should appear on arXiv tomorrow and also on our Edinburgh Compressed Sensing group website (http://ecos.maths.ed.ac.uk/).

The bottom line is that rank should be taken into account properly in the joint sparse recovery problem!

Specifically we show:

  1. rank based necessary and sufficient conditions for the sparse MMV problem showing rank helps (sufficiency was previously shown by Chen and Huo)
  2. rank based combinatorial search - again showing that rank helps
  3. almost all existing algorithms for joint sparse recovery are rank blind! That is, their worst case performance approaches that of the Single Measurement Vector (SMV) problem.
  4. we present some Rank Aware (RA) algorithms, most notably a RA-version of Order Recursive Matching Pursuit which seems to provide the best interpolation between the SMV case and guaranteed recovery in the full rank case.

I think this is just the beginning of a new flurry of research in MMV recovery algorithms. For example, it would be interesting to discover if there was a convex based joint sparse recovery algorithm that was fully rank aware - current mixed norm techniques are rank blind. Similarly are there Rank Aware versions of CoSaMP or IHT?

All the best

Mike
The full article is: Rank Awareness in Joint Sparse Recovery by Mike Davies and Yonina Eldar. The abstract reads:
In this paper we revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of single channel sparse recovery, which lies at the heart of the emerging field of compressed sensing. However, MMV approximation has origins in the field of array signal processing as we discuss in this paper. Inspired by these links, we introduce a new family of MMV algorithms based on the well-known MUSIC method in array processing. We particularly highlight the role of the rank of the unknown signal matrix X in determining the difficulty of the recovery problem. We begin by deriving necessary and sufficient conditions for the uniqueness of the sparse MMV solution, which indicates that the larger the rank of X the less sparse X needs to be to ensure uniqueness. We also show that as the rank of X increases, the computational effort required to solve the MMV problem through a combinatorial search is reduced.
In the second part of the paper we consider practical suboptimal algorithms for MMV recovery. We examine the rank awareness of popular methods such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques and show them to be rank blind in terms of worst case analysis. We then consider a family of greedy algorithms that are rank aware. The simplest such method is a discrete version of the MUSIC algorithm popular in array signal processing. This approach is guaranteed to recover the sparse vectors in the full rank MMV setting under mild conditions. We then extend this idea to develop a rank aware pursuit algorithm that naturally reduces to Order Recursive Matching Pursuit (ORMP) in the single measurement case. This approach also provides guaranteed recovery in the full rank setting. Numerical simulations demonstrate that the rank aware techniques are significantly better than existing methods in dealing with multiple measurements.
Thanks Mike.
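
To see why rank helps, here is a minimal sketch of the discrete MUSIC idea in the full-rank MMV setting: the measurements span the same subspace as the active columns of the sensing matrix, so the support can be read off from subspace correlations. This is my toy illustration of the principle described in the abstract, not the authors' code:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k, L = 20, 60, 5, 8            # measurements, ambient dim, sparsity, channels
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    support = np.sort(rng.choice(n, k, replace=False))
    X = np.zeros((n, L))
    X[support] = rng.standard_normal((k, L))   # jointly sparse, full rank k
    Y = A @ X

    # The k-dimensional signal subspace spanned by the measurements.
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    Us = U[:, :k]

    # MUSIC criterion: active columns of A lie entirely inside that subspace,
    # so their normalized correlation with it equals one.
    scores = np.linalg.norm(Us.T @ A, axis=0) / np.linalg.norm(A, axis=0)
    print(support, np.sort(np.argsort(scores)[-k:]))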

On Twitter, Giuseppe Paleologo mentioned:
@igorcarron, for dim reduction of covariance matrices, you can check out the important recent work of A. Onatski http://bit.ly/aEG4pv
Thanks Giuseppe.

Of interest is the following paper: Honey, I Shrunk the Sample Covariance Matrix by Olivier Ledoit and Michael Wolf (via @prometheia).
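
For reference, the Ledoit-Wolf idea is simply a convex combination of the sample covariance with a scaled-identity target. Here is a minimal sketch with a fixed shrinkage intensity; the paper derives an optimal, data-driven intensity instead (scikit-learn ships that version as sklearn.covariance.LedoitWolf), so the delta below is just my placeholder:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 100                       # fewer samples than variables
    X = rng.standard_normal((n, p))

    S = np.cov(X, rowvar=False)          # sample covariance: rank-deficient here
    mu = np.trace(S) / p                 # scaled-identity shrinkage target
    delta = 0.5                          # fixed intensity (Ledoit-Wolf estimate it)
    Sigma = (1 - delta) * S + delta * mu * np.eye(p)

    # Unlike S, the shrunk estimate is well-conditioned and invertible.
    print(np.linalg.cond(S), np.linalg.cond(Sigma))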

Jong Chul Ye just mentioned to me that the paper is out now: Compressive MUSIC: A Missing Link Between Compressive Sensing and Array Signal Processing by Jong Min Kim, Ok Kyun Lee, Jong Chul Ye. The abstract reads:
The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share common sparse support sets, and has many practical applications. Even though MMV problems have traditionally been addressed within the context of sensory array signal processing, the recent research trend is to apply compressive sensing (CS) theory, thanks to its capability to estimate sparse supports even with an insufficient number of snapshots, in which case classical array signal processing approaches fail. However, CS approaches guarantee the accurate recovery of the support only in a probabilistic manner, and often show inferior performance in the regime where the traditional array signal processing approaches succeed. The main contribution of the present article is, therefore, a unified approach that unveils a missing link between compressive sensing and array signal processing approaches for the multiple measurement vector problem. The new algorithm, which we call compressive MUSIC, identifies parts of the support using compressive sensing, after which the remaining support is estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we theoretically show that our compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than the existing compressive sensing methods, and approaches the optimal $l_0$-bound when a sufficient but finite number of snapshots is available. We also derive an explicit form of the minimum signal-to-noise ratio (SNR) expression required for the success of the generalized MUSIC criterion; the expression clearly indicates the dependency on the unknown source distribution. Finally, we show that the unknown sparsity level can be automatically estimated. We conclude the paper with numerical results demonstrating that compressive MUSIC outperforms the conventional algorithms.
Thanks Jong.

Around the blogs, we have three authors blegging for an answer:

Sarah is asking a question in:
Alex Gittens is asking another in:
And Bob Sturm is asking several in:
In Ben Recht's class there was an interesting presentation on Bug Hunting and Compressed Sensing by Suhail Shergill. Not much detail there, but it reminded me that maybe a CS-group testing type of approach might decrease the need for combinatorial testing in software (see also R. Kuhn, R. Kacker, Y. Lei, J. Hunter, "Combinatorial Software Testing", from here). On a related note, instead of looking for a defective element, maybe a CS-group testing type of method could be used efficiently in combinatorial chemistry or biology.
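
As a reminder of how cheaply group testing can localize a few defective items, here is a minimal sketch of nonadaptive random pooling with COMP-style decoding (declare clean anything that appears in a negative pool). It is a generic toy, not Shergill's or Kuhn et al.'s method, and all parameters are mine:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 100, 2, 25                  # items, defectives, pooled tests

    defective = np.zeros(n, dtype=bool)
    defective[rng.choice(n, d, replace=False)] = True

    pools = rng.random((m, n)) < 0.05     # random Bernoulli pooling design
    results = (pools.astype(int) @ defective.astype(int)) > 0

    # COMP decoding: any item that shows up in a negative pool is clean;
    # whatever survives is flagged as (possibly) defective.
    suspects = ~np.any(pools[~results], axis=0)
    print(np.flatnonzero(defective), np.flatnonzero(suspects))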



Five days ago there was a talk at Berkeley which sounded interesting:

Adaptive Data Analysis via Nonlinear Compressed Sensing

Seminar | April 22 | 2-3 p.m. | 740 Evans Hall

Speaker: Tom Hou, Caltech [who I thought was mostly into Navier-Stokes]

Sponsor: Department of Mathematics

We introduce an Instantaneous Fourier Analysis method to analyze multiscale nonlinear and non-stationary data. The purpose of this work is to find the sparsest representation of a multiscale signal using a basis that is adapted to the data instead of being prescribed a priori. Using a variational approach based on nonlinear L1 optimization, our method defines trends and the Instantaneous Frequency of a multiscale signal. One advantage of performing such a decomposition is to preserve some intrinsic physical property of the signal without introducing artificial scales or harmonics. For multiscale data that have a nonlinear sparse representation, we prove that our nonlinear optimization method converges to the exact signal with the sparse representation. Moreover, we will show that our method is insensitive to noise and robust enough to apply to realistic physical data. For general data that do not have a sparse representation, our method will give an approximate decomposition and the accuracy is controlled by the scale separation property of the original signal.



Image Credit: NASA/JPL/Space Science Institute, Titan.

Saturday, April 24, 2010

CS: Random Matrix Theory and Compressive Sensing

Following up on the recent entry on Random Matrix Theory, and a new blog entry entitled Bad News for PCA by Sarah, I initially used some of the following references to get a sense of how RMT is used. An interesting one is given in:


Financial Applications of Random Matrix Theory: a short review by Jean-Philippe Bouchaud, Marc Potters. The abstract reads:
We discuss the applications of Random Matrix Theory in the context of financial markets and econometric models, a topic to which a considerable number of papers have been devoted in the last decade. This mini-review is intended to guide the reader through various theoretical results (the Marcenko-Pastur spectrum and its various generalisations, random SVD, free matrices, largest eigenvalue statistics, etc.) as well as some concrete applications to portfolio optimisation and out-of-sample risk estimation.

On March 1, Marc gave a presentation explaining in fairly basic terms why RMT is of interest in the finance business:




One of the things that struck me was that it is actually relevant to some of the normalization we do in machine learning, which may or may not be useful. In essence, when you perform a normalization in diffusion geometries, you are removing some information about what some of the modes mean. But coming back to compressive sensing and its connection to RMT, in light of the use of RMT in array signal processing (Fundamental limit of sample generalized eigenvalue based detection of signals in noise using relatively few signal-bearing and noise-only samples by Raj Rao Nadakuditi and Jack Silverstein), let me wonder aloud how this type of study could be connected to the compressive sensing array signal processing featured this past week:

Roman Vershynin has some slides on the interrelationship between Random Matrix Theory and Compressive Sensing:
and has just released an article entitled How close is the sample covariance matrix to the actual covariance matrix?. The abstract reads:
Given a distribution in R^n, a classical estimator of its covariance matrix is the sample covariance matrix obtained from a sample of N independent points. What is the optimal sample size N = N(n) that guarantees estimation with a fixed accuracy in the operator norm? Suppose the distribution is supported in a centered Euclidean ball of radius $O(\sqrt{n})$. We conjecture that the optimal sample size is N = O(n) for all distributions with finite fourth moment, and we prove this up to an iterated logarithmic factor. This problem is motivated by the optimal theorem of M. Rudelson [21] which states that N = O(n log n) for distributions with finite second moment, and a recent result of R. Adamczak et al. [1] which guarantees that N = O(n) for sub-exponential distributions.

Other References:
M. Rudelson, R. Vershynin, Non-asymptotic theory of random matrices: extreme singular values,
R. Vershynin, Approximating the moments of marginals of high dimensional distributions.
R. Vershynin, On the role of sparsity in Compressed Sensing and Random Matrix Theory, Commentary.

Friday, April 23, 2010

Octopus smarts

In the past week, everybody has been amused by the video of an octopus picking up a camera and "running" with it. Here is the video:



But in terms of being in awe, you should really check out this segment from the TV show Thalassa (a show on French television focused on sea-related matters). I cannot seem to embed the video here, so here is the address:

It will be on the website for a little while before it is removed. The storytelling is in French (so you may want to turn the volume down). The actual octopus part of the show starts at 55 minutes and 5 seconds. It is just amazing.

To give you an idea,
  • At 57:15, you can see an octopus "walk" between different pools to follow humans or to get some food in the other pools.
  • At 59:05 you'll see an octopus learn how to unscrew a can cap on its own.
  • The most amazing thing to me is the different shapes it can take (at 1h19:10 they show different shapes and colors it takes on in different circumstances).
  • At 1h37 you can see David Edelman doing some experiments on the peer-learning process of an octopus.
  • and more...

Being totally fascinated by the subject, I asked David when he would publish on this; he responded with:

Thanks for your email, as well as interest in our work. The studies with Dr. Fiorito in Naples are ongoing; we hope to publish later in the year. I'll keep you apprised of our progress.


I can't wait!

CS: Compressive MUSIC, PSF recovery, Online Sparse System Identification, MMSE and MAP Denoising, other news



Jong Chul Ye just sent me the following:

We believe that our work finally addresses the open problem for joint sparsity.

Thanks Jong. It looks like MUSIC should have some bearing on some of the blind deconvolution/calibration problems faced in some compressive sensing hardware implementations.

Following Tuesday's entry, I sent the following to Chinmay Hegde:
I just read your latest paper entitled Sampling and Recovery of Pulse Streams and was wondering, when reading figure 6, what the PSF looked like, and whether you had compared it to that of the camera that took this shot from the Hubble telescope?

Also, do you intend to release a code version of Algorithm 2 at some point in time?

Chin kindly responded with:

We got the test image off the web, so I'm not quite sure of the imaging parameters of the camera that actually took that shot. In our recovery algorithm, we eyeballed the PSF size to be about 5x5, and used this as a parameter for our recovery algorithm. The "true" PSF is likely of a different size, thereby resulting in model mismatch. It's unclear as of now how to effectively deal with this.

As for the code: the key elements of the code can be found in the model-based compressive sensing toolbox:
http://dsp.rice.edu/software/model-based-compressive-sensing-toolbox
especially for performing the model-approximation step (for 1D signals) in Algorithm 2. We hope to update the toolbox with the pulse stream recovery algorithm soon.

Thanks Chin.


Yannis Kopsinis let me know that I did not catch his paper on arXiv, as my query looks for a number of words in the abstract, none of which he and his co-author used:


This paper presents a novel projection-based adaptive algorithm for sparse signal and system identification. The sequentially observed data are used to generate an equivalent sequence of closed convex sets, namely hyperslabs. Each hyperslab is the geometric equivalent of a cost criterion, that quantifies "data mismatch". Sparsity is imposed by the introduction of appropriately designed weighted $\ell_1$ balls. The algorithm develops around projections onto the sequence of the generated hyperslabs as well as the weighted $\ell_1$ balls. The resulting scheme exhibits linear dependence, with respect to the unknown system's order, on the number of multiplications/additions and an $\mathcal{O}(L\log_2L)$ dependence on sorting operations, where $L$ is the length of the system/signal to be estimated. Numerical results are also given to validate the performance of the proposed method against the LASSO algorithm and two very recently developed adaptive sparse LMS and LS-type of adaptive algorithms, which are considered to belong to the same algorithmic family.
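
In the unweighted case, the l1-ball projections at the heart of such schemes reduce to the classic Euclidean projection onto the l1 ball. Here is a minimal sketch of that building block using the standard sort-based algorithm; it is my illustration of the primitive, not the authors' weighted variant:

    import numpy as np

    def project_l1_ball(v, z=1.0):
        """Euclidean projection of v onto the l1 ball {x : ||x||_1 <= z}."""
        if np.abs(v).sum() <= z:
            return v.copy()                               # already inside the ball
        u = np.sort(np.abs(v))[::-1]                      # sorted magnitudes
        css = np.cumsum(u)
        ks = np.arange(1, v.size + 1)
        rho = np.nonzero(u - (css - z) / ks > 0)[0][-1]   # last feasible index
        theta = (css[rho] - z) / (rho + 1.0)              # soft-threshold level
        return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

    w = project_l1_ball(np.array([0.9, -0.5, 0.3, 0.1]), z=1.0)
    print(w, np.abs(w).sum())   # lands exactly on the ball: ||w||_1 == 1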

Thanks Yannis

My crawler found the following:

On MMSE and MAP Denoising Under Sparse Representation Modeling Over a Unitary Dictionary by Javier Turek, Irad Yavneh, Matan Protter, Michael Elad. The abstract reads:

Among the many ways to model signals, a recent approach that draws considerable attention is sparse representation modeling. In this model, the signal is assumed to be generated as a random linear combination of a few atoms from a pre-specified dictionary. In this work we analyze two Bayesian denoising algorithms -- the Maximum-Aposteriori Probability (MAP) and the Minimum-Mean-Squared-Error (MMSE) estimators, under the assumption that the dictionary is unitary. It is well known that both these estimators lead to a scalar shrinkage on the transformed coefficients, albeit with a different response curve. In this work we start by deriving closed-form expressions for these shrinkage curves and then analyze their performance. Upper bounds on the MAP and the MMSE estimation errors are derived. We tie these to the error obtained by a so-called oracle estimator, where the support is given, establishing a worst-case gain-factor between the MAP/MMSE estimation errors and the oracle's performance. These denoising algorithms are demonstrated on synthetic signals and on true data (images).



In other news:

We will first present a new class of model-based learning methods which include hypergraph and structured sparse learning for vision understanding. In our hypergraph framework, a hyperedge is defined by a set of vertices with similar attributes. The complex relationship between the images can be easily represented by different hyperedges according to different visual cues. We applied unsupervised and semi-supervised hypergraph learning to video object segmentation and image retrieval. Extensive experiments demonstrate its advantages over traditional graph models. Our structured sparsity framework is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. We will show applications in sparse learning, compressive sensing, computer vision and medical imaging. Time permitting, we will show applications of stochastic deformable models to facial tracking, body tracking, ASL recognition and cardiac motion analysis/simulation.
While conventional wisdom is that the interior problem does not have a unique solution, by analytic continuation we recently showed that the interior problem can be uniquely and stably solved if we have a known sub-region inside a region of interest (ROI). However, such a known sub-region is not always readily available, and it is even impossible to find in some cases. Based on compressed sensing theory, here we prove that if an object under reconstruction is essentially piecewise constant, a local ROI can be exactly and stably reconstructed via total variation minimization. Because many objects in computed tomography (CT) applications can be approximately modeled as piecewise constant, our approach is practically useful and suggests a new research direction for interior tomography. To illustrate the merits of our finding, we develop an iterative interior reconstruction algorithm that minimizes the total variation of a reconstructed image and evaluate the performance in numerical simulation.
The limited angle problem is a well-known problem in computed tomography. It is caused by missing data over a certain angle interval, which makes an inverse Radon transform impossible. In daily routine this problem can arise, for example, in tomosynthesis, C-arm CT or dental CT. In recent years there has been a big development in the field of compressed sensing algorithms in computed tomography, which deal very well with incomplete data. The most popular way is to integrate a minimal total variation norm, in the form of a cost function, into the iteration process. To find an exact solution of such a constrained minimization problem, computationally very demanding higher-order algorithms should be used. Due to the imperfect sparsity of the total variation representation, reconstructions often show the so-called staircase effect. The method proposed here uses the solutions of the iteration process as an estimate for the missing angle data. Compared to a pure compressed-sensing-based algorithm, we reached much better results within the same number of iterations and could eliminate the staircase effect. The algorithm is evaluated using measured clinical datasets.
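
Since the last two abstracts both lean on total-variation minimization, here, for reference, is a minimal sketch of the discrete anisotropic TV seminorm they penalize; the papers' actual reconstruction algorithms are of course far more involved:

    import numpy as np

    def tv_aniso(img):
        """Anisotropic total variation: sum of absolute forward differences."""
        return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

    # A piecewise-constant phantom has TV proportional to its boundary length,
    # which is why TV penalties favor such images.
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0
    print(tv_aniso(img))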


Credit: NASA. This compilation of video shows some of the first imagery and data sent back from NASA's Solar Dynamics Observatory (SDO). Most of the imagery comes from SDO's AIA instrument, and different colors are used to represent different temperatures, a common technique for observing solar features. SDO sees the entire disk of the Sun in extremely high spatial and temporal resolution and this allows scientists to zoom in on notable events like flares, waves, and sunspots.

Tuesday, April 20, 2010

CS: A symposium, Kronecker products and a stupid question, Reed-Muller Sieve, iMUSIC, Pulse Streams, A No-free-lunch Theorem, PCA




Jeff Blanchard just sent me the following:
Dear Igor,

I wonder if you could advertise this symposium on sparse approximation and compressed sensing I am organizing at ICNAAM. Here's the link to the page for the symposium


and the link to the conference


If you don't advertise symposiums or sessions, I certainly understand, but you are absolutely the most effective method of advertising anything on compressed sensing.

From the website:

19–25 September 2010 Rhodes, Greece

The ICNAAM symposium on sparse approximation and compressed sensing will bring researchers from all stages of their career together to discuss the theory and applications of exploiting sparsity in signal processing, information theory, and statistics. Topics will include, but are not strictly limited to,

  • Sparse coding, vector quantization and dictionary learning
  • Sparse approximation algorithms
  • Compressed sensing
  • Analog/Continuous Compressed Sensing
  • Sparse/structured signal representations
  • Compression and coding
  • Feature extraction, classification, detection
  • Sparsity measures in approximation theory, information theory and statistics
  • Applications

Speakers (as of 10 April 2010)

  • Jeff Blanchard, Grinnell College and University of Edinburgh
  • Charles Dossal, Université Bordeaux 1
  • Miki Elad, Technion
  • Yonina Eldar, Technion
  • Anders Hansen, Cambridge University
  • Blake Hunter, University of California, Davis
  • Deanna Needell, Stanford University
  • Ozgur Yilmaz, University of British Columbia

Call for abstracts

To submit an abstract, email an extended abstract of 2–3 pages to Jeff Blanchard (jeff@math.grinnell.edu). Abstracts will be reviewed on an ongoing basis. Abstracts submitted after 22 July 2010 will not be considered. You are encouraged to email your intention to submit an abstract as soon as possible. Preparation guidelines and templates are available.

Registration

When your abstract is accepted, please register for the conference. There are three registration deadlines with increasing registration fees:
Early (30 April), Regular (15 June), Late (29 July).
Please note that, according to the conference registration page, if you register and your abstract is not accepted for publication in the proceedings, the registration fee will be refunded.



Thanks Jeff.

Ever since Yin Zhang's mention of Kronecker products, and their appearance in hardware concepts discussed here on this blog, I have been looking forward to more coverage of this topic. Today is the day, with: Kronecker Product Matrices for Compressive Sensing by Marco Duarte and Richard Baraniuk. The abstract reads:
Compressive sensing (CS) is an emerging approach for acquisition of signals having a sparse or compressible representation in some basis. While CS literature has mostly focused on problems involving 1-D and 2-D signals, many important applications involve signals that are multidimensional. We propose the use of Kronecker product matrices in CS for two purposes. First, we can use such matrices as sparsifying bases that jointly model the different types of structure present in the signal. Second, the measurement matrices used in distributed measurement settings can be easily expressed as Kronecker products. This new formulation enables the derivation of analytical bounds for sparse approximation and CS recovery of multidimensional signals.

Here are the attendant slides. But as some of you know, I am a little dim, so I did not hesitate to ask Marco the stupid question:
How do you implement it ?
Marco was kind enough to answer:

Thanks for your email. It's a good question. The nice thing about the Kronecker product measurements is that they can be implemented "separately", as follows:
  • first, apply one measurement matrix Phi1 on all sections of the multidimensional data x along one dimension,
  • then, stack the measurements obtained for each section along the first dimension to obtain a multidimensional measurement y1,
  • then, apply the next measurement matrix Phi2 on the sections along the next dimension on the previously obtained measurements y1,
  • then, stack the measurements obtained for each section along the second dimension to obtain a new multidimensional measurement y2,
  • repeat until done with dimensions and vectorize the resulting y_D to obtain final measurements y.
This is equivalent to applying the Kronecker product of the measurement matrices on an appropriately vectorized version of the data.
For example: the hyperspectral single pixel camera applies the same projection matrix Phi1 on all the spectral bands of the hyperspectral datacube being acquired. Each band then provides a vector of measurements, which we stack into a matrix y1 (say, as rows). We then apply the second projection matrix Phi2 on each one of the columns of the matrix-stacked measurements y1, which effectively *mixes* the measurements coming from the different bands, into a new matrix of measurements y2, and then vectorize y2 to get y.
Hopefully this makes things clear? The idea is the same for the sparsity bases (apply transformations one dimension at a time, obtaining the coefficients in the Kronecker basis at the end).
Let me know if I can help further.
Thanks Marco.
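
Here is a minimal NumPy sketch of what Marco describes, in the 2-D case: applying the two matrices separately along each dimension produces exactly the same measurements as one large Kronecker-product matrix acting on the vectorized data. The dimensions and names below are mine:

    import numpy as np

    rng = np.random.default_rng(0)
    n1, n2, m1, m2 = 16, 12, 8, 6

    Phi1 = rng.standard_normal((m1, n1))   # measurement matrix for dimension 1
    Phi2 = rng.standard_normal((m2, n2))   # measurement matrix for dimension 2
    X = rng.standard_normal((n1, n2))      # 2-D "data cube"

    # Separable application: Phi1 along one dimension, then Phi2 along the other.
    Y = Phi1 @ X @ Phi2.T

    # Global view: with column-major vectorization,
    #     vec(Phi1 X Phi2^T) = (Phi2 kron Phi1) vec(X).
    y_kron = np.kron(Phi2, Phi1) @ X.flatten(order="F")
    print(np.allclose(Y.flatten(order="F"), y_kron))   # True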

The arXiv search produced three new papers and then some today: Sparse Reconstruction via the Reed-Muller Sieve by Robert Calderbank, Stephen Howard, Sina Jafarpour. The abstract reads:
This paper introduces the Reed Muller Sieve, a deterministic measurement matrix for compressed sensing. The columns of this matrix are obtained by exponentiating codewords in the quaternary second order Reed Muller code of length $N$. For $k=O(N)$, the Reed Muller Sieve improves upon prior methods for identifying the support of a $k$-sparse vector by removing the requirement that the signal entries be independent. The Sieve also enables local detection; an algorithm is presented with complexity $N^2 \log N$ that detects the presence or absence of a signal at any given position in the data domain without explicitly reconstructing the entire signal. Reconstruction is shown to be resilient to noise in both the measurement and data domains; the $\ell_2 / \ell_2$ error bounds derived in this paper are tighter than the $\ell_2 / \ell_1$ bounds arising from random ensembles and the $\ell_1 /\ell_1$ bounds arising from expander-based ensembles.

We propose a robust and efficient algorithm for the recovery of the jointly sparse support in compressed sensing with multiple measurement vectors (the MMV problem). When the unknown matrix of the jointly sparse signals has full rank, MUSIC is a guaranteed algorithm for this problem, achieving the fundamental algebraic bound on the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank deficiency or bad conditioning. This situation arises with limited number of measurements, or with highly correlated signal components. In this case MUSIC fails, and in practice none of the existing MMV methods can consistently approach the algebraic bounds. We propose iMUSIC, which overcomes these limitations by combining the advantages of both existing methods and MUSIC. It is a computationally efficient algorithm with a performance guarantee.


Compressive Sensing (CS) is a new technique for the efficient acquisition of signals, images, and other data that have a sparse representation in some basis, frame, or dictionary. By sparse we mean that the N-dimensional basis representation has just K << N significant coefficients; in this case, the CS theory maintains that just M = K log N random linear signal measurements will both preserve all of the signal information and enable robust signal reconstruction in polynomial time. In this paper, we extend the CS theory to pulse stream data, which correspond to S-sparse signals/images that are convolved with an unknown F-sparse pulse shape. Ignoring their convolutional structure, a pulse stream signal is K=SF sparse. Such signals figure prominently in a number of applications, from neuroscience to astronomy. Our specific contributions are threefold. First, we propose a pulse stream signal model and show that it is equivalent to an infinite union of subspaces. Second, we derive a lower bound on the number of measurements M required to preserve the essential information present in pulse streams. The bound is linear in the total number of degrees of freedom S + F, which is significantly smaller than the naive bound based on the total signal sparsity K=SF. Third, we develop an efficient signal recovery algorithm that infers both the shape of the impulse response as well as the locations and amplitudes of the pulses. The algorithm alternatively estimates the pulse locations and the pulse shape in a manner reminiscent of classical deconvolution algorithms. Numerical experiments on synthetic and real data demonstrate the advantages of our approach over standard CS.
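
To fix ideas, here is a minimal sketch of the pulse stream signal model itself (my toy illustration, not the authors' recovery algorithm): S spikes convolved with an F-sparse pulse, so the naive sparsity is roughly K = SF while the degrees of freedom are only S + F:

    import numpy as np

    rng = np.random.default_rng(0)
    n, S, F = 256, 5, 7

    spikes = np.zeros(n)
    spikes[rng.choice(n, S, replace=False)] = rng.standard_normal(S)
    pulse = rng.standard_normal(F)                    # unknown F-sparse pulse shape
    signal = np.convolve(spikes, pulse, mode="same")  # observed pulse stream

    # Naive sparsity ~ S*F = 35 nonzeros, but only S + F = 12 degrees of freedom.
    print(np.count_nonzero(signal))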

Showing up on my radar screen also:

We consider two widely used notions in machine learning, namely: sparsity and algorithmic stability. Both notions are deemed desirable in designing algorithms, and are believed to lead to good generalization ability. In this paper, we show that these two notions contradict each other. That is, a sparse algorithm can not be stable and vice versa. Thus, one has to tradeoff sparsity and stability in designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, while ℓ2-regularized regression is known to have strong stability properties.
and of related interest from the same authors

We consider the dimensionality-reduction problem (finding a subspace approximation of observed data) for contaminated data in the high dimensional regime, where the number of observations is of the same magnitude as the number of variables of each observation, and the data set contains some (arbitrarily) corrupted observations. We propose a High-dimensional Robust Principal Component Analysis (HR-PCA) algorithm that is tractable, robust to contaminated points, and easily kernelizable. The resulting subspace has a bounded deviation from the desired one, achieves maximal robustness, and unlike ordinary PCA algorithms, achieves optimality in the limit case where the proportion of corrupted points goes to zero.

Monday, April 19, 2010

Ash Cloud-General Aviation Science Based Decision Making ?

As some of you have seen, I have taken a particular interest in the current ash cloud over Europe, mostly because to an outsider like myself it really looks like the decision to shut down the whole European airspace is not based on real science. Here are some of the issues that bother me:
  • Several airlines have flown some of their planes with no problems (i.e., it seems they have checked the insides of the combustion chambers of their planes and seen no issues). These airline planes do not have the capability to really sample the part of the sky they flew through.
  • I haven't heard any of the airplane and engine manufacturers communicating on these issues.
  • The VAAC is producing maps based on models. However complete these models are, very little seems to be done with regard to comparing the results of these models with actual in-situ observations.
  • I have not seen any convincing evidence from any of the satellite photos (that includes ENVISAT) that we really know where the cloud is, altitude- and concentration-wise. Let me be more specific. The European Space Agency has a system in place where the data from satellites are being sent to the VAACs through an SO2 alert list:

    ... In addition to VAACs, the information – derived from the SCIAMACHY instrument on ESA’s Envisat, GOME-2 and IASI on MetOp, OMI on EOS-Aura and AIRS on Aqua – is delivered to volcanological observatories, health care organisations, scientists, etc.


    Cloud-free snow-covered Iceland 22 February

    To know whether aircraft may safely pass under or over volcanic ash clouds and to forecast better the future motion of the clouds, the VAACs need more accurate information on the altitude and vertical size of an ash plume.

    This is the main focus of ESA’s Support to Aviation for Volcanic Ash Avoidance (SAVAA) project which aims to set up a demonstration system able to ingest satellite data and meteorological wind fields, in order to compute the injection height profile of volcanic emissions, using trajectory and inverse modelling. The system can then be implemented into the operational environment of the VAACs.

    Furthermore, the SAVAA project is providing complementary data to the SACS SO2 alerts by developing volcanic ash alert services for VAACs based on satellite data measured in the infrared part of the spectrum. ...

While the SO2 concentration is interesting, one notes the different levels of coverage between instruments and wonders how the different maps are being reconciled. One also wonders how these maps are being integrated into the graphics of the VAACs, because I don't see a reasonable connection between those and these:
And so I am curious as to how the current process that closes the airspace of an entire continent is really science-based. I am looking for any information showing that field data are in fact reconciled with satellite data.

Meet Hekla ... or not.

The locals seem to be saying this one is worse and that it has just started, live; some people say this webcam is pointing at the original volcano. Only time will tell.


The maps of current flights in Europe can be seen here.

Twitter tag: #hekla