
Sunday, May 30, 2010

Compressed Sensing or Inpainting? Part I





Since the Wired article came out, I and others have been of two minds about what compressive sensing is and how it can be portrayed to a wider audience. First, to clear up the subject, I asked Jarvis Haupt the following question:
Dear Jarvis,

I just realized you were the person behind the Obama picture for the Wired article. Could you be kind enough and tell me what operation you did on this example? Right now, I can think of two:
  • Either you took a Fourier transform of the Obama picture and then undersampled in Fourier space to get different levels of approximation of the full image,
  • or you stayed in pixel space and essentially removed pixels from the original image. You then used an l_1 reconstruction with a sensing matrix that is really a mask with zeros for the removed pixels.
The former case does not seem to fit well because the lower-resolution pictures clearly show missing pixels in real space. The latter case would look like a more traditional inpainting process....
To what Jarvis kindly responded with:
Hi Igor,

Thanks for getting in touch. We did indeed use the second method you described for the example -- we randomly deleted pixels in the original image to obtain the "compressed" representation, then reconstructed assuming sparsity in a wavelet basis. The intermediate images were obtained essentially by using large(r) tolerance parameters for the solver. If you're interested in playing around with the code, I have posted it under the "News" heading on my website (www.ece.rice.edu/~jdh6).

It is worth noting, as you point out, that what we did for this example fundamentally amounts to a kind of inpainting process. In that sense, there are a variety of existing techniques that could be employed for this kind of recovery task. Our goal here was only to provide an illustrative visual example of CS for the intended (diverse) audience.

Best regards,
Jarvis

The code used is here. It makes use of the GPSR code and the wavelet library at Rice. Thanks Jarvis for making the code available. Similarly, Laura Balzano has made available the code for the compressive sensing audio examples featured in the NPR piece.
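
For readers who want the gist of that experiment without wading through GPSR, here is a minimal sketch of the same pipeline. Caveat: this is my own toy version, using a 2-D DCT as the sparsity basis and a plain iterative soft-thresholding (ISTA) loop, not the GPSR/Rice-wavelet combination Jarvis actually used:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
n = 64
# Piecewise-constant toy "image", compressible in the DCT domain.
x_true = np.zeros((n, n))
x_true[16:48, 16:48] = 1.0
x_true[24:40, 24:40] = 2.0

mask = rng.random((n, n)) < 0.4      # randomly keep 40% of the pixels
y = mask * x_true                    # the "compressed" representation

lam, x = 0.05, np.zeros((n, n))
for _ in range(300):                 # ISTA: gradient step, then shrink
    z = x + mask * (y - mask * x)    # step size 1 (the mask has unit norm)
    c = dctn(z, norm='ortho')
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
    x = idctn(c, norm='ortho')

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Loosening lam (or the stopping tolerance) plays a role similar to the large(r) tolerance parameters mentioned in the email: loose settings give blocky intermediate images, tight ones the final reconstruction.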

Friday, May 28, 2010

CS: Presentations of the Workshop on Probability and Geometry in High Dimensions

Djalil Chafaï, Olivier Guédon, Guillaume Lecué and Alain Pajor just made available the presentations given at the Workshop on Probability and Geometry in High Dimensions that took place a week or so ago in Marne-la-Vallee, near Paris. From the main site, here is the list of speakers and their presentations:



Congratulations to the organizers for following through and posting the presentations on the site. I note that the presentation by Jared Tanner (Random matrix theory and stochastic geometry in compressed sensing) is easier to understand than it used to be. Maybe I am getting accustomed to the subject. Another item that jumped out at me was the bound on slide 61 of Holger Rauhut's presentation (Compressive Sensing and Structured Random Matrices): 57. I don't think I remember seeing that number before, wow. Certainly, if 57 is the smallest bound, then this Legendre expansion thing is not going to take off in neutron transport problems. It might do a good job in atmospheric transport problems instead.

CS: A Unified Approach for Minimizing Composite Norms and more.

We started the week with a preprint that expands the study of the Donoho-Tanner phase transition to the noisy case for l_1 recovery. Today, we close the week with another work expanding the recent work on Stable Principal Component Pursuit by Zihan Zhou, Xiaodong Li, John Wright, Emmanuel Candes, and Yi Ma. I found it somewhat by accident, as my arXiv search did not catch it. It is: A Unified Approach for Minimizing Composite Norms by Necdet Serhat Aybat and Garud Iyengar. The abstract reads:
We propose a first-order augmented Lagrangian algorithm (FALC) that solves $\min\{\mu_1\|X\|_* + \mu_2\|\mathcal{C}(X)-d\|_1 : \mathcal{A}(X)=b\}$, where $\mathcal{C}(\cdot)$ and $\mathcal{A}(\cdot)$ denote linear operators from $\mathbb{R}^{m\times n}$ to a vector space. FALC solves this semidefinite problem by inexactly solving a sequence of problems of the form $\min\{\lambda^{(k)}(\mu_1\|X\|_* + \mu_2\|s\|_1) + \|\mathcal{A}(X)-b-\lambda^{(k)}\theta_1^{(k)}\|_2^2 + \|\mathcal{C}(X)+s-d-\lambda^{(k)}\theta_2^{(k)}\|_2^2\}$, for an appropriately chosen sequence of multipliers $\{\lambda^{(k)},\theta_1^{(k)},\theta_2^{(k)}\}$. Each of these subproblems is solved using Algorithm 3 in [31] by Paul Tseng, wherein each update can be computed using a singular value decomposition (SVD). We show that FALC converges to the optimal solution $X^*$ of the composite norm minimization problem if the optimal solution is unique. We also show that there exists an a priori fixed sequence $\{\lambda^{(k)}\}$ such that for all $\epsilon>0$, the iterates $X^{(k)}$ computed by FALC are $\epsilon$-feasible and $\epsilon$-optimal after $O(1/\epsilon)$ iterations, where the complexity of each iteration is $O(\min\{nm^2, mn^2\})$. We also show that FALC can be extended very simply to solve more general problems of the form $\min\{\mu_1\|X\|_{\alpha} + \mu_2\|\mathcal{C}(X)-d\|_{\beta} : \mathcal{A}(X)=b,\ Q-\mathcal{F}(X) \text{ psd},\ \|\mathcal{G}(X)-h\|_{\gamma} \le \rho\}$, where the matrix norm $\|X\|_{\alpha}$ denotes either the Frobenius, the nuclear, or the $L_2$-operator norm, the vector norms $\|\mathcal{C}(X)-d\|_{\beta}$ and $\|\mathcal{G}(X)-h\|_{\gamma}$ denote either the $L_2$-norm, the $L_1$-norm or the $L_\infty$-norm, $\mathcal{A}(X)$, $\mathcal{C}(X)$, $\mathcal{G}(X)$ and $\mathcal{F}(X)$ are linear operators from $\mathbb{R}^{m\times n}$ to vector spaces of appropriate dimensions, and psd means positive semidefinite. All the convergence properties of FALC continue to hold for this more general problem.
Garud Iyengar let me know that FALC should be available at some point in the future.

In this blog entry entitled Sony Shrinks CCD Pixel to 1.43um, one can see how Sony is making its pixels smaller: by adding a lens!
In another entry entitled Aptina Explains its BSI-FSI Strategy, Announces New Sensors one can see how the requirements for video and stills are eventually diverging.

While this information is not strictly related to CS, I believe it provides good rules of thumb for trying to design "exotic" systems.

In another direction, Andy Stove of Thales will give a talk entitled Compressive sampling of radar and electronic warfare data on Friday, June 4th at 10:00 (Industrial and Interdisciplinary Workshops, DH 1st floor SR) at the University of Oxford. The abstract of the talk reads:
'Compressive sampling' is a topic of current interest. It relies on data being sparse in some domain, which allows what is apparently 'sub-Nyquist' sampling, so that the quantities of data which must be handled become more closely related to the information rate. This principle would appear to have (at least) three applications for radar and electronic warfare:
The most modest application is to reduce the amount of data which we must handle: radar and electronic warfare receivers generate vast amounts of data (up to 1 Gbit/s or even 10 Gbit/s). It is desirable to be able to store this data for future analysis, and it is also becoming increasingly important to be able to share it between different sensors, which, prima facie, requires vast communication bandwidths; it would be valuable to find ways to handle this more efficiently.
The second advantage is that if suitable data domains can be identified, it may also be possible to pre-process the data before the analogue to digital converters in the receivers, to reduce the demands on these critical components.
The most ambitious use of compressive sensing would be to find ways of modifying the radar waveforms, and the electronic warfare receiver sampling strategies, to change the domain in which the information is represented to reduce the data rates at the receiver 'front ends', i.e. make the data at the front end better match the information we really want to acquire.
The aim of the presentation will be to describe the issues with which we are faced, and to discuss how compressive sampling might be able to help. A particular issue which will be raised is how we might find domains in which the data is sparse.

Thursday, May 27, 2010

"You asked me if I have a God complex ?"

As I was reading the following comment, extracted from the NPR blog entry on Compressive Sensing, I could not help but think how some highly paid technicians can make themselves sound so pompous just because what they are doing is important, not because they have a deeper understanding of how things work. That is probably the reason TV series like House are having so much success: we collectively crave the intelligent bastard who gets the right diagnosis... at least once, even if it is in the alternate reality of TV. The comment reads:
I am a Pediatric ICU MD. Let's say that I am caring for your child, Ms. Norris, and I feel she is improved but would like an MRI to be certain all is well. I offer you 2 choices for this test for your little darling. 1. A shorter MRI in which much of the image is a "guess" (the word used in the story) or 2. A longer test in which the image is actually that of your child. Which would you choose?

I think this is why it is important to define "guess" and why we collectively need to draw a more obvious distinction between compressive sensing reconstructions and inpainting. In the first type of reconstruction, acquisition is performed in a manner where incoherence is key (in MRI, sampling is performed in Fourier space while the imaged natural scenes are sparse in wavelets/curvelets...; in audio, sampling is performed in the time domain while the signal is sparse in the Fourier domain), whereas in the second, one is indeed guessing elements that are missing from a coherent acquisition. In the second case, the commenter's statement is indeed worthy of concern and has been the subject of a previous entry. In the case of MRI, however, the former applies. Going back to the comment, I could not help but remember Alec Baldwin's jaw-breaking monologue featured below. I am using the words "jaw-breaking" because when you watch the movie for the first time, the tirade is totally unexpected, a little bit like when you have lived your whole life thinking Nyquist was the only game in town and compressive sensing barges in.
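
For the numerically inclined, the incoherence claim is easy to check. Below is a small computation (mine, not from the NPR material) of the mutual coherence between a sampling basis and a sparsity basis, which ranges from 1 (maximally incoherent, the good case) to sqrt(n) (maximally coherent):

```python
import numpy as np

n = 256
spikes = np.eye(n)                            # Dirac/pixel sampling basis
fourier = np.fft.fft(np.eye(n)) / np.sqrt(n)  # orthonormal DFT basis

def coherence(phi, psi):
    # mu(Phi, Psi) = sqrt(n) * max |<phi_k, psi_j>|, in [1, sqrt(n)]
    return np.sqrt(n) * np.max(np.abs(phi.conj().T @ psi))

print("spikes vs Fourier :", coherence(spikes, fourier))   # 1.0, ideal
print("Fourier vs Fourier:", coherence(fourier, fourier))  # sqrt(n), worst case
```

The MRI and steady-state audio settings sit near the favorable end of this scale, which is precisely why the "guess" there is of a very different nature than the guess made by inpainting.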




Similarly, would CT specialists have the same reaction if one were to mention to them that the filtered back-projection algorithm is not optimal (see Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?)? I am thinking yes, but that's just me.

Around the blogs in 80 hours

Yesterday, I mentioned the NPR piece and its supporting website showing some examples of compressive sensing in audio. Well, Bob Sturm read it and wrote about it in "Compressed Sensing" and Undersampled Audio Reconstruction. Go read it, I'll wait.... It should always be remembered that the concept of incoherent sampling is a central one in compressed sensing (beyond RIP and so forth). This is why I wrote the example in Compressed Sensing: How to wow your friends back in 2007, which features deltas and sines. Let us note that in compressive sensing audio, for the delta/sines assumption to hold, you really need to sample enough within time-localized phenomena. In other words, compressed sensing is not a license to make something appear that you did not catch with the sampling process. This needs to be said often: in MRI or steady-state audio, the signal is sampled with Diracs in the appropriate phase space (Fourier for MRI, time for audio), which gets you a result directly amenable to a compressive sensing approach. In other fields like imagery, however, you do not sample directly in a good phase space and you need new types of exotic hardware to perform these new types of incoherent measurements.
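
To make the point concrete, here is a toy version of that delta/sines experiment (my construction, not Bob's or Laura's): a signal made of a few cosines is recovered from random time samples by orthogonal matching pursuit, and it works only because the random samples do land inside the support of the tones.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(1)
n, k, m = 512, 5, 80
Psi = idct(np.eye(n), norm='ortho', axis=0)  # columns = orthonormal cosine atoms
c_true = np.zeros(n)
c_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = Psi @ c_true                             # k-tone signal

rows = rng.choice(n, m, replace=False)       # m random time samples
A, y = Psi[rows, :], x[rows]

support, r = [], y.copy()                    # orthogonal matching pursuit
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    c_hat, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ c_hat

print("support recovered:", sorted(support) == sorted(np.flatnonzero(c_true)))
```

Shrink m, or concentrate the signal in a short burst that falls between samples, and the recovery fails: no sampling, no resurrection.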

Other entries of interest and relevant to CS included:

Of interest is the following page: Noise, Dynamic Range and Bit Depth in Digital SLRs


Credit photo: Thierry Legault, via the Bad Astronomy blog. The ISS and Atlantis in transit with the Sun.

Wednesday, May 26, 2010

Compressed Sensing on NPR's All Tech Considered and more.

Jordan Ellenberg mentioned his interview on NPR's All Tech Considered. For those of you not in the US, NPR stands for National Public Radio, a very serious broadcaster that provides thoughtful shows. The Big Picture is featured on the blog of All Tech Considered senior producer Art Silverman. Woohoo.



Laura Balzano's site illustrating different audio examples can be found here. The page has been added to the big picture in compressive sensing. I had a similar experience recently when I tried to explain compressive sensing quickly to a friend: the group testing story was a very nice and efficient way of approaching the subject. My friend got it almost instantaneously.

On a different note, the LinkedIn group on Compressive Sensing has 399 members; who is going to be the 500th?

Here is a recent discussion that occurred in the group. For info, the latest news of that group features the RSS feed of this blog.

Amit Ashok let me know that a non-paywall version of Compressive light field imaging is available from his site. Thanks Amit.

This one just appeared on arxiv:



Bayesian Cramér-Rao Bound for Noisy Non-Blind and Blind Compressed Sensing by Hadi Zayyani, Massoud Babaie-Zadeh, Christian Jutten. The abstract reads:
In this paper, we address the theoretical limitations in reconstructing sparse signals (in a known complete basis) using the compressed sensing framework. We also divide CS into non-blind and blind cases. Then, we compute the Bayesian Cramer-Rao bound for estimating the sparse coefficients while the measurement matrix elements are independent zero-mean random variables. Simulation results show a large gap between the lower bound and the performance of the practical algorithms when the number of measurements is low.
My question is: why was this article rejected?

Tuesday, May 25, 2010

CS: The readers' corner, GPU/Multicore for SPIRiT, Lower Bounds in Sparse Recovery, Spectral microscopy and compressive millimeter-wave holography.

The bad news is that Phoenix won't revive itself after the Martian winter; it looks like the solar panels did not make it. The good news is that we have many readers who wanted to share their contributions with the rest of the community. Here we go:

Miki Lustig, now at Berkeley sent me the following:
Hi Igor,

FYI, my student Mark Murphy has just released a GPGPU + multicore implementation of our compressed sensing parallel MRI code.

It is linked http://www.eecs.berkeley.edu/~mlustig/Software.html

-- Miki

For more information on SPIRiT, refer to this paper: M. Lustig and J. Pauly, "SPIRiT: Iterative Self-Consistent Parallel Imaging Reconstruction from Arbitrary k-Space Sampling", Magnetic Resonance in Medicine, 2010, in press. The L1SPIRiT page is here. From the introduction of the page:

Michael Lustig's l1-SPIRiT is a robust, iterative algorithm for artifact-free reconstruction of MR images acquired with Parallel Imaging and Compressed Sensing acceleration. In the MRI community, there is some reluctance to adopt iterative approaches to image reconstruction due to the computationally intense nature of the algorithms. This software package demonstrates that the recent "manycore" processors, in particular General-Purpose GPUs and multi-core CPUs, provide more than enough computational throughput to make iterative reconstruction algorithms feasible.


Thanks Miki. I'll add that to the compressive sensing hardware page (in the category where GPUs are used for reconstruction).


Mergen Nachin, a fourth-year undergraduate student at MIT double majoring in Computer Science and Mathematics, just released his Undergraduate Advanced Project (an undergraduate "thesis") supervised by Piotr Indyk, entitled Lower Bounds on the Column Sparsity of Sparse Recovery Matrices. Beyond this interesting news, I believe this is the first time we have somebody from Ulan Bator as a contributor to the field of compressive sensing. This is awesome. Here is the thesis: Lower Bounds on the Column Sparsity of Sparse Recovery Matrices by Mergen Nachin. The abstract reads:

It was recently shown that the following algorithms [BGI+08], [BI09] can approximately recover an n-dimensional signal x from its sketch Ax, where the sketch length is O(k log(n/k)) and the column sparsity of A is O(log(n/k)). Our main goal in this report is to show that this column sparsity bound is tight when A is an m x n matrix with m = O(k log(n/k)).
Thanks Mergen for the heads-up and congratulations on your degree.
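
To make the object of study concrete: the matrices in question are sparse binary matrices in the spirit of expander adjacency matrices, with O(log(n/k)) ones per column. Here is a toy construction (the parameters and the plain random placement below are my illustration, not Mergen's):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 1024, 16
d = int(np.ceil(np.log2(n / k)))   # column sparsity O(log(n/k))
m = 4 * k * d                      # sketch length O(k log(n/k))

A = np.zeros((m, n), dtype=np.int8)
for j in range(n):                 # d ones at random rows in each column
    A[rng.choice(m, d, replace=False), j] = 1

print(f"sketch length m = {m}, ones per column d = {d}")
```

The thesis result says you cannot get away with asymptotically fewer ones per column while keeping the sketch length at O(k log(n/k)).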


Christy Fernandez-Cull at Duke sent me the following:

Hey Igor,

... I noticed on your blog that you mention work from a CS workshop at Duke related to "Image segmentation with a compressive snapshot spectral imager." Just wanted to let you know that this work was also conducted in the Duke Imaging and Spectroscopy Program and that other people were also indicated on the poster (Kerkil Choi and David Brady). Also, in case you're interested a paper related to that work was recently released - "Identification of fluorescent beads using a coded aperture snapshot spectral imager"
http://www.opticsinfobase.org/abstract.cfm?uri=ao-49-10-B59
On a tangential note, a millimeter-wave application to compressive holography has also been recently published - "Millimeter-wave compressive holography"
http://www.opticsinfobase.org/abstract.cfm?uri=ao-49-19-E67

I enjoy your blog.

Have a great week.
The paper being referred to is: Identification of fluorescent beads using a coded aperture snapshot spectral imager by Christy Fernandez-Cull, Kerkil Choi, David J. Brady, and Tim Oliver. The abstract reads:

We apply a coded aperture snapshot spectral imager (CASSI) to fluorescence microscopy. CASSI records a two-dimensional (2D) spectrally filtered projection of a three-dimensional (3D) spectral data cube. We minimize a convex quadratic function with total variation (TV) constraints for data cube estimation from the 2D snapshot. We adapt the TV minimization algorithm for direct fluorescent bead identification from CASSI measurements by combining a priori knowledge of the spectra associated with each bead type. Our proposed method creates a 2D bead identity image. Simulated fluorescence CASSI measurements are used to evaluate the behavior of the algorithm. We also record real CASSI measurements of a ten bead type fluorescence scene and create a 2D bead identity map. A baseline image from a filtered-array imaging system verifies CASSI's 2D bead identity map.

The other one is behind a paywall: Millimeter-wave compressive holography by Christy Fernandez-Cull, David Wikner, Joseph Mait, Michael Mattheiss, and David Brady. The abstract reads:
We describe an active millimeter-wave holographic imaging system that uses compressive measurements for three-dimensional (3D) tomographic object estimation. Our system records a two-dimensional (2D) digitized Gabor hologram by translating a single pixel incoherent receiver. Two approaches for compressive measurement are undertaken: nonlinear inversion of a 2D Gabor hologram for 3D object estimation and nonlinear inversion of a randomly subsampled Gabor hologram for 3D object estimation. The object estimation algorithm minimizes a convex quadratic problem using total variation (TV) regularization for 3D object estimation. We compare object reconstructions using linear backpropagation and TV minimization, and we present simulated and experimental reconstructions from both compressive measurement strategies. In contrast with backpropagation, which estimates the 3D electromagnetic field, TV minimization estimates the 3D object that produces the field. Despite undersampling, range resolution is consistent with the extent of the 3D object band volume.

I also found: Sparse Fourier Sampling in Millimeter-Wave Compressive Holography by Christy Fernandez, David Brady, Joseph N. Mait, and David A. Wikner. The abstract reads:

We analyze the impact of sparse sampling on millimeter-wave (MMW) two-dimensional (2-D) holographic measurements for three-dimensional (3-D) object reconstruction. Simulations address 3-D object estimation efficacy. We present 3-D object reconstructions from experimental data.
While looking around, I also found, via Google, Christy's recently released PhD thesis:


Computational spectral microscopy and compressive millimeter-wave holography. The abstract reads:
This dissertation describes three computational sensors. The first sensor is a scanning multi-spectral aperture-coded microscope containing a coded aperture spectrometer that is vertically scanned through a microscope intermediate image plane. The spectrometer aperture-code spatially encodes the object spectral data and nonnegative least squares inversion combined with a series of reconfigured two-dimensional (2D spatial-spectral) scanned measurements enables three-dimensional (3D) (x, y, λ) object estimation. The second sensor is a coded aperture snapshot spectral imager that employs a compressive optical architecture to record a spectrally filtered projection of a 3D object data cube onto a 2D detector array. Two nonlinear and adapted TV-minimization schemes are presented for 3D (x,y,λ) object estimation from a 2D compressed snapshot. Both sensors are interfaced to laboratory-grade microscopes and applied to fluorescence microscopy. The third sensor is a millimeter-wave holographic imaging system that is used to study the impact of 2D compressive measurement on 3D (x,y,z) data estimation. Holography is a natural compressive encoder since a 3D parabolic slice of the object band volume is recorded onto a 2D planar surface. An adapted nonlinear TV-minimization algorithm is used for 3D tomographic estimation from a 2D and a sparse 2D hologram composite. This strategy aims to reduce scan time costs associated with millimeter-wave image acquisition using a single pixel receiver.

Thanks Christy, and congratulations on your Ph.D.

Finally, a reader of the blog sent me the following:

Hello Igor,
Thank you for sharing your codes with the world.
I wish to build a measurement matrix using the scrambled block Hadamard ensemble. I am using the code sbhe.m. It always shows:
Error using ==> cat
CAT arguments dimensions are not consistent
Error in ==> sbhe at 37
v2 = cat(2,vv,round(1+rand(mt,1)*(ntotal-1)));
I don't know what's wrong.
Attached is the code I downloaded from your homepage.
Can you help me to solve this problem? Thank you.


I don't know what is wrong either; the last time I used it, it worked, so I really need the parameters that produced this error, otherwise I would be spending too much time on something that is not really what I want to do today. For what it's worth, cat(2,...) concatenates horizontally and requires its arguments to have the same number of rows, so the error suggests that vv and the appended random column end up with inconsistent heights for the parameters used. In any case, if somebody has a better implementation or can find the "bug", please come forward and I'll advertise it on the blog.
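
For readers who just need a scrambled block Hadamard ensemble and cannot wait for the bug hunt, here is an independent minimal sketch (my reading of the Gan, Do and Tran construction: a block-diagonal Hadamard operator, scrambled columns, then a random selection of rows):

```python
import numpy as np
from scipy.linalg import hadamard

def sbhe(m, n, block=32, seed=0):
    """m x n scrambled block Hadamard measurement matrix (block a power of 2)."""
    assert n % block == 0
    rng = np.random.default_rng(seed)
    W = np.kron(np.eye(n // block), hadamard(block))  # block-diagonal Hadamard
    W = W[:, rng.permutation(n)]                      # scramble the columns
    rows = rng.choice(n, m, replace=False)            # keep m random rows
    return W[rows, :] / np.sqrt(block)

Phi = sbhe(128, 1024)
print(Phi.shape)
```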

Image credit: NASA/JPL-Caltech/University of Arizona. Two images of the Phoenix Mars lander taken from Martian orbit in 2008 and 2010. The 2008 lander image shows two relatively blue spots on either side corresponding to the spacecraft's clean circular solar panels. In the 2010 image scientists see a dark shadow that could be the lander body and eastern solar panel, but no shadow from the western solar panel.

Sunday, May 23, 2010

CS: The long post of the week: CS over the Grassmann Manifold, RMT,Multi-view Imaging, a job and more.

A version of this entry went out earlier than I intended. Here is the corrected version:

Today we have a new paper that is somehow an extension of the phase transition work highlighted by Jared Tanner and David Donoho; it expands the analysis (with a concept different from the projection of k-neighborly polytopes) to the case where noise is present.
Compressive Sensing over the Grassmann Manifold: a Unified Geometric Framework by Weiyu Xu, Babak Hassibi. The abstract reads:
$\ell_1$ minimization is often used for finding the sparse solutions of an under-determined linear system. In this paper we focus on finding sharp performance bounds on recovering approximately sparse signals using $\ell_1$ minimization, possibly under noisy measurements. While the restricted isometry property is powerful for the analysis of recovering approximately sparse signals with noisy measurements, the known bounds on the achievable sparsity (The "sparsity" in this paper means the size of the set of nonzero or significant elements in a signal vector.) level can be quite loose. The neighborly polytope analysis which yields sharp bounds for ideally sparse signals cannot be readily generalized to approximately sparse signals. Starting from a necessary and sufficient condition, the "balancedness" property of linear subspaces, for achieving a certain signal recovery accuracy, we give a unified \emph{null space Grassmann angle}-based geometric framework for analyzing the performance of $\ell_1$ minimization. By investigating the "balancedness" property, this unified framework characterizes sharp quantitative tradeoffs between the considered sparsity and the recovery accuracy of the $\ell_{1}$ optimization. As a consequence, this generalizes the neighborly polytope result for ideally sparse signals. Besides the robustness in the "strong" sense for \emph{all} sparse signals, we also discuss the notions of "weak" and "sectional" robustness. Our results concern fundamental properties of linear subspaces and so may be of independent mathematical interest.

Of interest are the following two figures where C provides a way to go from the Donoho-Tanner transition to a figure for compressible signals (as opposed to just sparse signals).








At the conclusion of the paper, we can read:
It is well known that l_1 optimization can be used to recover ideally sparse signals in compressive sensing, if the underlying signal is sparse enough. While for ideally sparse signals the results of [Don06b] have given us very sharp bounds on the sparsity threshold that l_1 minimization can recover, sharp bounds for the recovery of general signals or approximately sparse signals were not available. In this paper we analyzed a null space characterization of the measurement matrices for the performance bounding of l_1-norm optimization for general signals or approximately sparse signals.
Using high-dimensional geometry tools, we give a unified null space Grassmann angle-based analytical framework for compressive sensing. This new framework gives sharp quantitative tradeoffs between the signal sparsity parameter and the recovery accuracy of the l_1 optimization for general signals or approximately sparse signals. As expected, the neighborly polytopes result of [Don06b] for ideally sparse signals can be viewed as a special case on this tradeoff curve. It can therefore be of practical use in applications where the underlying signal is not ideally sparse and where we are interested in the quality of the recovered signal. For example, using the results and their extensions in this paper and [Don06b], we are able to give a precise sparsity threshold analysis for weighted l_1 minimization when prior information about the signal vector is available [KXAH09]. In [XKAH10], using the robustness result from this paper, we are able to show that a polynomial-time iterative weighted l_1 minimization algorithm can provably improve over the sparsity threshold of l_1 minimization for interesting classes of signals, even when prior information is not available. In essence, this work investigates the fundamental “balancedness” property of linear subspaces, and may be of independent mathematical interest. In future work, it is interesting to obtain more accurate analysis for compressive sensing under noisy measurements than presented in the current paper.
In the Donoho and Tanner papers, one could see how to produce the figures. It does not seem obvious in this case, and that's a shame, as eventually we need similar tools for different l_1 solvers or even l_0 solvers. I note the use of Random Matrix Theory results in this paper.
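
What one can always do, short of the analytical curves, is trace the transition empirically the way Donoho and Tanner did: fix delta = m/n, sweep rho = k/m, and count exact recoveries. A minimal sketch (sizes kept small so the LP stays fast; basis pursuit is cast as a linear program via the usual positive/negative split):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m = 100, 50                     # delta = m/n = 0.5

def basis_pursuit(A, y):
    # min ||x||_1  s.t.  Ax = y, with x = u - v and u, v >= 0
    mm, nn = A.shape
    res = linprog(np.ones(2 * nn), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * nn), method="highs")
    return res.x[:nn] - res.x[nn:]

for k in (5, 10, 15, 20, 25):      # sweep rho = k/m
    hits = 0
    for _ in range(20):
        A = rng.standard_normal((m, n))
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        hits += np.linalg.norm(basis_pursuit(A, A @ x) - x) <= 1e-6 * np.linalg.norm(x)
    print(f"rho = {k/m:.2f}: {hits}/20 exact recoveries")
```

One sees the success frequency collapse sharply as rho crosses the transition.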


Roman Vershynin has some new slides on Random Matrix Theory to be presented at the upcoming conference on Random Matrices in Paris.
The program of the conference is finally out and it is here.

Here is a new paper on manifold signal processing applied to imaging. It sure looks very interesting, especially its connection to CT imaging: A Geometric Approach to Multi-view Compressive Imaging by Jae Young Park and Michael B. Wakin. The introduction reads:
I. INTRODUCTION
Armed with potentially limited communication and computational resources, designers of distributed imaging systems face increasing challenges in the quest to acquire, compress, and communicate ever richer and higher-resolution image ensembles. In this paper, we consider multiview imaging problems in which an ensemble of cameras collect images describing a common scene. To simplify the acquisition and encoding of these images, we study the effectiveness of non-collaborative compressive sensing (CS) [2, 3] encoding schemes wherein each sensor directly and independently compresses its image using a small number of randomized measurements (see Fig. 1). CS is commonly intended for the encoding of a single signal, and a rich theory has been developed for signal recovery from incomplete measurements by exploiting the assumption that the signal obeys a sparse model. In this paper, we address the problem of how to recover an ensemble of images from a collection of image-by-image random measurements. To do this, we advocate the use of implicitly geometric models to capture the joint structure among the images.

CS is particularly useful in two scenarios. The first is when a high-resolution signal is difficult to measure directly. For example, conventional infrared cameras require expensive sensors, and with increasing resolution such cameras can become extremely costly. A compressive imaging camera has been proposed [4] that can acquire a digital image using far fewer (random) measurements than the number of pixels in the image. Such a camera is simple and inexpensive and can be used not only for imaging at visible wavelengths, but also for imaging at nonvisible wavelengths.

A second scenario where CS is useful is when one or more high-resolution signals are difficult or expensive to encode. Such scenarios arise, for example, in sensor networks and multi-view imaging, where it may be feasible to measure the raw data at each sensor, but joint, collaborative compression of that data among the sensors would require costly communication. As an alternative to conventional Distributed Source Coding (DSC) methods [5], an extension of single-signal CS known as Distributed CS (DCS) [6] has been proposed, where each sensor encodes only a random set of linear projections of its own observed signal. These projections could be obtained either by using CS hardware as described above, or by using a random, compressive encoding of the data collected from a conventional sensor.

While DCS encoding is non-collaborative, an effective DCS decoder should reconstruct all signals jointly to exploit their common structure. As we later discuss, most existing DCS algorithms for distributed imaging reconstruction rely fundamentally on sparse models to capture intra- and inter-signal correlations [6-9]. What is missing from each of these algorithms, however, is an assurance that the reconstructed images have a global consistency, i.e., that they all describe a common underlying scene. This may not only lead to possible confusion in interpreting the images, but more critically may also suggest that the reconstruction algorithm is failing to completely exploit the joint structure of the ensemble. To better extend DCS techniques specifically to problems involving multi-view imaging, we propose in this paper a general geometric framework in which many such reconstruction problems may be cast.

In particular, we explain how viewing the unknown images as living along a low dimensional manifold within the high-dimensional signal space can inform the design of effective joint reconstruction algorithms. Such algorithms can build on existing sparsity-based techniques for CS but ensure a global consistency among the reconstructed images. We refine our discussion by focusing on two settings: far-field and near-field multi-view imaging. Finally, as a proof of concept, we demonstrate a "manifold lifting" algorithm in a specific far-field multi-view scenario where the camera positions are not known a priori and we only observe a small number of random measurements at each sensor. Even in such discouraging circumstances, by effectively exploiting the geometrical information preserved in the manifold model, we are able to accurately reconstruct both the underlying scene and the camera positions.

Here is another very interesting one: Reconstruction of Sparse Signals from Distorted Randomized Measurements by Petros Boufounos. The abstract reads:

In this paper we show that, surprisingly, it is possible to recover sparse signals from nonlinearly distorted measurements, even if the nonlinearity is unknown. Assuming just that the nonlinearity is monotonic, we use the only reliable information in the distorted measurements: their ordering. We demonstrate that this information is sufficient to recover the signal with high precision and present two approaches to do so. The first uses order statistics to compute the minimum mean squared error (MMSE) estimate of the undistorted measurements and uses it with standard compressive sensing (CS) reconstruction algorithms. The second uses the principle of consistent reconstruction to develop a deterministic nonlinear reconstruction algorithm that ensures that measurements of the reconstructed signal have ordering consistent with the ordering of the distorted measurements. Our experiments demonstrate the superior performance of both approaches compared to standard CS methods.
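
The flavor of the first approach can be conveyed in a few lines (my toy rendition, not Petros' exact estimator): only the ranks of the distorted measurements are trusted, and they are mapped back to Gaussian quantiles before feeding any standard CS solver.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
m = 200
y_clean = rng.standard_normal(m)        # stands in for Ax with Gaussian A
y_dist = np.tanh(3 * y_clean)           # unknown monotonic distortion

ranks = np.argsort(np.argsort(y_dist))  # the ordering is all we keep
y_hat = norm.ppf((ranks + 0.5) / m)     # Gaussian-quantile surrogate, up to scale

print("correlation with undistorted:", np.corrcoef(y_hat, y_clean)[0, 1])
```

Since a monotonic distortion preserves ordering, the surrogate is highly correlated with the undistorted measurements, which is all an l_1 solver then needs (up to an unknown scale on the recovered signal).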


This one just came up on my radar screen and is behind a paywall [update: this link goes to Amit's page directly, thanks Amit]: Compressive light field imaging by Amit Ashok and Mark Neifeld. The abstract reads:

Light field imagers such as the plenoptic and the integral imagers inherently measure projections of the four dimensional (4D) light field scalar function onto a two dimensional sensor and therefore, suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatioangular resolution trade-off and allow high-resolution capture of the (4D) light field function with multiple measurements at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that, compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time compared to a conventional light field imager in order to achieve an equivalent light field reconstruction quality.
Let us note the new address of Mark Neifeld.

The next paper reminded me of others and is interesting in that the PSF is far from being a nice little delta: Non-uniform Deblurring for Shaken Images by Oliver Whyte, Josef Sivic, Andrew Zisserman, Jean Ponce. The abstract reads:
Blur from camera shake is mostly due to the 3D rotation of the camera, resulting in a blur kernel that can be significantly non-uniform across the image. However, most current deblurring methods model the observed image as a convolution of a sharp image with a uniform blur kernel. We propose a new parametrized geometric model of the blurring process in terms of the rotational velocity of the camera during exposure. We apply this model to two different algorithms for camera shake removal: the first one uses a single blurry image (blind deblurring), while the second one uses both a blurry image and a sharp but noisy image of the same scene. We show that our approach makes it possible to model and remove a wider class of blurs than previous approaches, including uniform blur as a special case, and demonstrate its effectiveness with experiments on real images.

Dualization of Signal Recovery Problems by Patrick L. Combettes, Dinh Dung, and Bang Cong Vu. The abstract reads:
In convex optimization, duality theory can sometimes lead to simpler solution methods than those resulting from direct primal analysis. In this paper, this principle is applied to a class of composite variational problems arising in image (and, more generally, signal) recovery. These problems are not easily amenable to solution by current methods but they feature Fenchel-Moreau-Rockafellar dual problems that can be solved reliably by forward-backward splitting and allow for a simple construction of primal solutions from dual solutions. The proposed framework is shown to capture and extend several existing duality-based signal recovery methods and to be applicable to a variety of new problems beyond their scope.
This one is a little old, but it touches on group testing, which shares some overwhelming connections with compressive sensing: Statistical physics of group testing by Marc Mezard, Marco Tarzia and Cristina Toninelli. The abstract reads:
This paper provides a short introduction to the group testing problem, and reviews various aspects of its statistical physics formulation. Two main issues are discussed: the optimal design of pools used in a two-stage testing experiment, like the one often used in medical or biological applications, and the inference problem of detecting defective items based on pool diagnosis. The paper is largely based on: M. Mezard and C. Toninelli, arXiv:0706.3104, and M. Mezard and M. Tarzia, Phys. Rev. E 76, 041124 (2007).
and

Group testing with Random Pools: Phase Transitions and Optimal Strategy by Marc Mezard, Marco Tarzia and Cristina Toninelli. The abstract reads:
The problem of Group Testing is to identify defective items out of a set of objects by means of pool queries of the form "Does the pool contain at least a defective?". The aim is of course to perform detection with the fewest possible queries, a problem which has relevant practical applications in different fields including molecular biology and computer science. Here we study GT in the probabilistic setting focusing on the regime of small defective probability and large number of objects, $p \to 0$ and $N \to \infty$. We construct and analyze one-stage algorithms for which we establish the occurrence of a non-detection/detection phase transition resulting in a sharp threshold, $\bar M$, for the number of tests. By optimizing the pool design we construct algorithms whose detection threshold follows the optimal scaling $\bar M\propto Np|\log p|$. Then we consider two-stages algorithms and analyze their performance for different choices of the first stage pools. In particular, via a proper random choice of the pools, we construct algorithms which attain the optimal value (previously determined in Ref. [16]) for the mean number of tests required for complete detection. We finally discuss the optimal pool design in the case of finite $p$.
Group testing, random batches and phase transitions: that sounds quite familiar, huh?
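
Indeed. For the curious, the simplest one-stage decoder is only a few lines; the pool design below (each item joins each pool independently) is my toy setup, not the optimized design of the papers:

```python
import numpy as np

rng = np.random.default_rng(5)
N, p = 1000, 0.01                            # items and defect probability
x = rng.random(N) < p                        # true defectives
M = int(3 * N * p * abs(np.log(p)))          # tests, ~ the N p |log p| scaling
pools = rng.random((M, N)) < 1.0 / (N * p)   # random pool membership
y = (pools.astype(int) @ x.astype(int)) > 0  # pool positive iff it holds a defective

# Decoding: any item appearing in a negative pool is declared clean.
declared = np.ones(N, dtype=bool)
for j in np.flatnonzero(~y):
    declared[pools[j]] = False

print("defectives:", int(x.sum()), "declared:", int(declared.sum()),
      "false alarms:", int((declared & ~x).sum()))
```

This rule never misses a true defective; the test budget only controls the number of false alarms, and the phase transitions studied above concern exactly that trade-off.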

Here is a presentation of a poster entitled: Can Compressive Sensing Improve Low-contrast Object Detectability in Accelerated MRI Applications? by Joshua D. Trzasko, Armando Manduca, Zhonghao Bao, Scott O. Stiving, Kiaran P. McGee, Matt A. Bernstein. The description reads:
Purpose: Compressive Sensing (CS) methods can reduce image artifacts when reconstructing undersampled data sets. Most MRI applications of CS, however, have focused on high-contrast objects such as gadolinium-enhanced vessels, rather than on low-contrast object detectability (LCOD). Using a novel computational framework, we rigorously examine whether CS reconstruction can improve the LCOD performance of several standard techniques for undersampled MRI reconstruction, across a variety of undersampling rates and strategies. Methods and Materials: The American College of Radiology (ACR) quality control (QC) phantom was imaged on a GE 14.0 1.5T MRI using our routine quality assurance protocol and an 8-channel head coil. The raw k-space data corresponding to the 5.1% contrast detectability plane was retrospectively undersampled along the phase-encoded direction at 10 different rates (10-100%) and, for each, using 3 different distribution strategies: 1) low-frequency band only; 2) uniform sampling; 3) random sampling (Poisson disk). For the latter case, 5 sampling instances were generated at each rate. Each undersampled data set was reconstructed using 3 different strategies: 1) zero-filling with root sum-of-squares combination; 2) Tikhonov-SENSE; and 3) Compressive Sensing (L1-minimization, finite-difference sparsity). Reconstruction results were then analyzed with our in-house developed QC software to automatically determine the fraction of completely visually detectable spokes, which is a measure of LCOD performance. Results: Across all sampling rates and under all sampling strategies, the LCOD score for CS reconstructions was consistently equal to or higher than that of the other two reconstruction methods. The CS advantage was especially pronounced at very low sampling rates (≤30%). Conclusion: Although most CS work to date has focused on high-contrast objects, CS reconstructions consistently improved LCOD compared to several standard MRI reconstruction techniques for undersampled data. These results suggest that CS reconstruction is useful not only for undersampled high-contrast objects, but can improve LCOD as well.
Version 2 of Estimation with Random Linear Mixing, Belief Propagation and Compressed Sensing is now available.


Namrata Vaswani will give a talk at UCLA entitled Recursive Reconstruction of Sparse Signal Sequences, on Tuesday, May 25, 2010 at 11:00am, in Engr IV Maxwell Room 57-124.

Abstract
Consider the problem of recursively and causally reconstructing a time sequence of sparse signals from a greatly reduced number of linear projection measurements at each time. The signals are sparse in some transform domain referred to as the sparsity basis and their sparsity patterns can change with time. Some key applications where this problem occurs include dynamic MR imaging for real-time applications such as MR image-guided surgery or real-time single-pixel video imaging. Since the recent introduction of compressive sensing (CS), the static sparse reconstruction problem has been thoroughly studied. But most existing algorithms for the dynamic problem are batch solutions with very high complexity. Using the empirically observed fact that sparsity patterns change slowly over time, the recursive reconstruction problem can be formulated as one of sparse reconstruction with partially known support. We develop two classes of approaches to solve this problem - CS-Residual and Modified-CS, both of which have the same complexity as CS at a single time instant (simple CS), but achieve exact/accurate reconstruction using much fewer measurements.

Under the practically valid assumption of slowly changing support, Modified-CS achieves provably exact reconstruction using much fewer noise-free measurements than those needed to provide the same guarantee for simple CS. When using noisy measurements, under fairly mild assumptions, and again using fewer measurements, it can be shown that the error bounds for both Modified-CS and CS-Residual are much smaller; and most importantly, their errors are "stable" (remain bounded by small time-invariant values) over time. The proof of stability is critical for any recursive algorithm since it ensures that the error does not blow up over time. Experiments for the dynamic MRI application back up these claims for real data. Important extensions that also use the slow change of signal values over time are developed.
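
The Modified-CS idea is easy to state in code: penalize the l_1 norm only outside the support estimate T carried over from the previous time instant. A minimal static sketch (the LP formulation and the problem sizes are my choices for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def modified_cs(A, y, T):
    # min ||x_{T^c}||_1  s.t.  Ax = y, with the split x = u - v, u, v >= 0
    m, n = A.shape
    c = np.ones(2 * n)
    c[list(T)] = 0.0                  # no l1 penalty on the known support...
    c[[n + int(i) for i in T]] = 0.0  # ...for both halves of the split
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(6)
n, m, k = 200, 40, 20
supp = rng.choice(n, k, replace=False)
x = np.zeros(n); x[supp] = rng.standard_normal(k)
A = rng.standard_normal((m, n))

T = supp[:16]                         # 80% of the support known from the past
x_hat = modified_cs(A, A @ x, T)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With m = 2k measurements, plain basis pursuit typically sits past its transition here, while the partial support knowledge pulls the problem back into the recoverable regime.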


There is a job announcement at Qualcomm that includes as a requirement the need to know about compressed sensing.

Requisition # G1880018
Job Title Sr Systems Engineer Algorithms / Corporate R&D
Post Date 5/20/2010
Division Corporate Research & Development
Job Area Engineering - Systems
Location California - San Diego
Job Function QUALCOMM Corporate R&D has a mission to advance wireless and mobile communications to create compelling user experience with next generation mobile devices. We seek a Systems Engineer with demonstrable knowledge and experience in algorithms and modeling, embedded device architecture, protocols and system design.

We expect our systems engineers to keep up to date and understand technological advances in areas directly and indirectly related to mobile and wireless systems and devices and channel those trends toward enabling new applications. To that end, the responsibilities include developing new system architectures, protocols, services, standards, middleware or hardware enhancements.

Skills/Experience
  • Algorithm design and analysis
  • Statistics and data mining
  • Machine learning/robotics
  • Data modeling and analysis
  • Graph theory and information theory
  • Real-time, embedded and/or distributed systems
  • Mobile and/or hardware optimizations
  • Sensor data analysis and compressed sensing
Responsibilities Candidates will be providing deep understanding and expertise in algorithms, data acquisition, data modeling and analysis.

Will also work in information theory, compressed sensing, theory and practice of machine learning, communication systems, automatic control systems.

Implementation and architecture of mobile platforms, including special purpose hardware engines, data buses and power management.
Education Requirements Required: Master's, Computer Engineering and/or Computer Science and/or Electrical Engineering
Preferred: Doctorate, Computer Engineering and/or Computer Science and/or Electrical Engineering


I'll add it to the compressive sensing jobs page.

Friday, May 21, 2010

CS: Around the blogs in 80 hours (part deux), Lightguides in CMOS, An analog snapshot spectral imager

Bob Sturm kindly responded to my taunt in Paper of the Day (Po'D): Compressed Sensing and Coherent Dictionaries Edition. This entry points to his first Paper of the Day entry, which points to the paper On Compressed Sensing and Its Application to Speech and Audio Signals by Mads Christensen, Jan Østergaard, and Søren Holdt Jensen. The paper ends with this last sentence:

This basically means that sparse decompositions with compressed sensing works no worse than sparse decompositions did in the first place
My take from this is: in an engineering field where sensors can already acquire signals with high frequency and fidelity, a compressive microphone would need a niche market that current microphones are not serving. What would that niche market be?

Thanks Bob.

* Alex Gittens has a question on Group sparsification.
* Gonzalo Vazquez-Vilar does a Quick survey on wideband spectrum sensing for CR.
* The following blog might look like one of those spam-ridden honeypots filling the interwebs, but in fact it is not. It is a genuine source of information on imaging sensors and I learn from it every time I check it out. Here is the address:


Of interest this time is this entry:


What the difference is between Back Side Illumination (BSI) and Front Side Illumination (FSI) (from here)


In the entry, one can read this explanation of the underlying technology:



The description of the lightguides made me think of the Anger Coding Scheme mentioned in one of the Technologies that do not Exist entries (the series is here). Maybe I should come back to that later.

I initially came to that site from David Brady's blog who released the following entries in April:


Generalization of the Lyot filter and its application to snapshot spectral imaging by Alistair Gorman, David William Fletcher-Holmes, and Andrew Robert Harvey. The abstract reads:
A snapshot multi-spectral imaging technique is described which employs multiple cascaded birefringent interferometers to simultaneously spectrally filter and demultiplex multiple spectral images onto a single detector array. Spectral images are recorded directly without the need for inversion and without rejection of light and so the technique offers the potential for high signal-to-noise ratio. An example of an eight-band multi-spectral movie sequence is presented; we believe this is the first such demonstration of a technique able to record multi-spectral movie sequences without the need for computer reconstruction.
Obviously, the fact that there is no need for computer reconstruction makes this system not a compressive imager the way we usually define it, and the question then becomes: can additional mixing be undertaken with the same system so that a computer reconstruction is needed? In that case, how much does such an approach bring in terms of performance?

Thursday, May 20, 2010

CS: Around the blogs in 80 hours, Sparse Recovery Algorithms: Sufficient Conditions in terms of Restricted Isometry Constants and a PhD Position.

Yesterday's entry triggered two comments. One of them reads as follows:

Dear Igor,

You have to be careful: the paper of Candes et al. uses an *analysis* prior, which means they consider the L1 norm of the inner products with the atoms, not the L1 norm of the coefficients in the dictionary.

For ortho systems, these are equivalent, but for redundant dictionaries, the more coherent your dictionary, the more these two reconstructions differ.

This is why this (wonderful) result tells another story. I think it is important to make the distinction between these "two kinds of L1", analysis vs. synthesis.

For some redundant systems (but not all!) which are tight frames (or approximately tight), like curvelets, etc., this analysis prior makes lots of sense, because for some image classes (e.g. cartoons) one knows that the analysis is sparse.

Another fascinating question is for non-tight-frame analysis priors like TV: what can you tell?
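
The commenter's distinction is easy to see numerically. In the toy below (construction mine), the two objectives coincide for an orthobasis but split apart for a redundant spikes+DCT tight frame:

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

def synthesis_l1(D, f):
    # synthesis prior: min ||a||_1  s.t.  D a = f  (split a = u - v)
    n, q = D.shape
    res = linprog(np.ones(2 * q), A_eq=np.hstack([D, -D]), b_eq=f,
                  bounds=[(0, None)] * (2 * q), method="highs")
    return res.fun

n = 32
rng = np.random.default_rng(7)
f = rng.standard_normal(n)

B = idct(np.eye(n), norm='ortho', axis=0)   # orthonormal DCT basis
D = np.hstack([np.eye(n), B]) / np.sqrt(2)  # redundant tight frame (spikes + DCT)

# the analysis prior evaluates ||D^T f||_1; synthesis hunts for the sparsest expansion
print("ortho    : analysis %.3f  synthesis %.3f"
      % (np.abs(B.T @ f).sum(), synthesis_l1(B, f)))
print("redundant: analysis %.3f  synthesis %.3f"
      % (np.abs(D.T @ f).sum(), synthesis_l1(D, f)))
```

For B, the expansion is unique, so the two numbers agree; for D, the analysis value is generically strictly larger than the synthesis value (D^T f is merely one feasible expansion, since D D^T = I), so minimizing one is not minimizing the other.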


Bob Sturm wrote about the Probabilistic Matching Pursuit algorithm featured here recently in:

Laurent Jacques wonders about a New class of RIP matrices? What's a submodular function, you ask? I am glad you did, I had the same question. Here is an answer.

I mentioned Random Matrix Theory a while back; Terry Tao has some new results that he explains in Random matrices: Localization of the eigenvalues and the necessity of four moments. He makes a reference to the book An Introduction to Random Matrices by Greg Anderson, Alice Guionnet and Ofer Zeitouni. Of related interest:

Also, Simon Foucart let me know of one of his proceedings papers, entitled Sparse Recovery Algorithms: Sufficient Conditions in terms of Restricted Isometry Constants. The abstract reads:
We review three recovery algorithms used in Compressive Sensing for the reconstruction of s-sparse vectors $x \in \mathbb{C}^N$ from the mere knowledge of linear measurements $y = Ax \in \mathbb{C}^m$, $m < N$. For each of the algorithms, we derive improved conditions on the restricted isometry constants of the measurement matrix A that guarantee the success of the reconstruction. These conditions are $\delta_{2s} < 0.4652$ for basis pursuit, $\delta_{3s} < 0.5$ and $\delta_{2s} < 0.25$ for iterative hard thresholding, and $\delta_{4s} < 0.3843$ for compressive sampling matching pursuit. The arguments also apply to almost sparse vectors and corrupted measurements. The analysis of iterative hard thresholding is surprisingly simple. The analysis of basis pursuit features a new inequality that encompasses several inequalities encountered in Compressive Sensing.
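
Since the abstract remarks that the analysis of iterative hard thresholding is surprisingly simple, it is worth noting that the algorithm itself is too. A minimal sketch (the step size and the normalized Gaussian matrix are my choices; the restricted isometry conditions above are what such a matrix only approximates at these sizes):

```python
import numpy as np

def iht(A, y, s, iters=300):
    # x <- H_s( x + mu * A^T (y - A x) ), where H_s keeps the s largest entries
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + mu * (A.T @ (y - A @ x))
        x = np.zeros_like(g)
        keep = np.argsort(np.abs(g))[-s:]
        x[keep] = g[keep]
    return x

rng = np.random.default_rng(8)
n, m, s = 256, 100, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)   # roughly unit-norm columns
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

x_hat = iht(A, A @ x_true, s)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```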

Finally, Laurent Jacques is advertising a PhD position related to compressed sensing:
PhD position on "Solving Optical Inverse Problems by Promoting Sparsity"

Université catholique de Louvain (UCL)
Louvain-la-Neuve, Belgium.

DESCRIPTION:

Nowadays, assuming that a signal (e.g., a 1-D signal, an image or a volume of data) has a sparse representation, i.e., that this signal is linearly described with few elements taken from a suitable basis, is a ubiquitous hypothesis validated in many different scientific domains. Interestingly, this sparsity assumption is at the heart of methods for solving inverse problems, i.e., estimating a signal from some linear, distorting observations. Sparsity stabilizes (or regularizes) these signal estimation techniques, which are often based on L1-norm (or Total Variation norm) minimization and greedy methods.

The PhD project concerns the application of the sparsity principle to solving particular inverse problems occurring in optics. Often with optical sensors, the observation of objects of interest is mainly corrupted by two elements: the noise, due either to the sensor (e.g., electronic/thermal noise) or to the physics of light (e.g., Poisson noise), and the Point Spread Function (PSF) of the sensor, which blurs (or convolves) the pure image.

The student will develop mathematical methods and algorithms for image denoising and image deconvolution common to confocal microscopy, deflectometry and interferometry, where the sparsity assumption provides significant gains compared to former methods like back-projection, least squares methods, or Tikhonov regularization. Connections will also be established with the recent field of compressed sensing, where the sparsity principle really drives the design of innovative optical sensors.

ADVISORS:
* Prof. Philippe Antoine (IMCN, UCL)
* Dr Laurent Jacques (ICTEAM, UCL)
* Prof. C. De Vleeschouwer (ICTEAM, UCL)

COLLABORATORS:
* Prof. François Chaumont (ISV, UCL)
* Prof. Batoko Henri (ISV, UCL)
* Abdelmounaim Errachid (ISV, UCL)

INDUSTRIAL PARTNERSHIP:
* Lambda-X S.A., Nivelles, Belgium.

PROFILE:
* M.Sc. in Applied Mathematics, Physics, Electrical Engineering, or Computer Science;
* Knowledge (even partial) in the following topics constitutes assets:
o Convex Optimization methods,
o Signal/Image Processing,
o Classical Optics,
o Compressed Sensing and inverse problems.
* Experience with Matlab, C and/or C++.
* Good communication skills, both written and oral;
* Speaking fluently in English or French is required. Writing in English is mandatory.

WE OFFER:

* A research position in a dynamic and advanced high-tech environment, working on leading-edge technologies and having many international contacts.
* Funding for the beginning of the thesis with the possibility to extend it by a couple of years or to apply for a Belgian NSF grant.

APPLICATION:

Applications should include a detailed resume, copy of grade sheets for B.Sc. and M.Sc. Names and complete addresses of referees are welcome.

Please send applications by email to

laurent.jacques@uclouvain.be
ph.antoine@uclouvain.be
christophe.devleeschouwer@uclouvain.be

Questions about the subject or the position should be addressed to the same email addresses.


Credit: NASA / JPL / SSI / processed by Astro0/unmannedspaceflight.com, Peering through the fountains of Enceladus, Here, unmannedspaceflight.com forum member Astro0 has combined two frames captured by Cassini during its May 18 encounter with Enceladus to include both the backlit plumes and the background imagery: Titan and the rings.

Wednesday, May 19, 2010

CS: "Perhaps coherence is not necessary"


In Crossing the Donoho-Tanner Border, I mentioned the following:

One could also, as I jokingly suggested, increase the size of the problem so that something that was barely sparse in one instance becomes highly sparse in the new setting.

and pinpointed how to read the Donoho-Tanner phase transition to see what to expect from l_1-minimization solvers. I am also sure y'all have noticed in the recent long post of the week that Emmanuel Candès, Yonina Eldar and Deanna Needell came up with a more mind-blowing argument in Compressed sensing with coherent and redundant dictionaries. If you don't want to read the previous entry, the abstract reads:
This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an l_1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of l_1-analysis for such problems.

Of considerable interest:

...Hence, the result says that for any dictionary, signals f such that D* f decays rapidly can be approximately reconstructed using l1-analysis from just a few random measurements....In words, if the columns of the Gram matrix are reasonably sparse and if f happens to have a sparse expansion, then the frame coefficient sequence D* f is also sparse. All the transforms discussed above, namely, the Gabor, curvelet, undecimated wavelet, oversampled Fourier transform all have nearly diagonal Gram matrices and thus, sparse columns....


And then you have to read page 7 with its little surprise. I wonder what our audio friends would think of that.... yeah.... I am talking to you, Bob. I know you got two papers out, but this is important stuff :)

With regards to yesterday's question, Mingyan Liu tells me her group is currently investigating these sparse matrices.