Friday, March 05, 2010

CS: One of the longest entries

After the Wired article and the less technical but important entry that followed, let us come back to our favorite subject. This entry is going to be long and covers material from many different directions. Enjoy!

Before I went on a break, I asked Dick Gordon to comment on the following paper: SART-type Image Reconstruction from a Limited Number of Projections with the Sparsity Constraint by Hengyong Yu and Ge Wang. The abstract reads:

Based on the recent mathematical findings on solving the linear inverse problems with sparsity constraints by Daubechies et al., here we adapt a simultaneous algebraic reconstruction technique (SART) for image reconstruction from a limited number of projections subject to a sparsity constraint in terms of an invertible compression transform. The algorithm is implemented with an exemplary Haar wavelet transform and tested with a modified Shepp-Logan phantom. Our preliminary results demonstrate that the sparsity constraint helps effectively improve the quality of reconstructed images and reduce the number of necessary projections.
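To get a feel for the mechanism, here is a minimal sketch of the idea on a 1-D toy problem (my own code and parameter choices, not the authors' implementation): a SART-style normalized update followed by soft thresholding of the Haar coefficients, applied to a piecewise-constant signal that is sparse in the Haar basis.

```python
import numpy as np

def haar(x):
    """Full-depth orthonormal 1-D Haar transform (len(x) must be a power of 2)."""
    c = x.astype(float).copy()
    n = len(c)
    while n > 1:
        a = (c[0:n:2] + c[1:n:2]) / np.sqrt(2)  # pairwise averages
        d = (c[0:n:2] - c[1:n:2]) / np.sqrt(2)  # pairwise differences
        c[:n // 2], c[n // 2:n] = a, d
        n //= 2
    return c

def ihaar(c):
    """Inverse of haar()."""
    x = c.astype(float).copy()
    n = 1
    while n < len(x):
        a, d = x[:n].copy(), x[n:2 * n].copy()
        x[0:2 * n:2] = (a + d) / np.sqrt(2)
        x[1:2 * n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return x

def sart_sparse(A, b, lam=1.0, thresh=0.05, iters=300):
    """SART-style iteration with a Haar-domain soft-thresholding step."""
    row = np.abs(A).sum(axis=1)  # SART row normalization
    col = np.abs(A).sum(axis=0)  # SART column normalization
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + lam * (A.T @ ((b - A @ x) / row)) / col
        c = haar(x)
        c = np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)  # enforce sparsity
        x = ihaar(c)
    return x

rng = np.random.default_rng(0)
n, m = 16, 10                                   # underdetermined: fewer rows than unknowns
x_true = np.zeros(n)
x_true[:6], x_true[6:] = 2.0, -1.0              # piecewise constant, sparse in Haar basis
A = rng.normal(size=(m, n))
b = A @ x_true
x_rec = sart_sparse(A, b)
print(np.linalg.norm(b - A @ x_rec) / np.linalg.norm(b))
```

The thresholding step is what injects the sparsity prior; without it, the underdetermined SART iteration alone cannot single out the piecewise-constant solution.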
Dick (one of the authors of the ART algorithm) kindly replied with the following:
Dear Igor,
Last time I reviewed the problem of limited view (# and/or angle range) computed tomography was:

Rangayyan, R.M., A.P. Dhawan & R. Gordon (1985). Algorithms for limited-view computed tomography: an annotated bibliography and a challenge. Applied Optics 24(23), 4000-4012.

published simultaneously with:

Dhawan, A.P., R.M. Rangayyan & R. Gordon (1985). Image restoration by Wiener deconvolution in limited-view computed tomography. Applied Optics 24(23), 4013-4020.

as part of:

Gordon, R. (1985). Industrial applications of computed tomography and NMR imaging: an OSA topical meeting. (Invited). Applied Optics 24(23), 3948-3949.

An earlier article was:

Dhawan, A.P., R.M. Rangayyan & R. Gordon (1984). Wiener filtering for deconvolution of geometric artifacts in limited-view image reconstruction. Proc. SPIE 515, 168-172.

Now, only one paper not by my group cited the Wiener filtration work:

Cha, S.S. & H.W. Sun (1990). Tomography for reconstructing continuous fields from ill-posed multidirectional interferometric data. Applied Optics 29(2), 251-258. (attached)

though they didn’t actually try it. You will note that Yu & Wang (2010) did not try the acid test of both a small number of angles and a limited angle range, which is what we did. So, I think it’s time for someone (you, maybe?) to bite the bullet, resurrect Wiener filtration or other deconvolution methods, and get on with testing whether deconvolution of the point spread function of any solution method for underdetermined equations gives a better result than the method itself. After that, it will be time for another review article on the subject.
Thanks, Dick!


The googlebot found a presentation that includes initial findings on compressive sensing for LIDAR by Charles Creusere. It also found: Sparse Multipath Channel Estimation Using Compressive Sampling Matching Pursuit Algorithm by Guan Gui, Qun Wan, Wei Peng and Fumiyuki Adachi. The abstract reads:
The broadband wireless channel is time-dispersive and becomes strongly frequency-selective. However, in most cases, the channel is composed of a few dominant taps and a large number of taps that are zero or approximately zero. To exploit the sparsity of the multipath channel (MPC), two kinds of methods have been proposed: greedy algorithms and convex programming. Greedy algorithms are easy to implement but not stable; on the other hand, convex programming is stable but difficult to apply to practical channel estimation problems. In this paper, we introduce a novel channel estimation strategy using the compressive sampling matching pursuit (CoSaMP) algorithm proposed in [1], which combines the greedy algorithm with the convex programming method. The effectiveness of the proposed algorithm is confirmed through comparisons with existing methods.
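For readers unfamiliar with CoSaMP (Needell and Tropp's algorithm, referenced as [1] above), here is a compact sketch applied to a toy sparse-channel problem; the dimensions, tap model, and training matrix are my own illustrative choices, not the paper's setup.

```python
import numpy as np

def cosamp(A, b, s, iters=30, tol=1e-9):
    """CoSaMP: greedy support identification + least-squares pruning."""
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()
    for _ in range(iters):
        proxy = A.T @ r                                  # correlate residual with columns
        omega = np.argsort(np.abs(proxy))[-2 * s:]       # 2s largest entries of proxy
        T = np.union1d(omega, np.flatnonzero(x))         # merge with current support
        z, *_ = np.linalg.lstsq(A[:, T], b, rcond=None)  # least squares on merged support
        x = np.zeros(n)
        keep = np.argsort(np.abs(z))[-s:]                # prune back to s largest
        x[T[keep]] = z[keep]
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
    return x

# toy sparse channel: a few dominant taps, the rest exactly zero
rng = np.random.default_rng(0)
m, n, s = 60, 128, 4
taps = rng.choice(n, s, replace=False)
h = np.zeros(n)
h[taps] = rng.choice([-1.0, 1.0], s) * (1.0 + rng.random(s))  # dominant taps
A = rng.normal(size=(m, n)) / np.sqrt(m)                      # random training matrix
y = A @ h
h_hat = cosamp(A, y, s)
print(np.linalg.norm(h_hat - h))
```

The pruned least-squares step is what gives CoSaMP the stability of convex methods while keeping the per-iteration cost of a greedy pursuit.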

Arxiv also delivered the following: Sequential Compressed Sensing by Dmitry Malioutov, Sujay Sanghavi, Alan Willsky. The abstract reads:
Compressed sensing allows perfect recovery of sparse signals (or signals sparse in some basis) using only a small number of random measurements. Existing results in compressed sensing literature have focused on characterizing the achievable performance by bounding the number of samples required for a given level of signal sparsity. However, using these bounds to minimize the number of samples requires a-priori knowledge of the sparsity of the unknown signal, or the decay structure for near-sparse signals. Furthermore, there are some popular recovery methods for which no such bounds are known.
In this paper, we investigate an alternative scenario where observations are available in sequence. For any recovery method, this means that there is now a sequence of candidate reconstructions. We propose a method to estimate the reconstruction error directly from the samples themselves, for every candidate in this sequence. This estimate is universal in the sense that it is based only on the measurement ensemble, and not on the recovery method or any assumed level of sparsity of the unknown signal. With these estimates, one can now stop observations as soon as there is reasonable certainty of either exact or sufficiently accurate reconstruction. They also provide a way to obtain "run-time" guarantees for recovery methods that otherwise lack a-priori performance bounds.
We investigate both continuous (e.g. Gaussian) and discrete (e.g. Bernoulli) random measurement ensembles, both for exactly sparse and general near-sparse signals, and with both noisy and noiseless measurements.

and version 2 of Approximate Sparsity Pattern Recovery: Information-Theoretic Lower Bounds by Galen Reeves and Michael Gastpar is out. The abstract reads:
Recovery of the sparsity pattern (or support) of a sparse vector from a small number of noisy linear projections (or samples) is a common problem that arises in signal processing and statistics. In this paper, the high-dimensional setting is considered. It is shown that if the sampling rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector, then the optimal sparsity pattern estimate will have a constant fraction of errors. Lower bounds on the sampling rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector. The tightness of the bounds in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing achievable bounds. Near optimality is shown for a wide variety of practically motivated signal models.
I totally missed this one but was reminded of it thanks to an anonymous commenter: Compressive Rendering: A Rendering Application of Compressed Sensing by Pradeep Sen and Soheil Darabi. The abstract reads:
Recently, there has been growing interest in compressed sensing (CS), the new theory that shows how a small set of linear measurements can be used to reconstruct a signal if it is sparse in a transform domain. Although CS has been applied to many problems in other fields, in computer graphics it has only been used so far to accelerate the acquisition of light transport. In this paper, we propose a novel application of compressed sensing by using it to accelerate ray-traced rendering in a manner that exploits the sparsity of the final image in the wavelet basis. To do this, we raytrace only a subset of the pixel samples in the spatial domain and use a simple, greedy CS-based algorithm to estimate the wavelet transform of the image during rendering. Since the energy of the image is concentrated more compactly in the wavelet domain, fewer samples are required for a result of given quality than with conventional spatial-domain rendering. By taking the inverse wavelet transform of the result, we compute an accurate reconstruction of the desired final image. Our results show that our framework can achieve high-quality images with approximately 75% of the pixel samples using a non-adaptive sampling scheme. In addition, we also perform better than other algorithms that might be used to fill in the missing pixel data, such as interpolation or inpainting. Furthermore, since the algorithm works in image space, it is completely independent of scene complexity.
I'll come back to it later. In a different area, I am not sure the authors realize how interesting the next paper is: Sparse Legendre expansions via l_1-minimization by Holger Rauhut, Rachel Ward. The abstract reads:
We consider the recovery of polynomials that are sparse with respect to the basis of Legendre polynomials from a small number of random sampling points. We show that a Legendre s-sparse polynomial of maximal degree N can be recovered from m of the order of s log^4(N) random samples that are chosen independently according to the Chebyshev probability measure \pi^(-1)(1-x^2)^(-1/2)dx on [-1, 1]. As an efficient recovery method, l_1-minimization can be used. We establish these results by showing the restricted isometry property of a preconditioned random Legendre matrix. Our results extend to a large class of orthogonal polynomial systems. As a byproduct, we obtain condition number estimates for preconditioned random Legendre matrices that should be of interest on their own.
The use of Legendre polynomials is central in most schemes (Pn, Sn) devised in solving the linear transport equation or the radiative transfer equation.
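The preconditioning at the heart of the result can be checked numerically in a few lines (my own sketch, not the authors' code): sample points from the Chebyshev measure, weight each row by w(x) = (pi/2)^(1/2) (1-x^2)^(1/4), and the empirical Gram matrix of the orthonormalized Legendre system concentrates around the identity — which is what makes the RIP argument go through.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
m, N = 50000, 8

# sample from the Chebyshev measure pi^{-1}(1-x^2)^{-1/2} dx on [-1, 1]
x = np.cos(np.pi * rng.random(m))

# Legendre system orthonormal w.r.t. dx/2, preconditioned by w(x)
w = np.sqrt(np.pi / 2) * (1 - x ** 2) ** 0.25
Phi = np.column_stack([
    w * np.sqrt(2 * j + 1) * L.legval(x, np.eye(N)[j]) for j in range(N)
])

G = Phi.T @ Phi / m   # empirical Gram matrix: should be close to the identity
print(np.max(np.abs(G - np.eye(N))))
```

Without the weight w(x) the sampled Legendre columns are far from orthonormal near the endpoints, which is exactly the obstruction the Chebyshev sampling and preconditioning remove.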

The following three papers may be changing our reliance on the RIP concept: Shifting Inequality and Recovery of Sparse Signals by Tony Cai, Lie Wang and Guangwu Xu. The abstract reads:
In this paper we present a concise and coherent analysis of the constrained l1 minimization method for stable recovery of high-dimensional sparse signals in both the noiseless and noisy cases. The analysis is surprisingly simple and elementary, while leading to strong results. In particular, it is shown that the sparse recovery problem can be solved via l1 minimization under weaker conditions than what is known in the literature. A key technical tool is an elementary inequality, called the Shifting Inequality, which, for a given nonnegative decreasing sequence, bounds the l2 norm of a subsequence in terms of the l1 norm of another subsequence by shifting the elements to the upper end.
New Bounds for Restricted Isometry Constants by Tony Cai, Lie Wang and Guangwu Xu. The abstract reads:
In this paper we show that if the restricted isometry constant \delta_k of the compressed sensing matrix satisfies
\delta_k < 0.307
then k-sparse signals are guaranteed to be recovered exactly via l_1 minimization when no noise is present, and k-sparse signals can be estimated stably in the noisy case. It is also shown that the bound cannot be substantively improved. An explicit example is constructed in which \delta_k = (k-1)/(2k-1) < 0.5, but it is impossible to recover certain k-sparse signals.
Stable Recovery of Sparse Signals and an Oracle Inequality by Tony Cai, Lie Wang and Guangwu Xu. The abstract reads:
This article considers sparse signal recovery in the presence of noise. A mutual incoherence condition which was previously used for exact recovery in the noiseless case is shown to be sufficient for stable recovery in the noisy case. Both bounded noise and Gaussian noise settings are considered. Furthermore, the condition is shown to be sharp. In addition, an oracle inequality is given under the mutual incoherence condition.
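Unlike restricted isometry constants, the mutual coherence used in this last paper is cheap to compute. A small helper (my own, for illustration) that evaluates the coherence mu(A); the classical Donoho-Elad condition guarantees exact recovery of k-sparse signals whenever k < (1 + 1/mu)/2:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute normalized inner product between distinct columns."""
    An = A / np.linalg.norm(A, axis=0)   # unit-norm columns
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)             # ignore self inner products
    return float(G.max())

# an orthonormal basis is perfectly incoherent ...
print(mutual_coherence(np.eye(8)))       # 0.0
# ... while a fat random matrix is not
rng = np.random.default_rng(0)
mu = mutual_coherence(rng.normal(size=(32, 128)))
print(mu, "-> guaranteed sparsity level:", (1 + 1 / mu) / 2)
```

Coherence-based guarantees are pessimistic compared to RIP-based ones, but they have the practical advantage of being verifiable on any given matrix in O(mn^2) time.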
My webcrawler found the following: Compressive sensing for feedback reduction in MIMO broadcast channels by Syed T. Qaseem, Tareq Y. Al-Naffouri and Samir Alghadhban. The abstract reads:
We propose a generic feedback channel model and a compressive sensing based opportunistic feedback protocol for feedback resource (channel) reduction in MIMO broadcast channels, under the assumption that both feedback and downlink channels are noisy and undergo block Rayleigh fading. The feedback resources are shared and are opportunistically accessed by users who are strong (users above a certain fixed threshold). Strong users send the same feedback information on all shared channels and are identified by the base station via compressive sensing. The proposed protocol is shown to achieve the same sum-rate throughput as that achieved by dedicated feedback schemes, but with feedback channels growing only logarithmically with the number of users.



The following paper lies behind a paywall: Point spread function engineering for iris recognition system design by Amit Ashok and Mark A. Neifeld. The abstract reads:

Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8×8 reduces the iris-recognition performance by nearly a factor of 4 (on CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask in conjunction with multiple subpixel shifted image measurements (frames) to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames the optimized ZPEL imager achieves a FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields a FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.

I also found: Fast orthogonal sparse approximation algorithms over local dictionaries by Boris Mailhé, Rémi Gribonval, Pierre Vandergheynst, Frédéric Bimbot. The abstract reads:
In this work we present a new greedy algorithm for sparse approximation called LocOMP. LocOMP is meant to be run on local dictionaries made of atoms with much shorter supports than the signal length. This notably encompasses shift-invariant dictionaries and time-frequency dictionaries, be they monoscale or multiscale. In this case, very fast implementations of Matching Pursuit are already available. LocOMP is almost as fast as Matching Pursuit while approaching the signal almost as well as the much slower Orthogonal Matching Pursuit.
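For context on what LocOMP speeds up, here is plain Orthogonal Matching Pursuit in a few lines (my sketch, not the authors' LocOMP code): the expensive part is the least-squares re-fit over the whole selected support at every step, which LocOMP restricts to a neighborhood of the newly selected atom when the dictionary atoms have short supports.

```python
import numpy as np

def omp(A, b, s):
    """Orthogonal Matching Pursuit: greedy selection + full least-squares re-fit."""
    r, idx = b.copy(), []
    z = np.zeros(0)
    for _ in range(s):
        idx.append(int(np.argmax(np.abs(A.T @ r))))        # best-matching atom
        z, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)  # re-fit all selected atoms
        r = b - A[:, idx] @ z                              # update residual
    x = np.zeros(A.shape[1])
    x[idx] = z
    return x

rng = np.random.default_rng(2)
m, n, s = 40, 64, 3
x0 = np.zeros(n)
x0[[5, 20, 40]] = [2.0, -1.5, 1.0]            # a 3-sparse signal
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x0, s)
print(np.linalg.norm(x_hat - x0))
```

Matching Pursuit skips the re-fit entirely and only updates the newest coefficient, which is why it is faster but approximates less well; LocOMP sits between the two.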
Following my bleg, Alessandro Foi sent me the following:
....

First of all, many thanks for keeping up the blog. It is a very valuable service to the community.

About the "Big Picture" given at http://igorcarron.googlepages.com/cs :
I have noticed that the link to our work "Compressed sensing image reconstruction via recursive spatially adaptive filtering" is broken.

Could you please fix the link pointing it to our website http://www.cs.tut.fi/~comsens/ ? ...

Which I did. If more of you find mistakes in the blog or on the attendant pages, please let me know. On Twitter, Pierre Vandergheynst reminded me that their spread spectrum technique is already included in some MRI scanners, information I added to the compressive sensing hardware page. Also on Twitter, I found the following postdoc position in Houston from Wotao Yin's Twitter stream.

March 3, 2010. Joint Postdoc Position at Rice University and University of Houston

The Department of Computational and Applied Mathematics at Rice University and the Department of Electrical and Computer Engineering at the University of Houston invite applications for a joint postdoctoral research associate position.

The initial term of appointment is one year with a possible 2nd/3rd year extension. The term of appointment will begin on or after August 1, 2010. University of Houston will be the primary host during the first year. The salary will be competitive.
Research fields

The focus of the research is on compressed sensing, machine learning, optimization, as well as their applications especially in signal processing and wireless communications.

For more information on related research, see:

* wireless.egr.uh.edu
* www.caam.rice.edu/~optimization/L1/
* dsp.rice.edu/cs

Qualification

Candidates should have completed a PhD in applied/computational mathematics, electrical engineering, statistics, operations research, or a related field no earlier than December 2006 and no later than August 2010 or the time of appointment.
City of Houston

Rice University and University of Houston are located 4 miles apart in Houston. For more information on Houston, which was recently ranked #1 city in the US to live, work, and play, see: www.kiplinger.com/features/archives/2008/05/2008-best-city-houston.html
Contact

Applicants should send a letter of application, current vita, and descriptions of research plans to both wyin AT rice.edu and zhan2 AT mail.uh.edu. At least three letters of reference will be required.

Rice University and the University of Houston are Affirmative Action, Equal Opportunity institutions.
The announcement is now listed on the Compressive Sensing Jobs page.

Also on Twitter, Suresh mentioned his recent preprint: A Unified Algorithmic Framework for Multi-Dimensional Scaling by Arvind Agarwal, Jeff M. Phillips, Suresh Venkatasubramanian. The abstract:
In this paper, we propose a unified algorithmic framework for solving many known variants of MDS. Our algorithm is a simple iterative scheme with guaranteed convergence, and is modular; by changing the internals of a single subroutine in the algorithm, we can switch cost functions and target spaces easily. In addition to the formal guarantees of convergence, our algorithms are accurate; in most cases, they converge to better quality solutions than existing methods, in comparable time. We expect that this framework will be useful for a number of MDS variants that have not yet been studied.
Our framework extends to embedding high-dimensional points lying on a sphere to points on a lower dimensional sphere, preserving geodesic distances. As a complement to this result, we also extend the Johnson-Lindenstrauss Lemma to this spherical setting, where projecting to a random O((1/\epsilon^2) log n)-dimensional sphere causes \epsilon-distortion.
The code is here.
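The Euclidean Johnson-Lindenstrauss statement that their spherical result extends is easy to demo (my own sketch, with arbitrary dimensions): a random Gaussian projection nearly preserves all pairwise distances between a point set.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

d, k, n = 1000, 400, 20                      # ambient dim, target dim, number of points
X = rng.normal(size=(n, d))                  # arbitrary point set in R^d
G = rng.normal(size=(k, d)) / np.sqrt(k)     # scaled Gaussian JL projection
Y = X @ G.T                                  # projected points in R^k

# ratio of projected to original distance for every pair; all should be near 1
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i, j in combinations(range(n), 2)]
print(min(ratios), max(ratios))
```

Shrinking k tightens the budget and widens the spread of the ratios, in line with the O((1/\epsilon^2) log n) trade-off in the abstract.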
That is all for today!


Credit: NASA/JPL/Space Science Institute. This raw, unprocessed image of Helene was taken by Cassini on March 3, 2010. The image was taken in visible light with the Cassini spacecraft narrow-angle camera. The view was acquired at a distance of approximately 19,000 kilometers (12,000 miles) from Helene. Image scale is 113 meters (370 feet) per pixel.

2 comments:

Anonymous said...

i think you got the link on the bio of the author @ geomblog suresh's paper with arvind agarwal wrong. it should be this:

Arvind Agarwal

http://www.cs.utah.edu/~arvind/

Igor said...

Fixed. Thanks!

Igor.
