
Friday, July 31, 2009

CS: calendar, CS Block Map-LMS Adaptive Filter, Democracy in Action, and more.


Thomas Strohmer sent me an e-mail mentioning the following:
Maybe you want to include in the calendar of events that a few compressed sensing sessions will take place in connection with the SPIE Wavelets conference, see this link: http://spie.org//app/program/index.cfm?fuseaction=conferencedetail&conference=7446&jsenabled=1

I'll add those shortly to the Compressive Sensing Calendar. Thank you, Thomas!

Hadi Zayyani asked me to host his paper on my site, which is something I don't do often:

Compressed Sensing Block Map-LMS Adaptive Filter for Sparse Channel Estimation and a Bayesian Cramer-Rao Bound by Hadi Zayyani, Massoud Babaie-Zadeh and Christian Jutten. The abstract reads:
This paper suggests using a Block MAP-LMS (BMAP-LMS) adaptive filter instead of the adaptive filter called MAP-LMS for estimating sparse channels. Beyond converging faster than MAP-LMS, this block-based adaptive filter enables us to use a compressed sensing version of it which exploits the sparsity of the channel outputs to reduce the sampling rate of the received signal and to alleviate the complexity of the BMAP-LMS. Our simulations show that our proposed algorithm has faster convergence and a lower final MSE than MAP-LMS, while it is more complex than MAP-LMS. Moreover, some lower bounds for sparse channel estimation are discussed. Specifically, a Cramer-Rao bound and a Bayesian Cramer-Rao bound are also calculated.
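For readers unfamiliar with LMS-style channel estimation, here is a minimal sketch of a plain (non-block, non-MAP) LMS adaptive filter estimating a sparse FIR channel from input/output samples. It is only a baseline for context, not the authors' BMAP-LMS algorithm; the channel length, step size and signal model are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse FIR channel: length 64, only 4 nonzero taps (illustrative values).
L = 64
h_true = np.zeros(L)
h_true[rng.choice(L, 4, replace=False)] = rng.standard_normal(4)

# Transmitted symbols and noisy received signal.
n_samples = 5000
x = rng.choice([-1.0, 1.0], size=n_samples)                 # BPSK input
y = np.convolve(x, h_true)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

# Plain LMS: update the channel estimate from the instantaneous error.
mu = 0.01                                                   # step size (illustrative)
h_hat = np.zeros(L)
for n in range(L, n_samples):
    x_win = x[n - L + 1:n + 1][::-1]                        # most recent L inputs, newest first
    e = y[n] - h_hat @ x_win                                # prediction error
    h_hat += mu * e * x_win                                 # LMS gradient step

print("relative estimation error:", np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true))
```

The point of the paper is precisely to improve on this kind of tap-by-tap update by working block-wise and exploiting the sparsity of the channel through compressed measurements.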

In light of this hosting, here is a paper with a quite fitting title (from the Rice repository)

Democracy in Action: Quantization, Saturation, and Compressive Sensing by Jason Laska, Petros Boufounos, Mark Davenport and Richard Baraniuk. The abstract reads:
Recent theoretical developments in the area of compressive sensing (CS) have the potential to significantly extend the capabilities of digital data acquisition systems such as analog-to-digital converters and digital imagers in certain applications. The key hallmark of CS that has been the focus of the community so far is the fact that CS enables sub-Nyquist sampling for signals, images, and other data that have a sparse representation in some basis. In this paper, we explore and exploit another heretofore relatively unexplored hallmark, the fact that certain CS measurement systems are democratic, which means that each measurement carries roughly the same amount of information about the signal being acquired. Using the democracy property, we re-think how to quantize the compressive measurements in practical CS systems. If we were to apply the conventional wisdom gained from conventional Shannon-Nyquist uniform sampling, then we would scale down the analog signal amplitude (and therefore increase the quantization error) to avoid the gross saturation errors that occur when the signal amplitude exceeds the quantizer’s dynamic range. In stark contrast, we demonstrate that a CS system achieves the best performance when we operate at a significantly nonzero saturation rate. We develop two methods to recover signals from saturated CS measurements. The first directly exploits the democracy property by simply discarding the saturated measurements. The second integrates saturated measurements as constraints into standard linear programming and greedy recovery techniques. Finally, we develop a simple automatic gain control system that uses the saturation rate to optimize the input gain.
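The first recovery strategy mentioned in the abstract, discarding the saturated measurements and reconstructing from the rest, is easy to prototype. Below is a minimal sketch (not the authors' code): a sparse signal is measured with a Gaussian matrix, the measurements are clipped by a saturating quantizer, the saturated rows are dropped, and the signal is recovered by basis pursuit posed as a linear program. The saturation level and problem sizes are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, M, K = 128, 80, 6                  # signal length, measurements, sparsity
sat = 0.3                             # quantizer saturation level (illustrative)

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x
y_clipped = np.clip(y, -sat, sat)     # saturating quantizer

# Keep only the unsaturated measurements.
keep = np.abs(y) < sat
A_k, y_k = A[keep], y_clipped[keep]

# Basis pursuit: min ||x||_1 s.t. A_k x = y_k, as an LP in (u, v) with x = u - v.
m, n = A_k.shape
c = np.ones(2 * n)
A_eq = np.hstack([A_k, -A_k])
res = linprog(c, A_eq=A_eq, b_eq=y_k, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("saturated measurements dropped:", M - keep.sum())
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```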


Also found on the interwebs, some of these items are a little "old" but I don't think I covered them before:

A Fast Posterior Update for Sparse Underdetermined Linear Models by Lee Potter, Phil Schniter, and Justin Ziniel. The abstract reads:
A Bayesian approach is adopted for linear regression, and a fast algorithm is given for updating posterior probabilities. Emphasis is given to the underdetermined and sparse case, i.e., fewer observations than regression coefficients and the belief that only a few regression coefficients are non-zero. The fast update allows for a low-complexity method of reporting a set of models with high posterior probability and their exact posterior odds. As a byproduct, this Bayesian model averaged approach yields the minimum mean squared error estimate of unknown coefficients. Algorithm complexity is linear in the number of unknown coefficients, the number of observations and the number of nonzero coefficients. For the case in which hyperparameters are unknown, a maximum likelihood estimate is found by a generalized expectation maximization algorithm.

A Sparsity Detection Framework for On-Off Random Access Channels by Alyson Fletcher, Sundeep Rangan, Vivek Goyal. The abstract reads:
This paper considers a simple on–off random multiple access channel (MAC), where n users communicate simultaneously to a single receiver. Each user is assigned a single codeword which it transmits with some probability \lambda over m degrees of freedom. The receiver must detect which users transmitted. We show that detection for this random MAC is mathematically equivalent to a standard sparsity detection problem. Using new results in sparse estimation we are able to estimate the capacity of these channels and compare the achieved performance of various detection algorithms. The analysis provides insight into the roles of power control and multi-user detection.
Found on the Arxiv site:

Distributed MIMO radar using compressive sampling
by Yao Yu, Athina Petropulu and H. Vincent Poor. The abstract:
A distributed MIMO radar is considered, in which the transmit and receive antennas belong to nodes of a small scale wireless network. The transmit waveforms could be uncorrelated, or correlated in order to achieve a desirable beampattern. The concept of compressive sampling is employed at the receive nodes in order to perform direction of arrival (DOA) estimation. According to the theory of compressive sampling, a signal that is sparse in some domain can be recovered based on far fewer samples than required by the Nyquist sampling theorem. The DOAs of targets form a sparse vector in the angle space, and therefore, compressive sampling can be applied for DOA estimation. The proposed approach achieves the superior resolution of MIMO radar with far fewer samples than other approaches. This is particularly useful in a distributed scenario, in which the results at each receive node need to be transmitted to a fusion center.
Presentations also found include:
Finally, Laurent Jacques mentions on his site that his recent paper entitled "Dequantizing Compressed Sensing with Non-Gaussian Constraints" co-written with D. K. Hammond and M. J. Fadili has been (slightly) updated.


Image Credit: NASA/JPL/Space Science Institute, image of Saturn taken on July 23 from Cassini.

Thursday, July 30, 2009

CS: Bring out the popcorn, MLSS09 videos are out.

Bring out the popcorn because if you thought you'd have a dull summer, the organizers of the Machine Learning Summer School on Theory and Practice of Computational Learning, MLSS09, decided to steer you away from boredom by getting most of the talks/tutorials on video (big kudos to Mikhail Belkin, Partha Niyogi, Steve Smale). The site of the conference is here. The first video features a tutorial of 3 hours (cut in three parts) by Emmanuel Candes on An Overview of Compressed Sensing and Sparse Signal Recovery via L1 Minimization (part 2 of the tutorial is the Compressed Sensing presentation).

Description
In many applications, one often has fewer equations than unknowns. While this seems hopeless, the premise that the object we wish to recover is sparse or nearly sparse radically changes the problem, making the search for solutions feasible. This lecture will introduce sparsity as a key modeling tool together with a series of little miracles touching on many areas of data processing. These examples show that finding *that* solution to an underdetermined system of linear equations with minimum L1 norm often returns the ''right'' answer. Further, there is by now a well-established body of work going by the name of compressed sensing, which asserts that one can exploit sparsity or compressibility when acquiring signals of general interest, and that one can design nonadaptive sampling techniques that condense the information in a compressible signal into a small amount of data - in fewer data points than were thought necessary. We will survey some of these theories and trace back some of their origins to early work done in the 50's. Because these theories are broadly applicable in nature, the tutorial will move through several application areas that may be impacted such as signal processing, bio-medical imaging, machine learning and so on. Finally, we will discuss how these theories and methods have far reaching implications for sensor design and other types of designs.

Of related interest, two tutorials that are also about 3 hours long each, and then several 1-hour presentations:


All of them are featured on the Compressive Sensing Videos page.

Image Credit: Thierry Legault. Stunning photo of the space shuttle Endeavour docked with the International Space Station crossing the face of the sun. Via Wired.

Tuesday, July 28, 2009

CS: Seismic sampling, Compressive Sensing for MIMO Radar


Felix Herrmann has two new presentations out on the work performed at his lab at UBC:

There is also a new version of A fast algorithm for computing minimal-norm solutions to underdetermined systems of linear equations by Mark Tygert.

Finally, I just found this paper on arxiv: Compressive Sensing for MIMO Radar by Yao Yu, Athina Petropulu, H. Vincent Poor. The abstract reads:
Multiple-input multiple-output (MIMO) radar systems have been shown to achieve superior resolution as compared to traditional radar systems with the same number of transmit and receive antennas. This paper considers a distributed MIMO radar scenario, in which each transmit element is a node in a wireless network, and investigates the use of compressive sampling for direction-of-arrival (DOA) estimation. According to the theory of compressive sampling, a signal that is sparse in some domain can be recovered based on far fewer samples than required by the Nyquist sampling theorem. The DOAs of targets form a sparse vector in the angle space, and therefore, compressive sampling can be applied for DOA estimation. The proposed approach achieves the superior resolution of MIMO radar with far fewer samples than other approaches. This is particularly useful in a distributed scenario, in which the results at each receive node need to be transmitted to a fusion center for further processing.

Monday, July 27, 2009

CS: CS for radio interferometry, Probe Design for Compressive Sensing DNA Microarrays, Search for IMRT inverse plans using CS, Benchmark trial


Yves Wiaux let me know that I previously mentioned the conference paper whereas what follows is the paper submission. Here it is: Compressed sensing for radio interferometry: spread spectrum imaging techniques by Yves Wiaux, Gilles Puy, Yannick Boursier and Pierre Vandergheynst.

We consider the probe of astrophysical signals through radio interferometers with small field of view and baselines with non-negligible and constant component in the pointing direction. In this context, the visibilities measured essentially identify with a noisy and incomplete Fourier coverage of the product of the planar signals with a linear chirp modulation. In light of the recent theory of compressed sensing and in the perspective of defining the best possible imaging techniques for sparse signals, we analyze the related spread spectrum phenomenon and suggest its universality relative to the sparsity dictionary. Our results rely both on theoretical considerations related to the mutual coherence between the sparsity and sensing dictionaries, as well as on numerical simulations.


For some reason, I never mentioned that paper but here it is: Probe Design for Compressive Sensing DNA Microarrays by Wei Dai, Mona A. Sheikh, Olgica Milenkovic, and Richard G. Baraniuk. The abstract reads:
Compressive Sensing Microarrays (CSM) are DNA based sensors that operate using group testing and compressive sensing (CS) principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM each sensor responds to a group of targets. We study the problem of designing CS probes that simultaneously account for both the constraints from group testing theory and the biochemistry of probe-target DNA hybridization. Our results show that, in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, experiments show that out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications for which only short hybridization times are allowed.


I could not find the original paper, but it showed up on my radar screen. Here is the abstract:

Search for IMRT inverse plans with piecewise constant fluence maps using compressed sensing techniques by Lei Zhu and Lei Xing. The abstract reads:
An intensity-modulated radiation therapy (IMRT) field is composed of a series of segmented beams. It is practically important to reduce the number of segments while maintaining the conformality of the final dose distribution. In this article, the authors quantify the complexity of an IMRT fluence map by introducing the concept of sparsity of fluence maps and formulate the inverse planning problem into a framework of compressed sensing. In this approach, the treatment planning is modeled as a multiobjective optimization problem, with one objective on the dose performance and the other on the sparsity of the resultant fluence maps. A Pareto frontier is calculated, and the achieved dose distributions associated with the Pareto efficient points are evaluated using clinical acceptance criteria. The clinically acceptable dose distribution with the smallest number of segments is chosen as the final solution. The method is demonstrated in the application of fixed-gantry IMRT on a prostate patient. The result shows that the total number of segments is greatly reduced while a satisfactory dose distribution is still achieved. With the focus on the sparsity of the optimal solution, the proposed method is distinct from the existing beamlet- or segment-based optimization algorithms.

This is outside of CS but a nonetheless interesting application of benchmarks. One could also argue that, since CS is a dimensionality reduction technique, it is a good fit:

Does Dimensionality Reduction Improve the Quality of Motion Interpolation? by Sebastian Bitzer, Stefan Klanke and Sethu Vijayakumar . The abstract reads:

In recent years nonlinear dimensionality reduction has frequently been suggested for the modelling of high-dimensional motion data. While it is intuitively plausible to use dimensionality reduction to recover low dimensional manifolds which compactly represent a given set of movements, there is a lack of critical investigation into the quality of resulting representations, in particular with respect to generalisability. Furthermore it is unclear how consistently particular methods can achieve good results. Here we use a set of robotic motion data for which we know the ground truth to evaluate a range of nonlinear dimensionality reduction methods with respect to the quality of motion interpolation. We show that results are extremely sensitive to parameter settings and data set used, but that dimensionality reduction can potentially improve the quality of linear motion interpolation, in particular in the presence of noise.

Credit: NASA, Lunar Eclipse of July 22 over China.

Friday, July 24, 2009

CS: Q/A, Espace Vide, Recovering Signals from Lowpass Data, Sampling with Quasicrystals, a workshop

Here is what I received in my mailbox yesterday:

Dear Dr. Carron,

I have been closely following your webpages on Compressed Sensing....

I have tried the L1-magic toolkit for CS reconstruction of images using Total Variation Quadratic Constraints (TVQC), as this reconstruction is typically better compared to other techniques. The toolbox uses Scrambled Fourier matrices for measurements and it is non-trivial to replace the measurement matrices with something else like wavelets etc. I would like to know if you could recommend a toolbox other than L1-magic that has solvers like TVQC and could easily be extended to include different measurement matrices.

Also, I would like to know if you could share some code that could be used to extract joint sparsity between two correlated signals measured using the Distributed Compressed Sensing framework.


Here is what I responded:

I am glad you find the pages and the blog useful. With regards to TV techniques, there are indeed other solvers using TV and most of them show up when doing a simple search on the blog

http://nuit-blanche.blogspot.com/search?q=TV


With regards to your difficulty in using a different measurement matrix, I think you ought to look at the SPARCO Toolbox and specifically use the very handy measurement matrix framework used there. In particular, you may want to reuse the scripts of problems 501 and above:

http://www.cs.ubc.ca/labs/scl/sparco/index.php/Main/Problems


All the codes I am listing are on the blog or in the attendant more static pages (the Big Picture in Compressive Sensing or the Local Codes).


While SPARCO is good ground for benchmarks, it is also a good guide for newcomers to see how some of the measurement matrices are set up without having to reinvent the wheel.
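SPARCO itself is MATLAB, but the operator idea it uses, representing a measurement "matrix" by a forward/adjoint function pair rather than an explicit array, is easy to mimic in any language. Here is a hypothetical Python analogue of that idea (the function names and the subsampled-DCT choice are mine, not SPARCO's):

```python
import numpy as np
from scipy.fft import dct, idct

def subsampled_dct_operator(n, rows, rng=None):
    """Return forward/adjoint closures for a measurement operator that keeps
    a random subset of `rows` rows of an orthonormal DCT.  No explicit matrix
    is ever formed, which is the point of operator-style toolboxes."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(n, size=rows, replace=False)

    def forward(x):                      # y = R * DCT * x
        return dct(x, norm="ortho")[idx]

    def adjoint(y):                      # x = DCT^T * R^T * y
        full = np.zeros(n)
        full[idx] = y
        return idct(full, norm="ortho")

    return forward, adjoint

# Usage: plug these closures into any solver written against A(x) / A^T(y).
A, At = subsampled_dct_operator(n=256, rows=64, rng=np.random.default_rng(2))
x = np.random.default_rng(3).standard_normal(256)
y = A(x)
print(y.shape, At(y).shape)
```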


Eric Tramel just created a blog called Espace Vide. Entries related to Compressive Sensing are here at:


The title translates from French as "void space", but the blog already has two entries on CS:
It does not look void to me :-) Welcome to the blogosphere Eric!

Laurent Duval pointed out to me the following two papers of interest:

Recovering Signals from Lowpass Data by Yonina Eldar and Volker Pohl. The abstract reads:
The problem of recovering a signal from its low frequency components occurs often in practical applications due to the lowpass behavior of many physical systems. Here we study in detail conditions under which a signal can be determined from its low-frequency content. We focus on signals in shift-invariant spaces generated by multiple generators. For these signals, we derive necessary conditions on the cutoff frequency of the lowpass filter as well as necessary and sufficient conditions on the generators such that signal recovery is possible. When the lowpass content is not sufficient to determine the signal, we propose appropriate pre-processing that can improve the reconstruction ability. In particular, we show that modulating the signal with one or more mixing functions prior to lowpass filtering, can ensure the recovery of the signal in many cases, and reduces the necessary bandwidth of the filter.


Image Sampling with Quasicrystals by Mark Grundland, Jiri Patera, Zuzana Masakova, Neil A. Dodgson. The abstract reads:

We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring the sample sites are evenly spread out throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.
We talked about quasicrystals before. However, in this paper the quasicrystal sampling is performed in the image space, whereas in Yves Meyer's work ( Compressed sensing and transference and A variant of compressed sensing ) the sampling is performed in the Fourier space and the reconstruction is performed using a Linear Programming step.


Ron DeVore, Massimo Fornasier and Holger Rauhut are organizing a workshop entitled Sparsity and Computation at the Hausdorff Center for Mathematics in Bonn on June 7-11, 2010. The website:


Thursday, July 23, 2009

CS: Various thresholds for $\ell_1$-optimization in CS, Radial K-t FOCUSS MRI, BBCS algorithm, block-sparse CS, Non Convex CS, OCT

Early this week, a reader of this blog sent me the following:
Dear Igor,

Recently, I encountered one problem about CS. Some guy asked me if the measurement matrix \Phi should be normalized, I said yes. But the interesting thing is that when I came back re-running the program without normalizing of \Phi, it did work!

In this specific unnormalized case, the RIC of \Phi is definitely larger than 1, why does it work then ?

I am a little confused about this, could you provide me some clues? ....

To what I responded
I have seen this too. However, if you take different matrices \phi, it will work less often without the normalization than with it. Recall also that the RIP is a sufficient condition (not a necessary one), i.e. there are other matrices that do not satisfy the RIP yet allow for recovery using LP techniques.
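To make the normalization point concrete, here is a tiny sketch (illustrative, not from any toolbox) of what "normalizing \Phi" usually means, scaling each column to unit l2 norm, and how to map coefficients back to the original, unnormalized matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
Phi = rng.standard_normal((50, 200))        # unnormalized measurement matrix

# Scale every column of Phi to unit l2 norm.
col_norms = np.linalg.norm(Phi, axis=0)
Phi_n = Phi / col_norms
print(np.allclose(np.linalg.norm(Phi_n, axis=0), 1.0))

# If a solver returns coefficients z for the normalized matrix (Phi_n z = y),
# then Phi x = Phi_n (col_norms * x), so the coefficients for the original
# matrix are recovered by rescaling:  x = z / col_norms
```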
I realize that this may be a detail to some of you, but this is an issue that comes back often. The situation is further compounded by the overuse of the Restricted Isometry Property argument in different engineering papers. A necessary and sufficient condition such as the null space property can be computationally checked for a specific measurement matrix, and this should open the door to other types of investigation. I take as an example the paper featured yesterday by David Donoho, Arian Maleki, and Andrea Montanari entitled Message passing algorithms for compressed sensing, or the following new paper Various thresholds for $\ell_1$-optimization in compressed sensing by Mihailo Stojnic. The abstract reads:
Recently, \cite{CRT,DonohoPol} theoretically analyzed the success of a polynomial $\ell_1$-optimization algorithm in solving an under-determined system of linear equations. In a large dimensional and statistical context \cite{CRT,DonohoPol} proved that if the number of equations (measurements in the compressed sensing terminology) in the system is proportional to the length of the unknown vector then there is a sparsity (number of non-zero elements of the unknown vector) also proportional to the length of the unknown vector such that $\ell_1$-optimization succeeds in solving the system. In this paper, we provide an alternative performance analysis of $\ell_1$-optimization and obtain the proportionality constants that in certain cases match or improve on the best currently known ones from \cite{DonohoPol,DT}.
A must read, as the techniques developed in this paper have the potential to be applied to the compressible case and the nonconvex l_q optimization case, unlike the work of David Donoho and Jared Tanner for instance (see Phase transitions phenomenon in Compressed Sensing).

In a different direction, Jong Chul Ye let me know that the following paper has been accepted: Radial k-t FOCUSS for High-Resolution Cardiac Cine Magnetic Resonance Imaging by Hong Jung, Jaeseok Park, Jaeheung Yoo, and Jong Chul Ye. The abstract reads:

A compressed sensing dynamic magnetic resonance (MR) technique called k-t FOCUSS has been recently proposed. It outperforms the conventional k-t BLAST/SENSE technique by exploiting the sparsity of x-f signals. This paper applies this idea to radial trajectories for high-resolution cardiac cine imaging. Radial trajectories are more suitable for high resolution dynamic MR imaging than Cartesian trajectories since there is a smaller trade-off between spatial resolution and number of views if streaking artifacts due to limited views can be resolved. As shown for Cartesian trajectories, the k-t FOCUSS algorithm efficiently removes artifacts while preserving high temporal resolution. The k-t FOCUSS algorithm applied to radial trajectories is expected to enhance dynamic MR imaging quality. Rather than using an explicit gridding method, which transforms radial k-space sampling data to a Cartesian grid prior to applying k-t FOCUSS algorithms, we use implicit gridding during FOCUSS iterations to prevent k-space sampling errors from being propagated. In addition, motion estimation and motion compensation (ME/MC) after the first FOCUSS iteration were used to further sparsify the residual image. By applying an additional k-t FOCUSS step to the residual image, improved resolution was achieved. In vivo experimental results show that the new method can provide high spatio-temporal resolution even from a very limited radial data set.

But more importantly, Jong also mentioned
Also, we have recently opened a website for our compressed sensing dynamic MRI, from which people can download source code.

Thank you Jong.

We also have a new reconstruction algorithm, the BBCS algorithm, that seems to be as fast as GPSR. It is described in A Barzilai-Borwein $l_1$-Regularized Least Squares Algorithm for Compressed Sensing by Robert Broughton, Ian Coope, Peter Renaud, Rachael Tappenden. The abstract reads:

Problems in signal processing and medical imaging often lead to calculating sparse solutions to under-determined linear systems. Methodologies for solving this problem are presented as background to the method used in this work where the problem is reformulated as an unconstrained convex optimization problem. The least squares approach is modified by an $l_1$-regularization term. A sparse solution is sought using a Barzilai-Borwein type projection algorithm with an adaptive step length. New insight into the choice of step length is provided through a study of the special structure of the underlying problem. Numerical experiments are conducted and results given, comparing this algorithm with a number of other current algorithms.
[Update: Rachael tells me the code will be up in a month or so ]
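While we wait for the authors' code, here is a rough sketch of the general idea: iterative soft thresholding for the $l_1$-regularized least squares problem, with the gradient step length chosen by the Barzilai-Borwein rule. This is my own simplification under illustrative parameters (including the step-length clipping), not the BBCS algorithm from the paper.

```python
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def bb_ista(A, y, lam, iters=200):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 with soft-thresholded gradient
    steps whose length follows the Barzilai-Borwein rule."""
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - y)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2            # safe first step length
    for _ in range(iters):
        x_new = soft(x - alpha * g, alpha * lam)
        g_new = A.T @ (A @ x_new - y)
        s, d = x_new - x, g_new - g
        denom = s @ d
        if denom > 1e-12:
            alpha = np.clip((s @ s) / denom, 1e-4, 1e2)  # BB step, clipped for safety
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 256)) / np.sqrt(60)
x0 = np.zeros(256); x0[rng.choice(256, 8, replace=False)] = 1.0
y = A @ x0 + 0.01 * rng.standard_normal(60)
x_hat = bb_ista(A, y, lam=0.02)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```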

Finally, we have another paper by Mihailo Stojnic entitled Block-length dependent thresholds in block-sparse compressed sensing. The abstract reads:
One of the most basic problems in compressed sensing is solving an under-determined system of linear equations. Although this problem seems rather hard certain $\ell_1$-optimization algorithm appears to be very successful in solving it. The recent work of \cite{CRT,DonohoPol} rigorously proved (in a large dimensional and statistical context) that if the number of equations (measurements in the compressed sensing terminology) in the system is proportional to the length of the unknown vector then there is a sparsity (number of non-zero elements of the unknown vector) also proportional to the length of the unknown vector such that $\ell_1$-optimization algorithm succeeds in solving the system. In more recent papers \cite{StojnicICASSP09block,StojnicJSTSP09} we considered the setup of the so-called \textbf{block}-sparse unknown vectors. In a large dimensional and statistical context, we determined sharp lower bounds on the values of allowable sparsity for any given number (proportional to the length of the unknown vector) of equations such that an $\ell_2/\ell_1$-optimization algorithm succeeds in solving the system. The results established in \cite{StojnicICASSP09block,StojnicJSTSP09} assumed a fairly large block-length of the block-sparse vectors. In this paper we consider the block-length to be a parameter of the system. Consequently, we then establish sharp lower bounds on the values of the allowable block-sparsity as functions of the block-length.

Angshul Majumdar released a Matlab toolbox entitled Non Convex Compressed Sensing for Non Gaussian Noise, where one solves an optimization of the form min ||x||_p subject to ||y - Ax||_q ≤ ε.
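As a rough illustration of what an l_p (p < 1) penalty does, here is a small iteratively-reweighted least-squares sketch for the unconstrained form min ||y - Ax||_2^2 + lam * sum |x_i|^p. This is a generic IRLS scheme with made-up parameters, not Angshul's toolbox and not its exact (constrained, l_q-fidelity) formulation.

```python
import numpy as np

def irls_lp(A, y, p=0.5, lam=1e-3, eps=1e-6, iters=50):
    """Iteratively reweighted least squares for min ||y - A x||^2 + lam*sum|x_i|^p.
    Each iteration solves a weighted ridge problem with weights computed from the
    previous iterate; eps keeps the weights finite near zero."""
    x = A.T @ y
    for _ in range(iters):
        w = p * (x ** 2 + eps) ** (p / 2 - 1)          # reweighting (up to a constant factor)
        # Solve (A^T A + lam * diag(w)) x = A^T y
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((40, 128)) / np.sqrt(40)
x0 = np.zeros(128); x0[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
y = A @ x0 + 0.01 * rng.standard_normal(40)
print("relative error:", np.linalg.norm(irls_lp(A, y) - x0) / np.linalg.norm(x0))
```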

Finally, Sean O'Connor let me know that he has implemented an algorithm by the name of OCT:

The n-point Walsh Hadamard transform only requires n*LogBase2(n) additions and subtractions. Since it only uses addition and subtraction, and since the basis vectors are orthogonal, the central limit theorem applies to all its outputs.
You can combine it with random permutations and random sign flipping to do random projections very quickly.

http://code.google.com/p/lemontree/downloads/list
Thanks Sean!
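Here is a rough sketch of the trick Sean describes: apply random sign flips and a random permutation, run a fast Walsh-Hadamard transform, and keep a random subset of the outputs. This is my own minimal version for illustration, not the OCT code from the link above.

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform, O(n log n); n must be a power of 2."""
    a = a.astype(float).copy()
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def fast_random_projection(x, m, rng):
    """Project x (length a power of 2) down to m coordinates using
    random sign flips + a random permutation + WHT + random row selection."""
    n = len(x)
    signs = rng.choice([-1.0, 1.0], size=n)
    perm = rng.permutation(n)
    z = fwht(signs * x[perm]) / np.sqrt(n)        # orthonormal scaling
    return z[rng.choice(n, size=m, replace=False)]

rng = np.random.default_rng(7)
x = rng.standard_normal(1024)
y = fast_random_projection(x, 64, rng)
print(y.shape)
```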

Wednesday, July 22, 2009

CS: Message passing algo for CS, Finite Rate of Innovation, General Deviants, Muthu's presentations, Herschel news ?


Today, we are starting with a new Belief Propagation nonlinear solver. It is presented in Message passing algorithms for compressed sensing by David Donoho, Arian Maleki, and Andrea Montanari. The abstract reads:

Compressed sensing aims to undersample certain high dimensional signals, yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization – which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately, known fast algorithms offer substantially worse sparsity-undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity-undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this new, apparently very different theoretical formalism.
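For the curious, the iteration described in the abstract, iterative soft thresholding plus a correction term borrowed from belief propagation, can be sketched in a few lines. The threshold schedule and problem sizes below are illustrative choices of mine; see the paper for the actual algorithm and its state-evolution analysis.

```python
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(A, y, iters=30, alpha=1.5):
    """Approximate message passing sketch: soft thresholding with an extra
    'Onsager' term added to the residual (the costless modification that
    distinguishes it from plain iterative thresholding)."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        theta = alpha * np.linalg.norm(z) / np.sqrt(M)    # heuristic threshold
        x_new = soft(x + A.T @ z, theta)
        onsager = z * (np.count_nonzero(x_new) / M)       # correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(8)
N, M, K = 400, 160, 15
A = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N); x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x0
print("relative error:", np.linalg.norm(amp(A, y) - x0) / np.linalg.norm(x0))
```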

We describe a property satisfied by a class of nonlinear systems of equations that are of the form $\F(\Omega)\X=\Y$. Here $\F(\Omega)$ is a matrix that depends on an unknown $K$-dimensional vector $\Omega$, $\X$ is an unknown $K$-dimensional vector and $\Y$ is a vector of $N$ ($\ge K$) given measurements. Such equations are encountered in superresolution or sparse signal recovery problems known as ``Finite Rate of Innovation'' signal reconstruction.

We show how this property allows one to solve explicitly for the unknowns $\Omega$ and $\X$ by a direct, non-iterative algorithm that involves the resolution of two linear systems of equations and the extraction of the roots of a polynomial, and give examples of problems where this type of solution has been found useful.

In a previous entry, I introduced General Deviants: An Analysis of Perturbations in Compressed Sensing by Matthew A. Herman and Thomas Strohmer. This paper has gone through a major revision.

Muthu Muthukrishnan just released his presentation slides.

Jean-Luc Starck will make a presentation entitled Compressed Sensing in Astronomy at Saclay on October 20, 2009 at 11:00 am (Bât 141, salle André Berthelot). The summary of the talk is:
We briefly review the concept of Compressed Sensing (CS), the new sampling theory, which is certainly one of the most important discoveries in data processing during the last ten years. Indeed, CS provides an alternative to the well-known Shannon sampling theory. Then we will show how some problems in astronomy such as interferometric image deconvolution or gamma-ray image reconstruction can be handled differently using CS. Finally, we will show how CS could lead to an elegant and effective way to solve the problem ESA is faced with for the transmission to the earth of the data collected by PACS, one of the instruments onboard the Herschel spacecraft, which was launched in May 2009. We show that CS enables the recovery of data with a spatial resolution enhanced by up to 30 per cent with similar sensitivity compared to the averaging technique proposed by ESA.
Shall we see the first results of the PACS Herschel compressive sensing test? We'll see.


Image Credit: NASA/JPL/Space Science Institute, Saturn as seen by Cassini on July 20, 2009.

Tuesday, July 21, 2009

CS: A postdoc at Simon Fraser University, Dictionary-based Methods (Sparse Approximation) and Audio Signals, IJCAI-09

Jie Liang just sent me the following announcement for a Postdoc position at Simon Fraser University, Vancouver, Canada. Details: A joint postdoc position is available immediately in the School of Computing Science and the School of Engineering Science, Simon Fraser University, Burnaby (metro-Vancouver area), British Columbia, Canada. Candidates should have a PhD degree (or be near completion) in a relevant field and a strong research and publication record. Areas of interest include overlay/peer-to-peer networking, wireless sensor networking, social networking, image/video coding, cooperative/joint source channel coding, distributed source coding, and compressive sensing.

The duration of the position is 12 months and may be extended an additional 12 months.

Please send your full CV in PDF format to Dr. Jiangchuan Liu (jcliu@cs.sfu.ca) and Dr. Jie Liang (jiel@sfu.ca).

Additional information can be found at


This announcement will be featured shortly on the Compressive Sensing Jobs page.



Bob Sturm will be giving a tutorial on sparse approximation and audio data at the 6th Sound and Music Computing conference in Portugal. The title of the talk is Dictionary-based Methods (Sparse Approximation) and Audio Signals. The attendant slides (with sounds embedded) can be found here. Beware: the document is 99 MB. When I last talked with Bob, we did not really have a good sense of how compressive sensing could be applied to audio. In the audio world, sensors of all kinds have been developed and recording large amounts of data is not a significant issue, as there is a very large pile of money behind it. Besides the path shown by Jort Gemmeke for missing data imputation, one wonders in what direction the audio world would allow for a niche to exist and eventually strengthen. The fact that the presentation is 99 MB is a sure sign to me that some improvement can be made. While dictionary learning is a worthy goal, I wonder if compressive sensing would not allow for other types of information to be extracted from signals. For instance, while Blind Signal Separation is an item of intense study, I have not seen anybody use voice signals to infer the physical location where the recording took place in some sort of advanced echolocation technique. Can you imagine if we could see the face of Orson Welles when he was running his War of the Worlds radio show? Some blind people do it already.



Furthermore, one wonders what the simple biological basis is for a system that allows some moths to jam some bats' radars?

On a totally different note, the IJCAI-09 papers are located here.

Monday, July 20, 2009

Another Space Odyssey

2001: A Space Odyssey featured a Moon landing on a Moon base. In the real world, the first landing occurred 40 years ago. But we never set up a Moon base.




The movie also featured a journey from a Space Station. In the real world, our current Space Station is international but we do not use it as a way to go to the Moon.


In the movie, the main concerns were dark and foreign objects buried on the Moon and on Jupiter. In the real world, we have similar concerns. The next foreign object to hit and be buried on the Moon will be ours and is called LCROSS. A dark object of concern on Jupiter was spotted by Anthony Wesley on 19th July 2009 at 1554 UTC from Murrumbateman, Australia.


It is a concern because on July 16th, 1994, fifteen years ago to the day, the Shoemaker-Levy 9 comet started hitting Jupiter in what remains the most studied series of devastating comet impacts on a celestial body. What are the probabilities of two comets hitting Jupiter on about the same calendar day, fifteen years apart? My take is that we need to see if we have a similar issue as what happens every year in August with the Perseid showers. If this is the case, our Near Earth Object Program has not detected it.

For what it's worth, I remember where I was on the 25th anniversary of the Moon landing (July 20th, 1994), as the pilot of the KC-135 (vomit comet) commemorated that day by taking us into one more parabola in lunar gravity after a long two-hour flight made up of several weightless periods. This lunar bump was the first of a long series of lunar flights where we looked at aspects of plumbing that are still unclear to this day. Our unsophisticated approach to basic human needs in space pretty much defines for me the wide divide between our dreams and the reality of a human-led Space Odyssey.

[Update, July 21, 2009: I am not the first one to make the connection between this hit and Shoemaker-Levy 9 fifteen years ago, some folks at JPL made that connection, now we just need the people in the right office at JPL to notice]


PS: I wrote a small story about Gene Shoemaker, a scientist, who tried to be an astronaut and who later gave his name to the SL9 comet. Gene is the only person buried on the moon.

CS: MRI, Oil exploration, SLAM radio, Passive radar, OFMD, Downlink scheduling


I am doing some catch-up with the Rice Compressive Sensing repository, here are the papers I have not covered before:

Practical Nonconvex Compressive Sensing Reconstruction of Highly-Accelerated 3D Parallel MR Angiograms by Joshua Trzasko, Clifton Haider, Armando Manduca. The abstract reads:

In this work, a nonconvex Compressive Sensing model targeted at true 3D reconstructions of highly undersampled MR angiograms acquired with parallel imaging is proposed. When combined with the Max-CAPR acquisition sequence, it is demonstrated that high quality, non-view-shared, 3D images of the contrast-filled neurovasculature can be acquired (at acceleration factors exceeding the number of coils) in just over 2 seconds and reconstructed in as few as 14 minutes on a high-performance workstation.


Here is a somewhat important paper as it examines the l_0 convergence.

Relaxed Conditions for Sparse Signal Recovery with General Concave Priors by Joshua Trzasko, Armando Manduca. The abstract reads:

The emerging theory of Compressive or Compressed Sensing challenges the convention of modern digital signal processing by establishing that exact signal reconstruction is possible for many problems where the sampling rate falls well below the Nyquist limit. Following the landmark works of Candes et al. and Donoho on the performance of ℓ1-minimization models for signal reconstruction, several authors demonstrated that certain nonconvex reconstruction models consistently outperform the convex ℓ1-model in practice at very low sampling rates despite the fact that no global minimum can be theoretically guaranteed. Nevertheless, there has been little theoretical investigation into the performance of these nonconvex models. In this work, a notion of weak signal recoverability is introduced and the performance of nonconvex reconstruction models employing general concave metric priors is investigated under this model. The sufficient conditions for establishing weak signal recoverability are shown to substantially relax as the prior functional is parameterized to more closely resemble the targeted ℓ0-model, offering new insight into the empirical performance of this general class of reconstruction methods. Examples of relaxation trends are shown for several different prior models.

Here is a new way of performing a SLAM-like method using radio waves. This is interesting, but I wish there were more detail on the strategies used by the emitter and the receiver. I wonder if we could use this sparse sensing approach with a visual SLAM technique. Here it is: Compressive Cooperative Sensing and Mapping in Mobile Networks by Yasamin Mostofi and Pradeep Sen. The abstract reads:
In this paper we consider a mobile cooperative network that is tasked with building a map of the spatial variations of a parameter of interest, such as an obstacle map or an aerial map. We propose a new framework that allows the nodes to build a map of the parameter of interest with a small number of measurements. By using the recent results in the area of compressive sensing, we show how the nodes can exploit the sparse representation of the parameter of interest in the transform domain in order to build a map with minimal sensing. The proposed work allows the nodes to efficiently map the areas that are not sensed directly. To illustrate the performance of the proposed framework, we show how the nodes can build an aerial map or a map of obstacles with sparse sensing. We furthermore show how our proposed framework enables a novel non-invasive approach to mapping obstacles by using wireless channel measurements


Felix Herrmann and his group at UBC produced a new series of papers on inverse problems as found in the oil industry: Compressive imaging by wavefield inversion with group sparsity by Felix Herrmann. The abstract reads:

Migration relies on multi-dimensional correlations between source- and residual wavefields. These multi-dimensional correlations are computationally expensive because they involve operations with explicit and full matrices that contain both wavefields. By leveraging recent insights from compressive sampling, we present an alternative method where linear correlation-based imaging is replaced by imaging via multidimensional deconvolutions of compressibly sampled wavefields. Even though this approach goes at the expense of having to solve a sparsity-promotion recovery program for the image, our wavefield inversion approach has the advantage of reducing the system size in accordance to transform-domain sparsity of the image. Because seismic images also exhibit a focusing of the energy towards zero offset, the compressive-wavefield inversion itself is carried out using a recent extension of one-norm solver technology towards matrix-valued problems. These so-called hybrid (1, 2)-norm solvers allow us to penalize pre-stack energy away from zero offset while exploiting joint sparsity amongst near-offset images. Contrary to earlier work to reduce modeling and imaging costs through random phase-encoded sources, our method compressively samples wavefields in model space. This approach has several advantages amongst which improved system-size reduction, and more flexibility during subsequent inversions for subsurface properties.


Compressive-wavefield simulations by Felix Herrmann, Yogi Erlangga and Tim Lin. The abstract reads:
Full-waveform inversion’s high demand on computational resources forms, along with the non-uniqueness problem, the major impediment withstanding its widespread use on industrial-size datasets. Turning modeling and inversion into a compressive sensing problem—where simulated data are recovered from a relatively small number of independent simultaneous sources—can effectively mitigate this high-cost impediment. The key is in showing that we can design a sub-sampling operator that commutes with the time-harmonic Helmholtz system. As in compressive sensing, this leads to a reduction in simulation cost. Moreover, this reduction is commensurate with the transform-domain sparsity of the solution, implying that computational costs are no longer determined by the size of the discretization but by transform-domain sparsity of the solution of the CS problem which forms our data. The combination of this sub-sampling strategy with our recent work on implicit solvers for the Helmholtz equation provides a viable alternative to full-waveform inversion schemes based on explicit finite-difference methods.


Sub-Nyquist sampling and sparsity: getting more information from fewer samples by Felix Herrmann. The abstract reads:
Seismic exploration relies on the collection of massive data volumes that are subsequently mined for information during seismic processing. While this approach has been extremely successful in the past, the current trend of incessantly pushing for higher quality images in increasingly complicated regions of the Earth continues to reveal fundamental shortcomings in our workflows to handle massive high-dimensional data volumes. Two causes can be identified as the main culprits responsible for this barrier. First, there is the so-called “curse of dimensionality” exemplified by Nyquist’s sampling criterion, which puts disproportionate strain on current acquisition and processing systems as the size and desired resolution of our survey areas continues to increase. Secondly, there is the recent “departure from Moore’s law” that forces us to lower our expectations to compute ourselves out of this curse of dimensionality. In this paper, we offer a way out of this situation by a deliberate randomized subsampling combined with structure-exploiting transform-domain sparsity promotion. Our approach is successful because it reduces the size of seismic data volumes without loss of information. Because of this size reduction both impediments are removed and we end up with a new technology where the costs of acquisition and processing are no longer dictated by the size of the acquisition but by the transform-domain sparsity of the end-product after processing.

Higher dimensional blue-noise sampling schemes for curvelet-based seismic data recovery by Gang Tang, Reza Shahidi, Felix Herrmann, Jianwei Ma. The abstract reads:
In combination with compressive sensing, a successful reconstruction scheme called Curvelet-based Recovery by Sparsity promoting Inversion (CRSI) has been developed, and has proven to be useful for seismic data processing. One of the most important issues for CRSI is the sampling scheme, which can greatly affect the quality of reconstruction. Unlike usual regular undersampling, stochastic sampling can convert aliases to easy-to-eliminate noise. Some stochastic sampling methods have been developed for CRSI, e.g. jittered sampling, however most have only been applied to 1D sampling along a line. Seismic datasets are usually higher dimensional and very large, thus it is desirable and often necessary to develop higher dimensional sampling methods to deal with these data. For dimensions higher than one, few results have been reported, except uniform random sampling, which does not perform well. In the present paper, we explore 2D sampling methodologies for curvelet-based reconstruction, possessing sampling spectra with blue noise characteristics, such as Poisson Disk sampling, Farthest Point Sampling, and the 2D extension of jittered sampling. These sampling methods are shown to lead to better recovery and results are compared to the other more traditional sampling protocols.


Unified compressive sensing framework for simultaneous acquisition with primary estimation by Tim Lin, Felix Herrmann. The abstract reads:
The central promise of simultaneous acquisition is a vastly improved crew efficiency during acquisition at the cost of additional post-processing to obtain conventional source-separated data volumes. Using recent theories from the field of compressive sensing, we present a way to systematically model the effects of simultaneous acquisition. Our formulation forms a new framework in the study of acquisition design and naturally leads to an inversion-based approach for the separation of shot records. Furthermore, we show how other inversion-based methods, such as a recently proposed method from van Groenestijn and Verschuur (2009) for primary estimation, can be processed together with the demultiplexing problem to achieve a better result compared to a separate treatment of these problems.


Sparse Channel Estimation for Multicarrier Underwater Acoustic Communications: From Subspace Methods to Compressed Sensing by Christian R. Berger, Shengli Zhou, Peter Willett. The abstract reads:
In this paper, we present various channel estimators that exploit the channel sparsity in a multicarrier underwater acoustic system, including subspace algorithms from the array processing literature, namely root-MUSIC and ESPRIT, and recent compressed sensing algorithms in the form of Orthogonal Matching Pursuit (OMP) and Basis Pursuit (BP). Numerical simulation and experimental data of an OFDM block-by-block receiver are used to evaluate the proposed algorithms in comparison to the conventional least-squares (LS) channel estimator. We observe that subspace methods can tolerate small to moderate Doppler effects, and outperform the LS approach when the channel is indeed sparse. On the other hand, compressed sensing algorithms uniformly outperform the LS and subspace methods. Coupled with a channel equalizer mitigating intercarrier interference, the compressed sensing algorithms can handle channels with significant Doppler spread.

Sparse Channel Estimation for OFDM: Over-Complete Dictionaries and Super-Resolution Methods by Christian R. Berger, Shengli Zhou, Peter Willett. The abstract reads:
Wireless multipath channels can often be characterized as sparse, i.e., the number of significant paths is small even when the channel delay spread is large. This can be taken advantage of when estimating the unknown channel frequency response using pilot assisted modulation. Other work has largely focused on the greedy orthogonal matching pursuit (OMP) algorithm, using a dictionary based on an equivalent finite impulse response filter to model the channel. This is not necessarily realistic, as the physical nature of the channel is continuous in time, while the equivalent filter taps are based on baseband sampling. In this paper, we consider sparse channel estimation using a continuous time path-based channel model. This can be linked to the direction finding problem from the array processing literature and solved using the well-known root-MUSIC and ESPRIT algorithms, which have no formal time resolution. In addition, we show that a dictionary with finer time resolution considerably improves the performance of OMP and the related Basis Pursuit (BP) algorithm.
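Of the methods compared in these two channel-estimation papers, OMP is the simplest to write down. Here is a generic OMP sketch against an arbitrary dictionary (a plain random unit-norm dictionary below rather than a delay-grid channel dictionary); it is meant only to illustrate the greedy selection loop, not to reproduce the authors' setup.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k columns of the dictionary D
    (columns assumed unit-norm) and least-squares fit y on the selected set."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))          # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef                 # new residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(9)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
h = np.zeros(256); h[rng.choice(256, 4, replace=False)] = rng.standard_normal(4)
y = D @ h + 0.01 * rng.standard_normal(64)
print("recovered support:", np.nonzero(omp(D, y, 4))[0], "true support:", np.nonzero(h)[0])
```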


Signal Extraction Using Compressed Sensing for Passive Radar with OFDM Signals by Christian R. Berger, Shengli Zhou, Peter Willett. The abstract reads:
Passive radar is a concept where possibly multiple non-cooperative illuminators are used in a multi-static setup. A freely available signal, like radio or television, is decoded and used to identify moving airborne targets based on their Doppler shift. New digital signals, like Digital Audio/Video Broadcast (DAB/DVB), are excellent candidates for this scheme, as they are widely available, can be easily decoded, and employ orthogonal frequency division multiplex (OFDM), a multicarrier transmission scheme based on channel equalization in the frequency domain using the Fast Fourier Transform (FFT). After successfully decoding the digital broadcast, the channel estimates can be used to estimate targets’ bi-static range and range-rate by separating different multi-path components by their delay and Doppler shift. While previous schemes have simply projected available measurements onto possible Doppler shifts, we employ Compressed Sensing, a type of sparse estimation. This way we can enhance separation between targets, and by-pass additional signal processing necessary to determine the actual target within a “blotch” of signal energy smeared across different delays and Doppler frequencies.


Compressed Sensing for OFDM/MIMO Radar by Christian R. Berger, Shengli Zhou, Peter Willett, Bruno Demissie, and Jorg Heckenbach. The abstract reads:
In passive radar, two main challenges are: mitigating the direct blast, since the illuminators broadcast continuously, and achieving a large enough integration gain to detect targets. While the first has to be solved in part in the analog part of the processing chain, due to the huge difference of signal strength between the direct blast and weak target reflections, the second is about combining enough signal efficiently, while not sacrificing too much performance. When combining this setup with digital multicarrier waveforms like orthogonal frequency division multiplex (OFDM) in digital audio/video broadcast (DAB/DVB), this problem can be seen to be a version of multiple-input multiple output (MIMO) radar. We start with an existing approach, based on efficient fast Fourier transform (FFT) operation to detect target signatures, and show how this approach is related to a standard matched filter approach based on a piece-wise constant approximation of the phase rotation caused by Doppler shift. We then suggest two more applicable algorithms, one based on subspace processing and one based on sparse estimation. We compare these various approaches based on a detailed simulation scenario with two closing targets and experimental data recorded from a DAB network in Germany.

Downlink Scheduling Using Compressed Sensing by Sibi Raj Bhaskaran, Linda Davis, Alex Grant, Stephen Hanly, Paul Tune. The abstract reads:

We propose a novel access technique for cellular downlink resource sharing. In particular, a distributed self selection procedure is combined with the technique of compressed sensing to identify a set of users who are getting simultaneous access to the downlink broadcast channel. The performance of the proposed method is analyzed, and its suitability as an alternate access mechanism is argued.


That's all for today.

Saturday, July 18, 2009

CS: CFP for Advances in Multirate Filter Bank Structures and Multiscale Representations


Laurent Duval has a call for papers out for a special issue on Advances in Multirate Filter Bank Structures and Multiscale Representations.


Here is the announcement:

A century after the first outbreak of wavelets in Alfred Haar's thesis in 1909, and twenty years after the advent of Multiresolution Analysis, filter banks and wavelet transforms lie at the heart of many digital signal processing and communication systems. During the last thirty years, they have been the focus of tremendous theoretical advances and practical applications in a growing digital world. They are for instance present, as local linear expansions, at the core of many existing or forthcoming audio, image or video compression algorithms.

Beyond standards, many exciting developments have emerged in filter banks and wavelets from the confrontation between scientists from different fields (including signal and image processing, computer science, harmonic analysis, approximation theory, statistics, bioengineering, physics,...). At their confluence, multiscale representations of data, associated with their efficient processing in a multirate manner, have unveiled tools or refreshed methods impacting the whole data management process, from acquisition to interpretation, through communications, recovery and visualization. Multirate structures naturally shelter key concepts such as the duality between redundancy and sparsity, as well as means for extracting low dimensional structures from higher ones. In image processing in particular, various extensions of wavelets provide smart linear tools for building insightful geometrical representations of natural images.

The purpose of this special issue is to report on recent progress and emerging trends in the domain of multirate filter banks and multiscale representations of signals and images. Answers to the challenge of handling an increasing demand of information extraction and processing from large data sets will be explored.

**Topics (not exclusive)**
Sampling theory, compressive sensing
Sparse representations
Multiscale models
Multiscale processing: interpolation, inpainting, restoration
Wavelet shrinkage and denoising
Oversampled filter banks, discrete frames
Rational and non-uniform multirate systems
Directional, steerable filter banks and wavelets
Nonlinear filter banks
(Multidimensional) filter bank design and optimization
Hybrid analog/digital filter banks
Fast and low-power schemes (lifting, integer design)
Multiscale and multirate applications to source and channel coding, equalization, adaptive filtering, ...

**Important dates**
Deadline for submission: 15 December 2009.
First round of reviews/decisions: 1 April 2010.
Resubmission of revised papers: 1 July 2010.

Submission Guidelines for authors can be found at
http://www.elsevier.com/locate/sigpro
Authors should submit their manuscripts before the submission deadline via
http://ees.elsevier.com/sigpro/
selecting "Special Issue: Multirate/ Multiscale" as Article Type.

**Guest editors**
Prof. Thierry Blu, Chinese University of Hong Kong, Shatin, New Territories, Hong Kong.
Dr. Laurent Duval, IFP, France.
Prof. Truong Q. Nguyen, University of California, San Diego, USA.
Prof. Jean-Christophe Pesquet, Université Paris-Est, France.

I'll add the deadlines shortly to the Compressive Sensing Calendar.

Credit: NASA/GSFC/Arizona State University, Apollo 14 artifacts, 40 years later, as seen by the camera on board the LRO spacecraft. You can see the astronauts' footprints!

Friday, July 17, 2009

CS: CS for radio interferometry, Compressive Holography, SPIR-iT, CS in Multi-hop Sensor Networks, Split Bregman Iteration for CS





Today, there are several papers I found thanks to different blogs. This is great.

We consider the problem of reconstruction of astrophysical signals probed by radio interferometers with baselines bearing a non-negligible component in the pointing direction. The visibilities measured essentially identify with a noisy and incomplete Fourier coverage of the product of the planar signals with a linear chirp modulation. We analyze the related spread spectrum phenomenon and suggest its universality relative to the sparsity dictionary, in terms of the achievable quality of reconstruction through the Basis Pursuit problem. The present manuscript represents a summary of recent work.
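A minimal one-dimensional analogue of this spread spectrum acquisition, assuming a Dirac sparsity dictionary and using iterative soft-thresholding as a crude stand-in for Basis Pursuit, might look as follows (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 512, 128, 10          # signal length, measurements, sparsity

# sparse signal in the Dirac basis (illustrative choice of dictionary)
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# linear chirp modulation, then a random subset of Fourier coefficients
n = np.arange(N)
chirp = np.exp(1j * np.pi * n**2 / N)
rows = rng.choice(N, M, replace=False)

def A(v):                        # forward operator: mask . FFT . chirp
    return np.fft.fft(chirp * v, norm="ortho")[rows]

def At(y):                       # adjoint operator
    z = np.zeros(N, dtype=complex)
    z[rows] = y
    return np.conj(chirp) * np.fft.ifft(z, norm="ortho")

y = A(x)

# iterative soft-thresholding as a simple stand-in for Basis Pursuit
lam, u = 0.01, np.zeros(N, dtype=complex)
for _ in range(300):
    g = u + At(y - A(u))         # gradient step (operator norm <= 1)
    mag = np.abs(g)
    u = g * np.maximum(1 - lam / np.maximum(mag, 1e-12), 0)

print("relative error:", np.linalg.norm(u.real - x) / np.linalg.norm(x))
```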

Compressive Holography by David Brady, Kerkil Choi, Daniel L. Marks, Ryoichi Horisaki and Sehoon Lim. The abstract reads:

Compressive sampling enables signal reconstruction using less than one measurement per reconstructed signal value. Compressive measurement is particularly useful in generating multidimensional images from lower dimensional data. We demonstrate single frame 3D tomography from 2D holographic data.
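A rough two-plane sketch of the kind of forward model behind such 3D-from-2D reconstruction, assuming angular-spectrum propagation and a sparse volume recovered by iterative soft-thresholding (not the authors' setup or code, and with invented optical parameters):

```python
import numpy as np

rng = np.random.default_rng(5)
Nx, wavelength, dx = 128, 0.5e-6, 1e-6
depths = [50e-6, 120e-6]                     # two assumed object planes

fx = np.fft.fftfreq(Nx, dx)
FX, FY = np.meshgrid(fx, fx, indexing="ij")
arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
H = [np.exp(1j * kz * z) for z in depths]    # angular-spectrum transfer fns

def prop(field, Hz):                         # propagate one plane
    return np.fft.ifft2(Hz * np.fft.fft2(field))

def A(x):                                    # volume -> 2-D hologram field
    return sum(prop(x[k], H[k]) for k in range(len(depths)))

def At(g):                                   # adjoint: back-propagation
    return np.stack([prop(g, np.conj(H[k])) for k in range(len(depths))])

# sparse object: a few point scatterers per plane
x_true = np.zeros((len(depths), Nx, Nx), dtype=complex)
for k in range(len(depths)):
    idx = rng.integers(0, Nx, size=(5, 2))
    x_true[k, idx[:, 0], idx[:, 1]] = 1.0
g = A(x_true)

# ISTA with complex soft-thresholding; step bounded by the operator norm
x, step, lam = np.zeros_like(x_true), 1.0 / len(depths) ** 2, 0.02
for _ in range(200):
    r = x + step * At(g - A(x))
    mag = np.abs(r)
    x = r * np.maximum(1 - lam * step / np.maximum(mag, 1e-12), 0)
```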

David mentions this publication and another one on his blog (Multiscale lens design and compressive holography). Andrew Portnoy, one of David's students, has just released his Ph.D. thesis entitled: Coded Measurement For Imaging and Spectroscopy. The abstract reads:
This thesis describes three computational optical systems and their underlying coding strategies. These codes are useful in a variety of optical imaging and spectroscopic applications. Two multichannel cameras are described. They both use a lenslet array to generate multiple copies of a scene on the detector. Digital processing combines the measured data into a single image. The visible system uses focal plane coding, and the long wave infrared (LWIR) system uses shift coding. With proper calibration, the multichannel interpolation results recover contrast for targets at frequencies beyond the aliasing limit of the individual subimages. This thesis also describes an LWIR imaging system that simultaneously measures four wavelength channels, each with narrow bandwidth. In this system, lenses, aperture masks, and dispersive optics implement a spatially varying spectral code.
David also has another thought provoking blog entry entitled: The coming data storm

In a different direction, we have SPIR-iT: Iterative Self Consistent Parallel Imaging Reconstruction from Arbitrary k-Space by Michael Lustig and John Pauly. The abstract reads:
A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self consistency. The reconstruction problem is formulated as an optimization that yields the most consistent solution with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off resonance correction and regularization terms. Several iterative strategies to solve the posed reconstruction problem in both image and k-space domains are presented. These are based on projection onto convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and non-linear regularization are also demonstrated.
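As a purely schematic skeleton of the POCS strategy (not the authors' implementation), one can alternate between a calibration-consistency operator and re-insertion of the acquired k-space samples. The operator G below is only a placeholder so that the sketch runs; a real system would calibrate it from autocalibration data.

```python
import numpy as np

def pocs_recon(kspace_acq, mask, G, n_iter=50):
    """Schematic POCS loop: alternately enforce calibration consistency
    (apply an interpolation operator G that synthesizes every k-space
    sample from its neighbourhood) and data consistency (re-insert the
    samples that were actually acquired)."""
    x = kspace_acq.copy()
    for _ in range(n_iter):
        x = G(x)                          # calibration-consistency step
        x[mask] = kspace_acq[mask]        # data-consistency step
    return x

# Toy usage. G here is NOT a real calibrated kernel, just a 3x3 k-space
# average so that the skeleton executes end to end.
rng = np.random.default_rng(2)
full = np.fft.fft2(rng.standard_normal((64, 64)))
mask = rng.random((64, 64)) < 0.4         # 40% of k-space acquired
acq = np.where(mask, full, 0)

def G(x):
    return sum(np.roll(np.roll(x, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

recon = pocs_recon(acq, mask, G)
```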

We propose energy-efficient compressed sensing for wireless sensor networks using spatially-localized sparse projections. To keep the transmission cost for each measurement low, we obtain measurements from clusters of adjacent sensors. With localized projection, we show that joint reconstruction provides significantly better reconstruction than independent reconstruction. We also propose a metric of energy overlap between clusters and basis functions that allows us to characterize the gains of joint reconstruction for different basis functions. Compared with state-of-the-art compressed sensing techniques for sensor networks, our simulation results demonstrate significant gains in reconstruction accuracy and transmission cost.
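The contrast between dense and spatially-localized projections is easy to visualize with a block-diagonal measurement matrix, where each block only mixes the readings of one cluster of adjacent sensors. The sketch below (invented sizes, not the authors' construction) builds such a matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, cluster_size, meas_per_cluster = 120, 20, 6
n_clusters = n_sensors // cluster_size

# Dense CS: every measurement mixes all sensor readings (costly to gather).
Phi_dense = rng.standard_normal((n_clusters * meas_per_cluster, n_sensors))

# Spatially-localized CS: block-diagonal matrix, each block only touches
# the sensors of one cluster, so readings never leave the neighbourhood.
Phi_local = np.zeros_like(Phi_dense)
for c in range(n_clusters):
    rows = slice(c * meas_per_cluster, (c + 1) * meas_per_cluster)
    cols = slice(c * cluster_size, (c + 1) * cluster_size)
    Phi_local[rows, cols] = rng.standard_normal((meas_per_cluster,
                                                 cluster_size))

readings = rng.standard_normal(n_sensors)          # stand-in sensor field
y_local = Phi_local @ readings                     # gathered per cluster
print(Phi_local.shape, "non-zeros:", np.count_nonzero(Phi_local))
```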
Following up on Lianlin Li's blog on Compressive Sensing, there is:

Curvelet-Wavelet Regularized Split Bregman Iteration for Compressed Sensing by Gerlind Plonka and Jianwei Ma. The abstract reads:
Compressed sensing is a new concept in signal processing. Assuming that a signal can be represented or approximated by only a few suitably chosen terms in a frame expansion, compressed sensing allows to recover this signal from much fewer samples than the Shannon-Nyquist theory requires. Many images can be sparsely approximated in expansions of suitable frames such as wavelets, curvelets, wave atoms and others. But such frames can sparsely represent special structures in images differently well. For a suitable recovery of images, we propose new models that contain weighted sparsity constraints in two different frames. Given the incomplete measurements f = Φu + ε with the measurement matrix Φ ∈ R^{K×N}, K < N, we consider a constrained optimization problem of the form argmin_{u,ϑ_c,ϑ_w} { ‖Λ_c ϑ_c‖_1 + ‖Λ_w ϑ_w‖_1 + (1/2)‖f − Φu‖_2^2 } such that ϑ_c = Ψ_c u, ϑ_w = Ψ_w u ("denoising model"). Here Ψ_c and Ψ_w are the transform matrices corresponding to the two frames, and the diagonal matrices Λ_c, Λ_w contain the weights for the frame coefficients. Alternatively, if K ≪ N and the noise ε is negligibly small, we consider the "reconstruction model" argmin_{u,ϑ_c,ϑ_w} { ‖Λ_c ϑ_c‖_1 + ‖Λ_w ϑ_w‖_1 } such that there exists a vector u ∈ R^N with f = Φu, ϑ_c = Ψ_c u, and ϑ_w = Ψ_w u. We present efficient iteration methods to solve these optimization problems, based on Alternating Split Bregman algorithms. The convergence of these iteration schemes will be proved by showing that they can be understood as special cases of the Douglas-Rachford splitting algorithm. Numerical experiments for compressed sensing based Fourier-domain random imaging show good performance of the proposed curvelet-wavelet regularized split Bregman (CWSpB) methods, where we particularly use a combination of wavelet and curvelet coefficients as sparsity constraints.
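For readers who want to experiment, here is a minimal numpy sketch of the alternating split Bregman structure for the "denoising model" above, with an orthonormal DCT and the identity standing in for the curvelet and wavelet frames (so the u-update reduces to a single linear solve). This is not the authors' code, and the parameters are arbitrary.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(4)
N, M = 256, 120

# Stand-in frames: Psi_w = identity (Dirac), Psi_c = orthonormal DCT-II.
Pw, PwT = (lambda u: u), (lambda d: d)
Pc = lambda u: dct(u, norm="ortho")
PcT = lambda d: idct(d, norm="ortho")

# Dirac-sparse test signal, Gaussian measurement matrix, noisy data f.
u_true = np.zeros(N)
u_true[rng.choice(N, 8, replace=False)] = rng.standard_normal(8)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
f = Phi @ u_true + 0.01 * rng.standard_normal(M)

def shrink(v, t):                        # soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam_c, lam_w, mu = 0.002, 0.02, 1.0
dc, dw, bc, bw = (np.zeros(N) for _ in range(4))
# With orthonormal transforms the u-update is one fixed linear system.
lhs = Phi.T @ Phi + 2.0 * mu * np.eye(N)
for _ in range(100):
    rhs = Phi.T @ f + mu * (PcT(dc - bc) + PwT(dw - bw))
    u = np.linalg.solve(lhs, rhs)        # u-update
    dc = shrink(Pc(u) + bc, lam_c / mu)  # d-updates (shrinkage)
    dw = shrink(Pw(u) + bw, lam_w / mu)
    bc = bc + Pc(u) - dc                 # Bregman variable updates
    bw = bw + Pw(u) - dw

print("relative error:", np.linalg.norm(u - u_true) / np.linalg.norm(u_true))
```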

Finally, Marco Duarte just got an IPAM Postdoc. Good luck Marco !


Credit: NASA via the Bad Astronomy blog. The tapes of the Apollo 11 landing that occurred 40 years ago.