Tuesday, August 31, 2010

CS: The long post of the week (Q&A, Blog Entries, Book, Preprints)



On LinkedIn we have the following questions:


Maybe Tanish's question can be answered in the Theory CS Q&A.

On MetaOptimize Q&A, there is a long question on Compressed Sensing of Communities in a Network

Some blog entries got my attention recently:

Djalil

Alex

Andrew
Bob

In the last entry, one can read:

Martin Vetterli gave the most humorous talk, and at the same time trumpeted a very important message: Reproducible Research. If we as scientists put in a little extra effort to make the results and algorithms described in our publications clearly available, either by sufficient description, or by downloadable software, and if we make meaningful and sufficient comparisons with other algorithms where appropriate, then we will simultaneously improve the quality of our work, and increase our achievements. As an added bonus, the evidence suggests that research made reproducible is cited more, and more quickly than research that is not; and that once you have started making your research reproducible, the time it takes you the next time decreases. The idea isn't all roses though, as the vast majority of reproducible research is in the form of MATLAB code. I am lucky to have an institution license, but many others are not. There is also the fact that one day the various websites with code that we point to in our papers will not exist any longer.
Highlights are mine. With this state of affairs, and issues like the ones mentioned in Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? by Xiaochuan Pan, Emil Sidky and Michael Vannier, or the inability to run the wonderful numerical tours of Gabriel Peyre if you don't have a Matlab license or cannot install Scilab (on the iPad, for instance), I am wondering about, and probably starting to implement, a solution that might answer some of these issues... Stay tuned....

In a different direction, a book entitled GPU Computing Gems, edited by Wen-mei W. Hwu, is coming out in December and will feature a chapter on compressed sensing reconstruction:
Chapter 45: ℓ1 Minimization in ℓ1-SPIRiT Compressed Sensing MRI Reconstruction

I have added it to the Nuit Blanche - Amazon Store in case you want to order it.

Here is a list of papers that showed up on my radar screen for your enjoyment:

Hard thresholding pursuit: an algorithm for Compressive Sensing by Simon Foucart. The abstract reads:
We introduce a new iterative algorithm to find sparse solutions of underdetermined linear systems. The algorithm, a simple combination of the Iterative Hard Thresholding algorithm and of the Compressive Sampling Matching Pursuit or Subspace Pursuit algorithms, is called Hard Thresholding Pursuit. We study its general convergence, and notice in particular that only a finite number of iterations are required. We then show that, under a certain condition on the restricted isometry constant of the matrix of the system, the Hard Thresholding Pursuit algorithm indeed finds all s-sparse solutions. This condition, which reads $\delta_{3s} < 1/\sqrt{3}$, is heuristically better than the sufficient conditions currently available for other Compressive Sensing algorithms. It applies to fast versions of the algorithm, too, including the Iterative Hard Thresholding algorithm. Stability with respect to sparsity defect and robustness with respect to measurement error are also guaranteed under the condition $\delta_{3s} < 1/\sqrt{3}$. We conclude with some numerical experiments to demonstrate the good empirical performance and the low complexity of the Hard Thresholding Pursuit algorithm.
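The algorithm itself fits in a few lines. Here is a NumPy sketch of my own of the HTP iteration as described in the abstract (a gradient step selects a candidate support, then a least-squares debias is computed on it); the toy dimensions are arbitrary and this is not the author's code:

```python
import numpy as np

def htp(A, y, s, max_iter=100):
    """Hard Thresholding Pursuit sketch: an IHT-style gradient step selects
    the s largest entries as a candidate support, then a least-squares solve
    restricted to that support gives the next iterate. The loop stops as
    soon as the support repeats (HTP terminates in finitely many steps)."""
    m, N = A.shape
    x = np.zeros(N)
    support = np.zeros(0, dtype=int)
    for _ in range(max_iter):
        u = x + A.T @ (y - A @ x)                          # gradient step
        new_support = np.sort(np.argsort(np.abs(u))[-s:])  # keep s largest
        if np.array_equal(new_support, support):
            break                                          # support fixed point
        support = new_support
        x = np.zeros(N)
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x

# Toy demo: recover a 5-sparse vector from 50 Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x0 = np.zeros(200)
x0[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
print(np.linalg.norm(htp(A, A @ x0, s=5) - x0))  # ~0 on this easy instance
```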


Explicit constructions of RIP matrices and related problems by Jean Bourgain, Stephen J. Dilworth, Kevin Ford, Sergei Konyagin and Denka Kutzarova. The abstract reads:
We give a new explicit construction of $n \times N$ matrices satisfying the Restricted Isometry Property (RIP). Namely, for some $c > 0$, large $N$ and any $n$ satisfying $N^{1-c} < n < N$, we construct RIP matrices of order $k \gtrsim n^{1/2+c}$. This overcomes the natural barrier $k = O(n^{1/2})$ for proofs based on small coherence, which are used in all previous explicit constructions of RIP matrices. Key ingredients in our proof are new estimates for sumsets in product sets and for exponential sums with the products of sets possessing special additive structure. We also give a construction of sets of $n$ complex numbers whose $k$-th moments are uniformly small for $1 \le k \le N$ (Turan's power sum problem), which improves upon known explicit constructions when $(\log N)^{1+o(1)} \le n \le (\log N)^{4+o(1)}$. This latter construction produces elementary explicit examples of $n$ by $N$ matrices that satisfy RIP and whose columns constitute a new spherical code; for those problems the parameters closely match those of existing constructions in the range $(\log N)^{1+o(1)} \le n \le (\log N)^{5/2+o(1)}$.

Spectrum sensing, which aims at detecting spectrum holes, is the precondition for the implementation of cognitive radio (CR). Collaborative spectrum sensing among the cognitive radio nodes is expected to improve the ability of checking complete spectrum usage. Due to hardware limitations, each cognitive radio node can only sense a relatively narrow band of radio spectrum. Consequently, the available channel sensing information is far from being sufficient for precisely recognizing the wide range of unoccupied channels. Aiming at breaking this bottleneck, we propose to apply matrix completion and joint sparsity recovery to reduce sensing and transmitting requirements and improve sensing results. Specifically, equipped with a frequency selective filter, each cognitive radio node senses linear combinations of multiple channel information and reports them to the fusion center, where occupied channels are then decoded from the reports by using novel matrix completion and joint sparsity recovery algorithms. As a result, the number of reports sent from the CRs to the fusion center is significantly reduced. We propose two decoding approaches, one based on matrix completion and the other based on joint sparsity recovery, both of which allow exact recovery from incomplete reports. The numerical results validate the effectiveness and robustness of our approaches. In particular, in small-scale networks, the matrix completion approach achieves exact channel detection with a number of samples no more than 50% of the number of channels in the network, while joint sparsity recovery achieves similar performance in large-scale networks.
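For readers new to matrix completion decoders, here is a generic singular value thresholding (SVT) sketch in the spirit of Cai, Candès and Shen. It is an illustration of how a partially observed report matrix could be filled in, explicitly not the novel algorithm proposed in this paper, and all parameters are arbitrary:

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.2, n_iter=300):
    """Singular value thresholding for matrix completion: alternate a
    soft thresholding of the singular values (promoting low rank) with a
    gradient step re-imposing the observed entries. `mask` is a 0/1 array
    marking which entries of M_obs were actually reported."""
    Y = np.zeros_like(M_obs, dtype=float)
    X = Y
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink the spectrum
        Y = Y + step * mask * (M_obs - X)         # correct observed entries
    return X
```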


Wireless Tomography, Part III: Compressed Sensing for Ultra-wideband Signals by Peng Zhang, Robert C. Qiu. The abstract reads:
This is Part III of the wireless tomography three-paper series. Wireless tomography is related to wireless communications in that it requires channel recovery between different waveforms at transmit and receive, as well as a multiple-input multiple-output (MIMO) communication system. According to the pulse propagation mechanisms of reflection and diffraction, ultra-wideband (UWB) waveforms suffer pulse distortion. Distorted pulses will overlap and therefore increase the sampling rate needed for accurate UWB channel recovery. Thanks to recent progress in sampling theory and radio propagation theory, we are able to propose a compressed sensing (CS) based UWB channel recovery method that accounts for pulse distortion. The concept has been demonstrated through simulations. The sampling rate is as low as 2 Gsps, compared with the Nyquist rate of 50 Gsps. A CS based 2 × 2 MIMO communication system is also proposed and simulated. The communication problem is modeled as a CS problem, further reducing the sampling rate required at the receiver.

Acceleration of Randomized Kaczmarz Method via the Johnson-Lindenstrauss Lemma by Yonina Eldar, and Deanna Needell. The abstract reads: 
The Kaczmarz method is an algorithm for finding the solution to an overdetermined system of linear equations Ax = b by iteratively projecting onto the solution spaces. The randomized version put forth by Strohmer and Vershynin yields provably exponential convergence in expectation, which for highly overdetermined systems even outperforms the conjugate gradient method. In this article we present a modified version of the randomized Kaczmarz method which at each iteration selects the optimal projection from a randomly chosen set, which in most cases significantly improves the convergence rate. We utilize a Johnson-Lindenstrauss dimension reduction technique to keep the runtime on the same order as the original randomized version, adding only extra preprocessing time. We present a series of empirical studies which demonstrate the remarkable acceleration in convergence to the solution using this modified approach.
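The method is compact enough to sketch. The NumPy illustration below (my own, not the authors' code) implements the Strohmer-Vershynin row sampling plus the per-step "best projection from a randomly chosen set" selection described in the abstract; the Johnson-Lindenstrauss preprocessing that keeps the runtime down is omitted, and all parameters are arbitrary:

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=5000, set_size=1, seed=0):
    """Randomized Kaczmarz sketch. Each step samples `set_size` rows with
    probability proportional to their squared norms (Strohmer-Vershynin)
    and projects onto the hyperplane of the sampled row farthest from the
    current iterate; set_size=1 is the plain randomized method."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.einsum('ij,ij->i', A, A)
    p = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(n_iter):
        rows = rng.choice(m, size=set_size, p=p)
        resid = A[rows] @ x - b[rows]
        i = rows[np.argmax(np.abs(resid) / np.sqrt(row_norms2[rows]))]
        x -= (A[i] @ x - b[i]) / row_norms2[i] * A[i]  # project onto a_i x = b_i
    return x
```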
The requirement for real-time processing indeed poses challenges for implementing spectrum sensing algorithms. The trade-off between the complexity and the effectiveness of spectrum sensing algorithms should be taken into consideration. In this paper, a fast Fourier transform based spectrum sensing algorithm, whose decision variable is independent of the noise level, is introduced. A small form factor software defined radio development platform is employed to implement a spectrum sensing receiver with the proposed algorithm. To the best of our knowledge, this is the first time that real-time spectrum sensing on a hardware platform with controllable primary user devices has been demonstrated.
Airborne Radar STAP using Sparse Recovery of Clutter Spectrum by Ke Sun, Hao Zhang, Gang Li, Huadong Meng, Xiqin Wang. The abstract reads:
Space-time adaptive processing (STAP) is an effective tool for detecting a moving target in spaceborne or airborne radar systems. Statistical-based STAP methods generally need sufficient statistically independent and identically distributed (IID) training data to estimate the clutter characteristics. However, most actual clutter scenarios appear only locally stationary and lack sufficient IID training data. In this paper, by exploiting the intrinsic sparsity of the clutter distribution in the angle-Doppler domain, a new STAP algorithm called SR-STAP is proposed, which uses the technique of sparse recovery to estimate the clutter space-time spectrum. Joint sparse recovery with several training samples is also used to improve the estimation performance. Finally, an effective clutter covariance matrix (CCM) estimate and the corresponding STAP filter are designed based on the estimated clutter spectrum. Both the Mountaintop data and simulated experiments have illustrated the fast convergence rate of this approach. Moreover, SR-STAP is less dependent on prior knowledge, so it is more robust to the mismatch in the prior knowledge than knowledge-based STAP methods. Due to these advantages, SR-STAP has great potential for application in actual clutter scenarios.

Sunday, August 29, 2010

SPARS'11, Latent Variable Analysis / Signal Separation, Sparsity and Learning

 Jared Tanner asked me to post this, so here it is:
The Edinburgh Compressed Sensing Group will host the next edition of SPARS, SPARS'11, which will be the fourth edition of the meeting dedicated to all things sparse.  The formal title for the series is
Workshop: Signal Processing with Adaptive Sparse Structured Representations

The event will be held 27-30 June 2011 in Edinburgh, Scotland. A preliminary website for the event is:

In addition to the above eight plenary speakers, we welcome all working on sparsity-related topics to join us and present their work. We will have slots available for about 70 speakers and a poster session for those who cannot be accommodated in the talk schedule. Speakers will be selected based upon submitted one-page extended abstracts; more details on this will appear on the webpage in the near future. The event is subsidised by the EPSRC, LMS, ICMS, and ECoS. Registration fees will be no more than 200 GBP for faculty and 100 GBP for students, and will include a conference dinner and reception. Numerous early career researchers, primarily PhD students, will have the registration fee waived and local accommodation provided.

We hope to welcome all in the community to Edinburgh next summer.

On behalf of the organizers:
Coralia Cartis
Mike Davies
Jared Tanner
Thanks Jared! I am sure Jared will also provide the dates for abstract submission soon.

And let's not forget the two other meetings I mentioned yesterday, namely:

If y'all have a website and want to advertise your meeting on Nuit Blanche, as long as that meeting pertains to an area of compressed sensing, let me know. I don't mind if you point back to Nuit Blanche in the "links" section of your website.

Saturday, August 28, 2010

CS: Two Meetings, Generalized sampling using a compound-eye imaging system, Compressive Geospatial Sensing, Polya Prize

Remi Gribonval sent me some information about some meetings of potential interest to readers of this blog:

The upcoming conference LVA on Latent Variable Analysis / Signal Separation, to be held in Saint-Malo, Brittany, France, will feature (among others):

More details can be found at http://lva2010.inria.fr/

A one-day workshop on Sparsity and Learning (Apprentissage et Parcimonie), co-organized by Francis Bach and myself within the French network GDR ISIS, will feature invited lectures by: Stéphane Mallat, Guillaume Obozinski, Matthieu Kowalski, Jean-Christophe Pesquet, Erwan Lepennec, Cedric Fevotte, Alexandre Tsybakov, and Rémi Munos. More details can be found at http://gdr-isis.org/rilk/gdr/ReunionListe-540
Thanks Remi.

In this paper, we propose generalized sampling approaches for measuring a multi-dimensional object using a compact compound-eye imaging system called thin observation module by bound optics (TOMBO). This paper shows the proposed system model, physical examples, and simulations to verify TOMBO imaging using generalized sampling. In the system, an object is sheared and multiplied by a weight distribution with physical coding, and the coded optical signal is integrated onto a detector array. A numerical estimation algorithm employing a sparsity constraint is used for object reconstruction.

The SIAM George Pólya Prize was awarded to Emmanuel Candès and Terence Tao. From the press release:

....The award recognizes their role in developing the theory of compressed sensing and matrix completion, which enables efficient reconstruction of sparse, high-dimensional data based on very few measurements. According to the selection committee, the algorithms and analysis are not only beautiful mathematics, worthy of study for their own sake, but they also lead to remarkable solutions of practical engineering problems...

Wednesday, August 25, 2010

CS: Sudoku using POCS and Sparsity, Theoretical CS Q&A, "they had no idea", some meetings.

Gabriel Peyré just let me know that he and Yue Lu implemented a new numerical tour featuring Sudoku using POCS and Sparsity. From the website:

This numerical tour explores the use of numerical schemes to solve the Sudoku game.

This tour was written by Yue M. Lu and Gabriel Peyré.

I'll feature this example in the Compressed Sensing games page. One of the things that came out of the discussion with Gabriel was that there is a set of problems that cannot seem to be solved by using the L1/sparsity or the POCS solution. Unbeknownst to me, there are difficult Sudoku problems, and it looks like sparsity alone is not a good enough constraint to provide a solution for some of them, such as AI Escargot. I learn something every day. If you recall, Gabriel has released two other examples of compressive sensing in his numerical tours.
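For the curious, the sparsity view of Sudoku is easy to play with even without a Matlab license. Here is a minimal Python/SciPy sketch of my own (not the code of the numerical tour): the grid is encoded as a 729-dimensional 0/1 indicator vector, the Sudoku rules become linear equality constraints, and one minimizes the ℓ1 norm (just the sum, since the variables are nonnegative) over that polytope. On easy puzzles the linear program returns the binary solution; on hard ones like AI Escargot it can return a fractional point, which is exactly the failure mode discussed above.

```python
import numpy as np
from scipy.optimize import linprog

def sudoku_l1(clues):
    """L1/LP relaxation of Sudoku. The grid is a 0/1 vector x in R^729
    with x[81r + 9c + v] = 1 iff cell (r, c) holds digit v + 1; Sudoku
    rules and clues become equality constraints A x = 1. Since x >= 0,
    minimizing sum(x) is minimizing the l1 norm over the polytope.
    `clues` is a 9x9 array with 0 marking empty cells."""
    def idx(r, c, v):
        return 81 * r + 9 * c + v

    constraints = []
    def add(cells):
        e = np.zeros(729)
        e[cells] = 1
        constraints.append(e)

    for r in range(9):
        for c in range(9):
            add([idx(r, c, v) for v in range(9)])            # one digit per cell
    for v in range(9):
        for r in range(9):
            add([idx(r, c, v) for c in range(9)])            # digit once per row
        for c in range(9):
            add([idx(r, c, v) for r in range(9)])            # digit once per column
        for br in range(3):
            for bc in range(3):
                add([idx(r, c, v)
                     for r in range(3 * br, 3 * br + 3)
                     for c in range(3 * bc, 3 * bc + 3)])    # digit once per box
    for r in range(9):
        for c in range(9):
            if clues[r][c]:
                add([idx(r, c, clues[r][c] - 1)])            # pin the given clues

    A = np.array(constraints)
    res = linprog(np.ones(729), A_eq=A, b_eq=np.ones(len(A)), bounds=(0, 1))
    if not res.success:
        return None
    grid = res.x.reshape(9, 9, 9)
    return grid.argmax(axis=2) + 1  # fractional solutions signal a hard puzzle
```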

Suresh let me know on Twitter that there is some interesting activity on the Theoretical Computer Science Q&A website on Compressed Sensing, and sure enough, one of the best answers so far is that of Piotr Indyk. From now on, my daily search for stuff on compressed sensing will include this Q&A.

You probably recall this story of the Kodak engineers who acknowledged they had no idea about the usefulness of the first digital camera. Well, Dick Gordon sent me the following as a comment:

Sunday, August 22, 2010 1:39 AM from Winnipeg
Re:  the Kodak engineers produced the first digital camera...
Maybe a line from the technical report written at the time sums it up best: “The camera described in this report represents a first attempt demonstrating a photographic system which may, with improvements in technology, substantially impact the way pictures will be taken in the future.” ... “But in reality, we had no idea.”

Nonsense. I had written them a letter (in the days before e-mail!) suggesting that they go one step further, mosaicing focal planes to select in-focus pixels and produce the first camera that could not take an out of focus picture. Got no reply. The other obvious problem was lack of self-confidence at Kodak: they made a digital back for a Nikon camera. Nikon is still around, and Kodak is just about dead. Moribund leadership for a once great company, I suspect.

I know a fellow at University of Manitoba who still has this Kodak unit.
Yours, -Dick Gordon gordonr@cc.umanitoba.ca 

In a different direction, somebody asked me to provide a hyperlink linking his/her name to his/her webpage. Please, if your name appears with no hyperlink or with the wrong one, let me know and I'll try to correct it asap. To give you an idea as to why it is important to be linked from Nuit Blanche, I usually receive some spam requests asking to include links on recent entries for free. I also get some amount of spam from people trying to use the comment section to link back to their "content". Recently however, something new happened: I received an email from a gentleman who proposed to pay a hundred dollars to plant a link in an old entry of this blog... this is interesting on so many levels. With no further ado, and using you, the readers, as filters, here is a set of meetings that are probably of interest to you: Pierre Vandergheynst let me know of a meeting in Marseille, mostly in French, on signal processing in biology. It is here. Mark Plumbley also let me know of the INSPIRE 2010 Conference in London.
INSPIRE 2010 Conference on information representation and estimation
Date, Time & Location

University College London

Monday 06/09/2010 - Wednesday 08/09/2010

09:00-18:00

Description

Mathematical methods for signal processing and in general for data representation and inference are growing more and more sophisticated. Successful applications of such methods range from medical imaging to security. Developments in mathematics and signal processing are however often divergent. Therefore, the main aim of this conference is to bring together signal processing researchers with statisticians and mathematicians working on problems related to data modelling and estimation, to encourage the exchange of ideas as well as to discuss theoretical underpinning of the methods and domains of application.

The INSPIRE 2010 conference will be held at the Anatomy JZ Young LT at University College London from September 6 till September 8, 2010.
Please visit the conference website to register: http://www.network-inspire.org/events/view/3

The Programme is available at the following link: http://www.commsp.ee.ic.ac.uk/~pld/programmeINSPIRE2010.pdf

Contact Professor Sofia Olhede

Tuesday, August 24, 2010

CS: Sparse Brain Network Recovery under Compressed Sensing, Compressive Radar Imaging Using White Stochastic Waveforms, Compressive Sensing in the Limit

Wow! Just wow! We have two very interesting, application-oriented papers today:

Partial correlation is a useful connectivity measure for brain networks, especially when it is needed to remove the confounding effects in highly correlated networks. Since it is difficult to estimate the exact partial correlation under the small-n large-p situation, a sparseness constraint is generally introduced. In this paper, we consider the sparse linear regression model with a l1-norm penalty, a.k.a. least absolute shrinkage and selection operator (LASSO), for estimating sparse brain connectivity. LASSO is a well-known decoding algorithm in compressed sensing (CS). The CS theory states that LASSO can reconstruct the exact sparse signal even from a small set of noisy measurements. We briefly show that the penalized linear regression for partial correlation estimation is related to CS. It opens a new possibility that the proposed framework can be used for sparse brain network recovery. As an illustration, we construct sparse brain networks of 97 regions of interest (ROIs) obtained from FDG-PET data for the autism spectrum disorder (ASD) children and the pediatric control (PedCon) subjects. As a model validation, we check their reproducibilities by leave-one-out cross validation and compare the clustered structures derived from the brain networks of ASD and PedCon.
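The computational core of this approach is just one penalized regression per region. Below is a neighbourhood-selection sketch of that idea in scikit-learn; it is my own simplification under assumed data shapes, not the authors' pipeline, and the cross-validation choice is arbitrary:

```python
import numpy as np
from sklearn.linear_model import LassoCV

def sparse_connectivity(X):
    """Regress each region's signal on all other regions with a LASSO
    penalty; nonzero coefficients define that region's neighbours.
    X has shape (n_samples, p_regions), with small n and large p."""
    n, p = X.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        coef = LassoCV(cv=5).fit(X[:, others], X[:, j]).coef_
        adj[j, others] = coef != 0
    return adj & adj.T  # keep an edge only if both regressions select it
```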

I don't think I have seen this type of quality work in autism-related studies. Kudos to this team. On a related note, if you are interested in going for a PhD that deals with how compressed sensing translates into medical applications, you may want to check the PhD program at King's College in London. Check the title proposed by Dr. Batchelor. More information can be found here.

In a different direction, we get to use the Donoho-Tanner phase transition to calibrate a hardware system. I like very much this idea:



In this paper, we apply the principles of compressive sampling to ultra-wideband (UWB) stochastic waveform radar. The theory of compressive sampling says that it is possible to recover a signal that is parsimonious when represented in a particular basis, by acquiring few projections onto an appropriate basis set. Drawing on literature in compressive sampling, we develop the theory behind stochastic waveform-based compressive imaging. We show that using stochastic waveforms for radar imaging, it is possible to estimate target parameters and detect targets by sampling at a rate that is considerably slower than the Nyquist rate and recovering using compressive sensing algorithms. Thus, it is theoretically possible to increase the bandwidth (and hence the spatial resolution) of an ultra-wideband radar system using stochastic waveforms, without significant additions to the data acquisition system. Further, there is virtually no degradation in the performance of a UWB stochastic waveform radar system that employs compressive sampling. We present numerical simulations to show that the performance guarantees provided by theoretical results are achieved in realistic scenarios.

This paper studies the asymptotic behaviour of Node-Based Verification-Based (NBVB) algorithms over random regular bipartite graphs in the context of compressive sensing.

Monday, August 23, 2010

What compressed sensing is not

In this outstanding description of Yves Meyer's work, one can find this annoying little nugget:

Recently, he has taken an interest in "compressed sensing", a new signal processing technique that has produced spectacular results in image denoising.

I mean, which part of "compressed" or "sensing" applies to image denoising? Seriously, people, haven't we passed that stage yet? I realize that compressed sensing pushed for better and faster reconstruction solvers, but saying this is the only thing it has brought to the world... isn't that selling it a bit short? For our mathematician friends, here is a list of the different hardware that has been built with the intent of using the intrinsic sparsity found in Nature.
For more on why I believe Yves Meyer's work is important, check Saturday's entry.

Sunday, August 22, 2010

CS: The real long post of the week.

Two blogs mentioned compressed sensing this past week:
I was reading the following piece on how Kodak engineers produced the first digital camera. I liked the ending of that article, as it parallels some thoughts I sometimes have on compressed sensing (and I am sure I am not the only one):
Many developments have happened between this early work and today. Personal computers, the Internet, wide bandwidth connections and personal desktop photographic printing are just a few of these. It is funny now to look back on this project and realize that we were not really thinking of this as the world’s first digital camera. We were looking at it as a distant possibility. Maybe a line from the technical report written at the time sums it up best:

“The camera described in this report represents a first attempt demonstrating a photographic system which may, with improvements in technology, substantially impact the way pictures will be taken in the future.”

But in reality, we had no idea.
Of interest, here is a way to Build Your Own 3D Display by Matthew Hirsch, Douglas Lanman. There was a PR piece in EE Times entitled: Technion: Compressed sensing ups data acquisition resolution, cuts cost and a paper in the SPIE entitled Compressive light-field imaging by Amit Ashok, Mark A. Neifeld with the following short abstract:
Compressive light-field imagers employ fewer photon-efficient measurements, enabling higher-resolution reconstruction than is possible using their traditional counterparts.
So without much wait, here is the rest of a long list of papers of interest this week:

I just found a thesis in (mostly) Portuguese on Compressive Sensing entitled: Compressive Sensing, Novos Paradigmas para Aquisição e Compressão de Imagens by Adriana Schulz.

I could not grab the text version of this paper, but here it is: The Benefit of Group Sparsity

Accuracy guarantees for ℓ1-recovery by Anatoli Juditsky, Fatma Kilinc Karzan, Arkadi Nemirovski. The abstract reads:
We propose new methods of recovery of sparse signals from noisy observation based on L1-minimization. They are closely related to the well-known techniques such as Lasso and Dantzig Selector. However, these estimators come with efficiently verifiable guarantees of performance. By optimizing these bounds with respect to the method parameters we are able to construct estimators which possess better statistical properties than the commonly used ones. We also provide an oracle inequality to justify the proposed algorithms and show how the estimates can be computed using the Basis Pursuit algorithm.

Generalized Assorted Pixel Camera: Postcapture Control of Resolution, Dynamic Range, and Spectrum by Fumihito Yasuma, Tomoo Mitsunaga, Daisuke Iso, and Shree K. Nayar. The abstract reads:
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.

MULTICHANNEL-COMPRESSIVE ESTIMATION OF DOUBLY SELECTIVE CHANNELS IN MIMO-OFDM SYSTEMS: EXPLOITING AND ENHANCING JOINT SPARSITY by Daniel Eiwen, Georg Taubock, Franz Hlawatsch, Holger Rauhut, and Nicolai Czink. The abstract reads:
We propose a compressive estimator of doubly selective channels within pulse-shaping multicarrier MIMO systems (including MIMO-OFDM as a special case). The use of multichannel compressed sensing exploits the joint sparsity of the MIMO channel for improved performance. We also propose a multichannel basis optimization for enhancing joint sparsity. Simulation results demonstrate significant advantages over channel-by-channel compressive estimation.
Compressed Meter Reading for Delay-sensitive and Secure Load Report in Smart Grid by Husheng Li, Rukun Mao, Lifeng Lai and Robert C. Qiu. The abstract reads:
It is a key task in smart grid to send the readings of smart meters to an access point (AP) in a wireless manner. The requirements of scalability, realtimeness and security make the wireless meter reading highly challenging. On assuming that the number of smart meters is large and the data burst is sparse, i.e., only a small fraction of the smart meters are reporting their power loads at the same time, the technique of compressed sensing is applied for the wireless meter reading. The distinguishing feature of the compressed meter reading is that the active smart meters are allowed to transmit simultaneously and the AP is able to distinguish the reports from different smart meters. The data sparsity solves the problem of scalability. The simultaneous access results in uniform delays, in contrast to the possible large delay in the carrier sensing multiple access (CSMA) technique. The random sequence used in the compressed sensing enhances the privacy and integrity of the meter reading. The validity of the proposed scheme is then demonstrated by numerical simulations.
An abstract of interest: Sparse High Dimensional Models in Economics by Jianqing Fan, Jinchi Lv, Lei Qi. The abstract reads:
This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed.

This paper studies the problem of exact localization of multiple objects with noisy data. The crux of the proposed approach consists of random illumination. Two recovery methods are analyzed: the Lasso and the One-Step Thresholding (OST). For independent random probes, it is shown that both recovery methods can localize exactly $s = O(m)$, up to a logarithmic factor, objects where $m$ is the number of data. Moreover, when the number of random probes is large the Lasso with random illumination has a performance guarantee for superresolution, beating the Rayleigh resolution limit. Numerical evidence confirms the predictions and indicates that the performance of the Lasso is superior to that of the OST for the proposed set-up with random illumination.
Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications, where perturbations appear both in the data vector as well as in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data, but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least-absolute selection and shrinkage selection operator (Lasso), and endow TLS approaches with ability to cope with sparse, under-determined "errors-in-variables" models. Interesting generalizations can further exploit prior knowledge on the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing, and robust direction-of-arrival estimation using antenna arrays.

The two-way relay network (TWRN) was introduced to realize high-data-rate transmission over the wireless frequency-selective channel. However, TWRN requires knowledge of channel state information (CSI) not only for coherent data detection but also for self-data removal. This is partially accomplished by training-sequence-based linear channel estimation. However, conventional linear estimation techniques neglect the anticipated sparsity of the multipath channel. Unlike the previous methods, we propose a compressive channel estimation method which exploits the sparse structure and provides significant improvements in MSE performance when compared with traditional LS-based linear channel probing strategies. Simulation results confirm the proposed methods.
We consider the problem of learning a coefficient vector $x_0 \in \mathbb{R}^N$ from noisy linear observations $y = A x_0 + w \in \mathbb{R}^n$. In many contexts (ranging from model selection to image processing) it is desirable to construct a sparse estimator $\hat{x}$. In this case, a popular approach consists in solving an $\ell_1$-penalized least squares problem known as the LASSO or Basis Pursuit DeNoising (BPDN). For sequences of matrices $A$ of increasing dimensions, with independent Gaussian entries, we prove that the normalized risk of the LASSO converges to a limit, and we obtain an explicit expression for this limit. Our result is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the LASSO for random instances. The proof technique is based on the analysis of AMP, a recently developed efficient algorithm, that is inspired by graphical models ideas.
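Since the result hinges on AMP, a minimal sketch of the AMP iteration for the LASSO may help: soft thresholding plus the Onsager correction term. The threshold rule and scaling below are simplifying assumptions of mine (A is taken to have roughly unit-norm columns), not the paper's exact prescription:

```python
import numpy as np

def amp_lasso(A, y, alpha=1.5, n_iter=30):
    """Approximate message passing sketch for the LASSO: iterate a soft
    thresholding step and a residual update carrying the Onsager term.
    The threshold is set to alpha times the estimated residual level."""
    n, N = A.shape
    delta = n / N                                  # undersampling ratio
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(z) / np.sqrt(n)
        u = x + A.T @ z
        x = np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)   # soft threshold
        z = y - A @ x + (z / delta) * np.mean(x != 0)         # Onsager correction
    return x
```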

We consider the problem of simultaneous variable selection and constant coefficient identification in high-dimensional varying coefficient models based on B-spline basis expansion. Both objectives can be considered as some type of model selection problems and we show that they can be achieved by a double shrinkage strategy. We apply the adaptive group Lasso penalty in models involving a diverging number of covariates, which can be much larger than the sample size, but we assume the number of relevant variables is smaller than the sample size via model sparsity. Such so-called ultra-high dimensional settings are especially challenging in semiparametric models as we consider here and has not been dealt with before. Under suitable conditions, we show that consistency in terms of both variable selection and constant coefficient identification can be achieved, as well as the oracle property of the constant coefficients. Even in the case that the zero and constant coefficients are known a priori, our results appear to be new in that it reduces to semivarying coefficient models (a.k.a. partially linear varying coefficient models) with a diverging number of covariates. We also theoretically demonstrate the consistency of a semiparametric BIC-type criterion in this high-dimensional context, extending several previous results. The finite sample behavior of the estimator is evaluated by some Monte Carlo studies.
GROUP SPARSITY METHODS FOR COMPRESSIVE CHANNEL ESTIMATION IN DOUBLY DISPERSIVE MULTICARRIER SYSTEMS by Daniel Eiwen, Georg Taubock, Franz Hlawatsch and Hans Georg Feichtinger. The abstract reads:
We propose advanced compressive estimators of doubly dispersive channels within multicarrier communication systems (including classical OFDM systems). The performance of compressive channel estimation has been shown to be limited by leakage components impairing the channel's effective delay-Doppler sparsity. We demonstrate a group sparse structure of these leakage components and apply recently proposed recovery techniques for group sparse signals. We also present a basis optimization method for enhancing group sparsity. Statistical knowledge about the channel can be incorporated in the basis optimization if available. The proposed estimators outperform existing compressive estimators with respect to estimation accuracy and, in one instance, also computational complexity.

For consistency (even oracle properties) of estimation and model prediction, almost all existing methods of variable/feature selection critically depend on sparsity of models. However, for "large $p$ and small $n$" models the sparsity assumption is hard to check and, particularly when this assumption is violated, the consistency of all existing estimations is usually impossible, because working models selected by existing methods such as the LASSO and the Dantzig selector are usually biased. To attack this problem, we in this paper propose adaptive post-Dantzig estimation and model prediction. Here the adaptability means that the consistency based on the newly proposed method is adaptive to non-sparsity of the model, choice of shrinkage tuning parameter and dimension of the predictor vector. The idea is that after a sub-model as a working model is determined by the Dantzig selector, we construct a globally unbiased sub-model by choosing suitable instrumental variables and nonparametric adjustment. The new estimation of the parameters in the sub-model can attain asymptotic normality. The consistent estimator, together with the selected sub-model and adjusted model, improves model predictions. Simulation studies show that the new approach offers significant improvement of estimation and prediction accuracies over the Gaussian Dantzig selector and other classical methods.

In this paper, motivated by network inference and tomography applications, we study the problem of compressive sensing for sparse signal vectors over graphs. In particular, we are interested in recovering sparse vectors representing the properties of the edges of a graph. Unlike existing compressive sensing results, the collective additive measurements we are allowed to take must follow connected paths over the underlying graph. For a sufficiently connected graph with $n$ nodes, it is shown that, using $O(k \log(n))$ path measurements, we are able to recover any $k$-sparse link vector (with no more than $k$ nonzero elements), even though the measurements have to follow the graph path constraints. We further show that the computationally efficient $\ell_1$ minimization can provide theoretical guarantees for inferring such $k$-sparse vectors with $O(k \log(n))$ path measurements from the graph.
Optimized Projection Matrix for Compressive Sensing by Jianping Xu, Yiming Pi, and Zongjie Cao. The abstract reads:
Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between projection matrix and sparsifying matrix. Until now, papers on CS always assume the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design because an ETF has minimum coherence. It is impossible to solve the problem exactly because of the complexity. Therefore, an alternating minimization type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
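Mutual coherence is a one-line computation, and the flavor of projection-matrix optimization is easy to sketch. The code below computes the coherence of the effective dictionary and runs a generic Elad-style "shrink the Gram matrix" iteration; it illustrates the general idea, not the ETF-based method of the paper, and the threshold t is an arbitrary choice of mine:

```python
import numpy as np

def mutual_coherence(P, Psi):
    """Largest absolute inner product between distinct normalized columns
    of the effective dictionary D = P @ Psi."""
    D = P @ Psi
    D = D / np.linalg.norm(D, axis=0)
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

def optimize_projection(P, Psi, t=0.2, n_iter=100):
    """Shrink the off-diagonal entries of the Gram matrix, project back
    to a rank-m Gram matrix, and recover a projection matrix from the
    resulting dictionary. Repeating this tends to lower the coherence."""
    m = P.shape[0]
    for _ in range(n_iter):
        D = P @ Psi
        D = D / np.linalg.norm(D, axis=0)
        G = D.T @ D
        G = np.clip(G, -t, t)                      # damp large coherences
        np.fill_diagonal(G, 1.0)
        w, V = np.linalg.eigh(G)                   # ascending eigenvalues
        D_new = (V[:, -m:] * np.sqrt(np.maximum(w[-m:], 0.0))).T
        P = D_new @ np.linalg.pinv(Psi)            # map rank-m dictionary back to P
    return P
```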
\ell_p-\ell_q penalty for sparse linear and multiple kernel multi-task learning by Alain Rakotomamonjy, Remi Flamary, Gilles Gasso, Stephane Canu. The abstract reads:
Recently, there has been a lot of interest in the multi-task learning (MTL) problem with the constraint that tasks should share a common sparsity profile. Such a problem can be addressed through a regularization framework where the regularizer induces a joint-sparsity pattern between task decision functions. We follow this principled framework and focus on $\ell_p - \ell_q$ (with $0 \le p \le 1$ and $1 \le q \le 2$) mixed-norms as sparsity-inducing penalties. Our motivation for addressing such a larger class of penalties is to adapt the penalty to the problem at hand, leading to better performance and a better sparsity pattern. For solving the problem in the general multiple kernel case, we first derive a variational formulation of the $\ell_1 - \ell_q$ penalty which helps us in proposing an alternate optimization algorithm. Although very simple, the latter algorithm provably converges to the global minimum of the $\ell_1 - \ell_q$ penalized problem. For the linear case, we extend existing works considering accelerated proximal gradient to this penalty. Our contribution in this context is to provide an efficient scheme for computing the $\ell_1 - \ell_q$ proximal operator. Then, for the more general case when $0 < p < 1$, we solve the resulting non-convex problem through a majorization-minimization approach. The resulting algorithm is an iterative scheme which, at each iteration, solves a weighted $\ell_1 - \ell_q$ sparse MTL problem. Empirical evidence from a toy dataset and real-world datasets dealing with BCI single-trial EEG classification and protein subcellular localization shows the benefit of the proposed approaches and algorithms.
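As a concrete data point, the q = 2 member of that family has a closed-form proximal operator, namely block soft thresholding. A small sketch of my own, not the authors' general ℓ1−ℓq scheme:

```python
import numpy as np

def prox_l1_l2(W, lam):
    """Block soft thresholding, the proximal operator of the mixed l1-l2
    norm: each row of W (one group of coefficients across tasks) has its
    l2 norm shrunk by lam, and rows below the threshold are zeroed."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return W * scale
```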
Introduction to the non-asymptotic analysis of random matrices by Roman Vershynin. The abstract reads:
This is a tutorial on some basic non-asymptotic methods and concepts in random matrix theory. The reader will learn several tools for the analysis of the extreme singular values of random matrices with independent rows or columns. Many of these methods sprung from the development of geometric functional analysis in the 1970-2000's. They have applications in several fields, most notably in theoretical computer science, statistics and signal processing. A few basic applications are covered in this text, particularly for the problem of estimating covariance matrices in statistics and for validating probabilistic constructions of measurement matrices in compressed sensing. These notes are written particularly for graduate students and beginning researchers in different areas, including functional analysts, probabilists, theoretical statisticians, electrical engineers, and theoretical computer scientists.
Roman would be most grateful for your comments and corrections! Please do send them to him by December 31, 2010.

ACCELERATED BLOCK-COORDINATE RELAXATION FOR REGULARIZED OPTIMIZATION by Stephen J. Wright. The abstract reads:
We discuss minimization of a smooth function to which is added a regularization function that induces structure in the solution. A block-coordinate relaxation approach with proximal linearized subproblems yields convergence to stationary points, while identification of the optimal manifold (under a nondegeneracy condition) allows acceleration techniques to be applied on a reduced space. The work is motivated by experience with an algorithm for regularized logistic regression, and computational results for the algorithm on problems of this type are presented.

We also found several abstracts without the full presentation or paper:

Restricted eigenvalue properties for correlated Gaussian designs by Garvesh Raskutti, Martin J.Wainwright, Bin Yu. The abstract reads:
Methods based on ℓ1-relaxation, such as basis pursuit and the Lasso, are very popular for sparse regression in high dimensions. The conditions for success of these methods are now well-understood: (1) exact recovery in the noiseless setting is possible if and only if the design matrix X satisfies the restricted nullspace property, and (2) the squared ℓ2-error of a Lasso estimate decays at the minimax optimal rate (k log p)/n, where k is the sparsity of the p-dimensional regression problem with additive Gaussian noise, whenever the design satisfies a restricted eigenvalue condition. The key issue is thus to determine when the design matrix X satisfies these desirable properties. Thus far, there have been numerous results showing that the restricted isometry property, which implies both the restricted nullspace and eigenvalue conditions, is satisfied when all entries of X are independent and identically distributed (i.i.d.), or the rows are unitary. This paper proves directly that the restricted nullspace and eigenvalue conditions hold with high probability for quite general classes of Gaussian matrices for which the predictors may be highly dependent, and hence restricted isometry conditions can be violated with high probability. In this way, our results extend the attractive theoretical guarantees on ℓ1-relaxations to a much broader class of problems than the case of completely independent or unitary designs.
Speaker: Vivek Goyal (MIT). The abstract reads:
The conventional emphases in data acquisition are the density and accuracy of measurements. Compressed sensing has reminded us of the importance of modeling and has brought sparsity- and compressibility-based models to the fore. Furthermore, the variety of computations that can be performed with the same data can give a broad trade-off between computational cost and performance.
This talk will review results on sparse signal support recovery and sparse signal estimation in a non-Bayesian setting. Full support recovery in the presence of noise is difficult, and the results for estimation are quantitatively pessimistic. The replica method, a non-rigorous but successful technique from statistical physics, is used for asymptotic analysis in a Bayesian setting. This can be applied to many estimators used in compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero norm-regularized estimation. It reduces the analysis of compressed sensing to the study of scalar estimation problems and gives results that are more encouraging for compressed sensing.

IMRT and VMAT Inverse Planning with Compressed Sensing Techniques by L Xing, L Zhu, B Meng, S Boyd. The abstract reads:

The classical Shannon-Nyquist sampling theorem specifies that to avoid losing information when capturing a signal, one must sample at least two times faster than the signal bandwidth. In many medical applications the Nyquist rate may be either impractical or too expensive to realize practically. Compressed sensing provides a practically valuable approach for finding optimal solutions with under-sampled data. In this talk I will summarize our recent work on using compressed sensing for IMRT and VMAT inverse planning. We show that effective utilization of prior knowledge of the systems through compressed sensing can greatly reduce the required number of measurement samples determined by the Shannon-Nyquist theorem and leads to significantly improved IMRT/VMAT plans. Compressed sensing has significant interactions and bearings on the fields of radiation oncology and medical imaging.



Image Credit: NASA/JPL/Space Science Institute
N00161116.jpg was taken on August 14, 2010 and received on Earth August 15, 2010. The camera was pointing toward SATURN-RINGS at approximately 190,404 kilometers away, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System in 2011.

Saturday, August 21, 2010

Yves Meyer: Compressed Sensing, Quasi-crystals, Wavelets and Noiselets

Congratulations to Yves Meyer who just received the Gauss prize at the ICM. The text of the ICM states:
....More recently, he has found a surprising connection between his early work on the model sets used to construct quasicrystals — the ‘Meyer Sets’ — and ‘compressed sensing’, a technique used for acquiring and reconstructing a signal utilizing the prior knowledge that it is sparse or compressible. Based on this he has developed a new algorithm for image processing. A version of such an algorithm has been installed in the space mission Herschel of the European Space Agency (ESA), which is aimed at providing images of the oldest and coldest stars in the universe.
By coincidence, Sarah decided to read that paper recently. It is one of the few papers that is like a UFO in the current CS literature, i.e. it uses additional information (positivity of the function to be reconstructed) to make a generic statement on the sampling and reconstruction of the function using l_1 techniques.

The connection to compressive sensing is deeper in my view. Yves Meyer was the person to whom Stéphane Mallat went to present his initial results on multiresolution and wavelets (he shows up in the acknowledgments of that seminal paper). Without compact representations of functions, we would not have grasped the concept of sparsity, and it would have been tremendously difficult to articulate the concept of compressed sensing had it been found otherwise.
In a different direction, Terry Tao has a different take on Yves Meyer's contribution, while Laurent Jacques just wrote a blog entry on Noiselets (Yves Meyer is a co-author of that paper) entitled Some comments on Noiselets. Noiselets are functions that are incoherent with Haar wavelets and can be found on Justin Romberg's website.

Finally, Laurent Duval, the host of WITS, the dictionary on wavelets, went through an anagram on the matter.

Some of Yves Meyer's books (as a writer or as an editor) can be found in this Amazon Store.

Congratulations Yves.

P.S.: By the way, I am still waiting for those compressed sensing encoding results from Herschel, but last time I checked there was not a clear connection between Yves Meyer's work and the attempt on Herschel, besides being connected to generic compressed sensing. In the Herschel case, one would use a traditional random projection, whereas the work of Yves specifically argues for having access to the Fourier transform of the function to be reconstructed.

Friday, August 20, 2010

The Nuit Blanche Reference store

I have set up an Amazon store that features a list of different subjects that have been covered in some shape or another on Nuit Blanche. For the books, the items in these categories are reference documents in support of these subject areas. They also include other types of products, but they can all, in one shape or another, be connected back to some of the content mentioned here. If you want to support the blog, please consider making purchases through the Nuit Blanche Reference store. Here is the current category list, which I look forward to expanding:

For instance, the Hardware Tinkering section includes the upcoming Nikon Coolpix S1100pj 14 MP Digital Camera,



or the Fujifilm FinePix Real 3D W3 Digital Camera, an Arduino Mega Starter Kit, netbooks and so on.

CS: Questions and Answers

In These Technologies Do Not Exist, I mentioned the possibility of performing some type of time modulation on a PMT. A commenter, David, pointed out to me that I might be mistaken and that this technology possibly already exists:
....If this was a passive detection would it not be described as an analogue lock-in amplifier?

http://en.wikipedia.org/wiki/Lock-in_amplifier

Or if active, a heterodyne or homodyne detection scheme?

http://en.wikipedia.org/wiki/Heterodyne_detection

Our community routinely uses active non-linear detection with PMTs to measure phase and amplitude of time-modulated photon density waves in biological tissues. For example:

http://www.lrsm.upenn.edu/pmi/papers/AGYgroup/XingdeLi/YYang97OptEng36.pdf

http://dx.doi.org/10.1117/1.1331561

David
I am not sure this is the same thing, but I need to check on this further. Thanks David, whoever you are :)

A while back, on the LinkedIn Compressed Sensing Group discussion, Robert Kosut asked:
Do any of you know what the largest signal sizes are that have benefited from CS? And what is the computational time associated with that? And is any of this in papers?
Emil Sidky answered with:

The application to image reconstruction in X-ray computed tomography might be the one with the largest signal sizes. I know at least that we have reconstructed images with 10^8 -10^9 voxels from ~10^7 transmission measurements. But there are a couple of points to keep in mind:
(1) It's not clear whether or not the system matrix for CT admits exact reconstruction under ideal conditions, because we're not doing random sampling.
(2) For these large systems, we do not solve the constrained, TV-minimization problem to numerical accuracy of the computer. We typically truncate the iterative algorithm at 10-20 iterations, well short of convergence. Nevertheless, the CS-style optimization problems appear to lead to effective recon. algorithms for CT with sparsely sampled view angles.

As for recon. time: with the truncated algorithms and using GPU acceleration we have achieved decent results in 10 min. for problems of the size mentioned above.

A couple of our references:
http://arxiv.org/abs/0904.4495
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2630711/
http://arxiv.org/abs/0904.1016

Emil
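For readers curious what such a truncated scheme looks like, here is a deliberately toy sketch, not the code behind Emil's references: it alternates one ART/Kaczmarz pass over the data with a few descent steps on a smoothed total variation, and stops after a small, fixed number of outer iterations, well short of convergence:

```python
import numpy as np

def truncated_art_tv(A, y, img_side, n_outer=15, tv_steps=10, tv_rate=0.05):
    """Toy truncated reconstruction: one ART/Kaczmarz sweep through the
    measurements, then a few gradient steps on a smoothed total variation,
    repeated for a small, fixed number of outer iterations."""
    x = np.zeros(A.shape[1])
    row_norms2 = np.einsum('ij,ij->i', A, A)
    for _ in range(n_outer):
        for i in range(A.shape[0]):                  # ART data-consistency sweep
            x += (y[i] - A[i] @ x) / row_norms2[i] * A[i]
        img = x.reshape(img_side, img_side)
        for _ in range(tv_steps):                    # smoothed-TV descent steps
            dx = np.diff(img, axis=0, append=img[-1:])
            dy = np.diff(img, axis=1, append=img[:, -1:])
            mag = np.sqrt(dx ** 2 + dy ** 2 + 1e-8)
            div = np.diff(dx / mag, axis=0, prepend=0) \
                + np.diff(dy / mag, axis=1, prepend=0)
            img = img + tv_rate * div                # -grad(TV) is this divergence
        x = img.ravel()
    return x.reshape(img_side, img_side)
```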

While Eric Tramel answered:

Oh, also, if you consider them as a single problem, decoding of CS encoded video might be one of the largest problems, depending on video length :P

You're looking at 4-5*10^8 pixels per minute of video (and far sub HD at that).

Hao Shen just asked a new question on the LinkedIn compressed sensing group discussions.


Thank you all for asking and answering questions publicly. It will go a long way toward making CS more mainstream. The Theoretical Computer Science Q&A will become public on Monday, outstanding! 

Thursday, August 19, 2010

A small Q&A with Rick Trebino, the inventor of FROG.

Following up on previous entries on the possibility of FROG being an instance of compressed sensing (which is still an open question: part 1, part 2), I asked Rick Trebino, the inventor of FROG, some questions, and he kindly responded to them. I initially started with:

Igor:
By the way, I stumbled on FROG about five years ago but only got to make the parallel with compressed sensing recently. If you have not had time to check compressive sensing, it really is a way of acquiring and reconstructing signals with a much lower number of samples than required by the Shannon-Nyquist theorem. This is possible because the signal is known in advance to be sparse. The main difference between FROG and CS so far seems to be that the measurement is nonlinear in FROG and that there is no stated assumption of sparsity of the underlying signal. With that in mind, here is my set of dumb questions:

1- One of the reasons I mentioned that FROG might be an instance of nonlinear compressed sensing revolves around the fact that you are reconstructing a pulse, which, while it might be complex, is alone in a "sea" of nothing. Is that accurate? Are the pulses that you are recovering signals that are "sparse" in time, i.e., is the smallest pulse really a pulse surrounded by no signal?
Rick responded with:
We don't need to assume that the pulse is alone; the sea of zeroes confirms that it is.
I then asked:
2- Have you had any instances where your reconstruction is known to fail or does not yield accurate results? Are those instances related to the laser pulse being "long", i.e., not sparse enough?
Yes. If the nonzero region of the trace is cut off, it often fails (but this seems reasonable, as key information is missing). And for very complex pulses in the presence of noise, it also can fail. When it does, disagreement between the measured and retrieved traces tells us this, and we know not to trust the result. This doesn't seem to be related to sparseness, however. There is, however, one interesting case perhaps related to sparseness, and that is the "ambiguity" of a pulse with well-separated frequencies, whose relative phases cannot be measured at all, even in the absence of noise; they have the same FROG traces, independent of the relative phases. Interestingly, alternative methods developed to measure such pulses also have this ambiguity.
Igor:
3- Once you get a FROG spectrogram, do you use all the data from the spectrogram to reconstruct the full signal? Have you ever toyed with the possibility of reconstructing the signals with a (random) subset of the elements of the spectrogram?
Yes, we always use all the data. A clever fellow named Craig Siders did develop a FROG algorithm that began by using only every eighth point. Then every fourth, etc., until the last few iterations, when he used all the data. This sped up the code considerably. But today's laptops are so fast that we haven't needed it. On the other hand, it might be worth reconsidering, as faster is always better! We have also used random subsets of datapoints to do a bootstrap computation of error bars, which worked very well.
Igor:

4- If I understand correctly, GRENOUILLE is a hardware implementation of FROG. Is somebody selling that hardware? In the affirmative, who is it?
Yes. I formed a company a few years ago (Swamp Optics, www.swampoptics.com), which has by now sold GRENOUILLEs to most ultrafast labs, and, as a result, has forced a whole generation of ultrafast scientists to use some utterly frivolous acronyms in their daily work and publications.
Rick also rightly pointed out the following:
But, in the meantime, I should mention that, at the moment and from what I understand of it (which isn't much), it seems to me that FROG may actually be the opposite of compressed sensing! The FROG trace actually contains much more data than is necessary to determine the pulse (an NxN array to determine N intensity points and N phase points). This is one of its strengths, as it's easy to take all this data in a camera trace, and, more importantly, optics devices can often be difficult to set up and align, so systematic error can occur, and the redundancy in the trace serves as a check that that isn't occurring. The sea of zeroes is even more redundancy.
To what I responded with:
I realize that you are producing more data; however, your answer to question 3 seems to indicate that you can use less, and that you can use a random subset, which would have some of the hallmarks of CS.
Thanks Rick.

Credit: NASA / JHUAPL / CIW. Earth and the Moon from much closer to the Sun. See those two bright dots? That's the double planet of Earth and the Moon. MESSENGER captured this view of its birthplace on May 6, 2010, during a search for vulcanoids, asteroids that are theorized to reside between Mercury and the Sun. No vulcanoid has yet been discovered, but searches continue.

FROG, Nonlinear Compressive Sensing ? continued.

After reading the entry on whether FROG is an instance of Nonlinear Compressed Sensing ?, Rick Trebino wrote the following in the comment section:
Recent Georgia Tech (Ph.D.) grad, John Bowlan, pointed me to your site. Thanks, Igor, for the nice summary of my work (and unsolicited plug for my book)! I'd be interested in exploring any links between FROG and your field. I'll be meeting with Justin Romberg, and we'll see what emerges. Also, I'd be happy to hear from any other of your readers (rick.trebino@physics.gatech.edu). And, by the way, we're still waiting for that proof; any ideas?

To summarize, Rick has devised a reconstruction algorithm based on analog measurements implemented on the GRENOUILLE hardware. The reconstruction algorithm is pictured below.
Even though the measurement is nonlinear, I suspect the reconstruction works because the solution is sparse (devising the shape of the smallest pulse possible). The sets on which the projections are made are not convex. The whole analysis is reminiscent of the approach of Stefano Marchesini on Ab initio Phase Retrieval Algorithms, as well as the framework of proximal splitting (see Proximal Splitting Methods in Signal Processing by Patrick L. Combettes and Jean-Christophe Pesquet)
and the other references mentioned earlier in FROG is an instance of Nonlinear Compressed Sensing ?.
