
Wednesday, March 31, 2010

CS: Random Matrix Theory, Randomized Kaczmarz Solver, Streaming compressive sampling, LASSO and Dantzig Selector


Muthu is looking for a postdoc. Today we have a mostly theoretical entry ending with a real application:

We discuss applications of some concepts of Compressed Sensing in the recent work on invertibility of random matrices due to Rudelson and the author. We sketch an argument leading to the optimal bound (N^(-1/2)) on the median of the smallest singular value of an N x N matrix with random independent entries. We highlight the parts of the argument where sparsity ideas played a key role.
The attendant presentation is here. Also from Roman Vershynin here are the slides of a presentation entitled: Random matrix theory from the functional analytic perspective
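The N^(-1/2) scaling is easy to check numerically. Here is a minimal NumPy sketch (my own illustration, not the authors' code) that estimates the median smallest singular value of an N x N Gaussian matrix and rescales it by sqrt(N); the product should hover around a constant:

    import numpy as np

    rng = np.random.default_rng(0)

    def median_smin(N, trials=50):
        # Median over trials of the smallest singular value of an N x N standard Gaussian matrix.
        vals = [np.linalg.svd(rng.standard_normal((N, N)), compute_uv=False)[-1]
                for _ in range(trials)]
        return np.median(vals)

    for N in (50, 100, 200, 400):
        print(N, median_smin(N) * np.sqrt(N))  # roughly constant if the median scales like N^(-1/2)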

The connection between the ART algorithm and the Kaczmarz algorithm made me re-read: Randomized Kaczmarz Solver for Noisy Linear Systems by Deanna Needell. The abstract reads:
The Kaczmarz method is an iterative algorithm for solving systems of linear equations Ax = b. Theoretical convergence rates for this algorithm were largely unknown until recently when work was done on a randomized version of the algorithm. It was proved that for overdetermined systems, the randomized Kaczmarz method converges with expected exponential rate, independent of the number of equations in the system. Here we analyze the case where the system Ax = b is corrupted by noise, so we consider the system Ax ≈ b + r where r is an arbitrary error vector. We prove that in this noisy version, the randomized method reaches an error threshold dependent on the matrix A with the same rate as in the error-free case. We provide examples showing our results are sharp in the general context.
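For readers who want to play with it, here is a minimal sketch of the randomized Kaczmarz iteration, with rows sampled proportionally to their squared norms as in the Strohmer-Vershynin scheme the paper builds on (my own illustration, not the paper's code):

    import numpy as np

    def randomized_kaczmarz(A, b, iters=2000, seed=0):
        # Solve an overdetermined consistent system Ax = b by random row projections.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        row_norms2 = (A * A).sum(axis=1)
        p = row_norms2 / row_norms2.sum()   # row i picked with probability ~ ||a_i||^2
        x = np.zeros(n)
        for _ in range(iters):
            i = rng.choice(m, p=p)
            # Orthogonal projection of the iterate onto the hyperplane a_i^T x = b_i.
            x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
        return x

With noisy data b + r, the iterates stall at the error threshold the abstract describes instead of converging to x.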
Also of interest: Noisy Signal Recovery via Iterative Reweighted L1-Minimization by Deanna Needell. The abstract reads:

Compressed sensing has shown that it is possible to reconstruct sparse high dimensional signals from few linear measurements. In many cases, the solution can be obtained by solving an ℓ1-minimization problem, and this method is accurate even in the presence of noise. Recently, a modified version of this method, reweighted ℓ1-minimization, has been suggested. Although no provable results have yet been attained, empirical studies have suggested the reweighted version outperforms the standard method. Here we analyze the reweighted ℓ1-minimization method in the noisy case, and provide provable results showing an improvement in the error bound over the standard bounds.
The attendant slides presenting this paper are here.
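The reweighting scheme itself fits in a few lines. Below is a hedged sketch using the Candes-Wakin-Boyd weights w_i = 1/(|x_i| + eps) and a linear-programming formulation of weighted basis pursuit via scipy (my own illustration, not Needell's code, and noiseless for simplicity):

    import numpy as np
    from scipy.optimize import linprog

    def weighted_l1(A, b, w):
        # min w^T u  s.t.  A x = b  and  -u <= x <= u, with variables z = [x; u].
        m, n = A.shape
        c = np.concatenate([np.zeros(n), w])
        A_eq = np.hstack([A, np.zeros((m, n))])
        I = np.eye(n)
        A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        return res.x[:n]

    def reweighted_l1(A, b, iters=4, eps=0.1):
        x = weighted_l1(A, b, np.ones(A.shape[1]))          # plain l1 first
        for _ in range(iters):
            x = weighted_l1(A, b, 1.0 / (np.abs(x) + eps))  # penalize small coefficients more
        return x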

Compressive sampling (CS) has emerged as a significant signal processing framework to acquire and reconstruct sparse signals at rates significantly below the Nyquist rate. However, most of the CS development to date has focused on finite-length signals and representations. In this paper we discuss a streaming CS framework and greedy reconstruction algorithm, the Streaming Greedy Pursuit (SGP), to reconstruct signals with sparse frequency content. Our proposed sampling framework and the SGP are explicitly intended for streaming applications and signals of unknown length. The measurement framework we propose is designed to be causal and implementable using existing hardware architectures. Furthermore, our reconstruction algorithm provides specific computational guarantees, which makes it appropriate for real-time system implementations. Our experimental results on very long signals demonstrate the good performance of the SGP and validate our approach.

M. Salman Asif also presented On the LASSO and Dantzig Selector Equivalence at CISS'10.


Finally, using random features to make an encrypting device, mmuhh...

In this paper, we propose a secure anti-counterfeiting solution based on the extraction of a unique identifier out of the random profile of laser marks that are engraved on the surface or in the bulk of physical objects. Given today's technology, the 3D profile of a laser mark can be considered uncloneable. Indeed, reproducing a mark would be so expensive that it thwarts any attempt to build a twin of a given mark. Actually, we rely on the resolution difference that can be achieved between the engraving, which can be seen as pretty coarse, and the much more precise reading process. The solution we propose exploits these physical features in the design of a comprehensive anti-counterfeiting system based on existing tools. Experimental results demonstrate the feasibility of using the framework for real applications.

Tuesday, March 30, 2010

CS: l_p IRLS codes and the International Conference on Computational Photography ( ICCP10 ) webcast


Angshul Majumdar sent the following:
Hi Igor

A couple of Matlab files implementing lp analysis and synthesis prior algorithms. Both are first order algorithms, unlike the IRLS. To a large extent they are based on the Majorization-Minimization approach, which I learnt from Selesnick and Figueiredo's paper.

The two codes are here:

Thanks Angshul !
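For context (this is not Angshul's code), the p = 1 special case of such a first-order majorization-minimization scheme is plain iterative soft thresholding for the synthesis prior problem min_x lambda ||x||_1 + (1/2) ||y - Ax||_2^2:

    import numpy as np

    def ista(A, y, lam, iters=200):
        # MM step: majorize the data term with a separable surrogate (constant L >= ||A||_2^2),
        # whose exact minimizer is one soft-thresholding step.
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            z = x + A.T @ (y - A @ x) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return x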

Ramesh tweeted that I should point to the International Conference on Computational Photography (ICCP10) webcast. I did, but in case the last entry was too long, here it is:


The program for the second day shows some interesting talks. The program starts at 9:15 AM Boston time (EST); convert to your time zone accordingly.

While you are waiting for the video feed to come up, you may want to read Ramesh's presentation on How to come up with new Ideas.



I like his style; he is not talking about compressed sensing directly, but one day he'll come over to our side of the Force :)

Credit photo: NASA / JPL / panorama by Emily Lakdawalla, Opportunity at the twin craters

Monday, March 29, 2010

Random Signs of Time

As I was watching Ingrid Daubechies' talk given at UCL (in Belgium) on Independent component analysis and functional Magnetic Resonance Imaging (where she highlighted the connection between ICA algorithms and sparsity seeking algorithms), I was taken aback by the fact that she had to make a reference to CSI (31 minutes and 30 seconds into the talk): yes, the same CSI that I referenced in Why Compressed Sensing is NOT a CSI "Enhance" technology ... yet ! and Surely You Must Be Joking Mr. Screenwriter. But I guess it was brought up for a good reason. The CSI effect really has a dark side to it, so it is important that we keep making sure that what is feasible, and what is not, is clearly understood by the large part of the population that is technically educated but is probably not up to date on the convergence bounds of the latest proximal algorithms. Case in point, I just saw this visitor on my log (please note the webpage that person read).



Ever since the blog has become a little more popular, I have gotten many "opportunities", including invitations to provide my name and address because I won the British Lottery, but also requests to provide reviews or to be the editor of some new journals focused on compressive sensing. I am of two minds on the latter offers: on the one hand, I believe that the peer review process is good for weeding out the truly awful papers. On the other hand, I cannot say I have published in this area, nor that it is a good use of my time. I also believe that starting a journal solely focused on compressed sensing is the beginning of a long slide toward irrelevancy. I am not tenure track and never have been, hence the road to funding is, as some of you know, a little overwhelming and justifies why I am very stingy on the time thingy. Finally, the blog is meant to be inspirational. The hard work is mostly done by you, the readers.

One could say that this is an easy way out, but I stand to lose a whole lot more being ridiculed in public by you people for mis-stating something you wrote than by anonymously rejecting your paper. Furthermore, the style of my reviews would certainly compromise my anonymity. Pierre Vandergheynst has another take in his twitter stream; he furthered his thought in the following tweet. His sentiment echoes one of my recent op-eds.

It is all the more exciting to see that some of you have decided to link to this blog for your courses or your book. The recent referrals I have seen in my log include:

Thanks guys !


After having watched Pierre Vandergheynst's presentation, I inquired about the need for his team to make a toolbox available so that one could use their CS imaging chip remotely at first and perform trade studies more efficiently. Pierre tweeted back:

it will be toolbox season at EPFL as we shoot at making our research more reproducible.

Outstanding! Not only will their research be more reproducible, I also believe it will give them an edge, as it will provide design engineers in start-ups or large companies a way to evaluate whether this is something for them. One way of disseminating this type of "API" (besides advertising it on the Compressive Sensing Hardware page) is to reach out to the 360 people who have joined the Compressive Sensing Study Group on LinkedIn. Let me know if you also want to be added to the Twitter list of folks that have an interest in Compressed Sensing.

In a different direction, Dick Lipton just wrote about how the fast matrix product was discovered:
...One day he suddenly realized, if there was a way to multiply two by two matrices with {7} multiplications rather than {8}, there would be a sub-cubic recursive method for {n} by {n}. He immediately sat down and quickly discovered his now famous formulas. At least this is the story....

I like that story, mostly because sometimes the naive way of doing things is not the most efficient. What if there were a fast matrix product equivalent to MMA17?

The talk I mentioned at the beginning of this entry was Independent component analysis and functional Magnetic Resonance Imaging by Ingrid Daubechies . The abstract of the talk reads:
Independent Component Analysis (ICA), a method for separating a mixture of different components into its constituents, has been proposed for a variety of different applications, including functional Magnetic Resonance Imaging (fMRI) of brain processes. The presentation summarizes the findings of several years of interaction between applied mathematicians and neuroscientists, expert in fMRI, concentrating on probing ICA methods for brain fMRI. This study raised questions, informed by mathematical considerations, that are investigated using numerical simulations and specially designed fMRI experiments. The intent was not to cast doubt on the successes of ICA methods for fMRI data analysis, but rather to understand the elements that determine the methods' success; this led us to a surprising result.

Bob Sturm has a similar question in acoustics.

In other news, the ONR funded Moeness Amin and Bijan Mobasseri at Villanova with $355,000 for compressive sensing projects. From the press release, I note this mind-blowing statement:
Other venues include covert communications using biologically-generated emissions from marine mammals as cover signals,
This is not Imaging With Nature anymore, but rather Communicating With Nature :)

Marta Betcke at UCL (U.K.) is leading a grant on compressive sensing CT scanning for security applications.

The IEEE International Conference on Computational Photography (ICCP) is taking place right now at the MIT Media Lab. They have a twitter stream and a posterous site. They also have a live webcast channel. Ramesh Raskar continues to advertise for people to join his group:

We are actively looking for Graduate Students, MEng, PostDocs and UROPs starting Spring and Fall 2010. Please see here if you are interested.


Finally, a call for Green Computing that includes an interest in Compressed Sensing.

Saturday, March 27, 2010

CS: A new question, Solving A Low-Rank Factorization Model for Matrix Completion by A Nonlinear Successive Over-Relaxation Algorithm, CS imaging

Following up on the interesting question in A Puzzle of No Return?, Bob Sturm has a part 2 to that question.


The matrix completion problem is to recover a low-rank matrix from a subset of its entries. The main solution strategy for this problem has been based on nuclear-norm minimization which requires computing singular value decompositions -- a task that is increasingly costly as matrix sizes and ranks increase. To improve the capacity of solving large-scale problems, we propose a low-rank factorization model and construct a nonlinear successive over-relaxation (SOR) algorithm that only requires solving a linear least squares problem per iteration. Convergence of this nonlinear SOR algorithm is analyzed. Numerical results show that the algorithm can reliably solve a wide range of problems at a speed at least several times faster than many nuclear-norm minimization algorithms.
Thanks Yin!
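The omega = 1 (plain nonlinear Gauss-Seidel) special case of the scheme is easy to prototype; here is a minimal sketch of that alternating least squares iteration (mine, not the authors' LMaFit code):

    import numpy as np

    def lowrank_complete(M, mask, r, iters=200, seed=0):
        # Model M ~ X @ Y with X (m x r) and Y (r x n); mask is True on observed entries.
        rng = np.random.default_rng(seed)
        Y = rng.standard_normal((r, M.shape[1]))
        Z = np.where(mask, M, 0.0)       # full-matrix estimate with known entries imposed
        for _ in range(iters):
            X = Z @ np.linalg.pinv(Y)    # linear least squares update of X
            Y = np.linalg.pinv(X) @ Z    # linear least squares update of Y
            Z = X @ Y
            Z[mask] = M[mask]            # re-impose the observed entries
        return X @ Y

The SOR variant of the paper adds an over-relaxation weight omega > 1 to accelerate exactly this kind of iteration.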



In a previous work we presented a compressed imaging approach that uses a row of rotating sensors to capture indirectly polar strips of the Fourier transform of the image. Here we present further developments of this technique and present new results. The advantages of our technique, compared to other optically compressed imaging techniques, are that its optical implementation is relatively easy, it does not require complicated calibrations, and it can be implemented in near-real time.
They even quote Nuit Blanche, woohoo!

Wednesday, March 24, 2010

AMD contest entry.

Folks, the preceding entry is meant to be a response to the AMD contest on What Would You Do with 48 Cores ?

At the very least this is a good exercise in shortening my explanation of why the Donoho-Tanner phase transition is important to less than 500 characters :)

What Would You Do With 48 Cores ?


Map a new frontier


Our everyday life now witnesses a constant stream of large amounts of data (stock markets, medical diagnoses, social networks, geolocation...), a situation that is new to our collective wisdom and is becoming overwhelming to our processing power. This new world of increasing complexity requires us to devise ever more capable processing hardware in order to make sense of this constant "deluge" of data. A recent subsampling technique called compressed sensing, devised on theoretical grounds only 6 years ago, aims to do just that by changing the way we acquire data in the first place. I propose to use 48 cores to map the phase transition between what is and what is not feasible by this new kind of compressed sensing hardware.

I write a small blog on compressed sensing, a topic recently featured in Wired Magazine. In short, Compressed Sensing enables a new type of signal processing based on the often verified assumption that the underlying signal of interest (an image, a movie …) is sparse in some fashion or another. In this new field of investigation, David Donoho and Jared Tanner computed the following groundbreaking phase transition effect (see abstract of this article):

We review connections between phase transitions in high-dimensional combinatorial geometry and phase transitions occurring in modern high-dimensional data analysis and signal processing. In data analysis, such transitions arise as abrupt breakdown of linear model selection, robust data fitting or compressed sensing reconstructions, when the complexity of the model or the number of outliers increases beyond a threshold... These thresholds are important in each subject area: for linear modelling, they place hard limits on the degree to which the now-ubiquitous high-throughput data analysis can be successful; for robustness, they place hard limits on the degree to which standard robust fitting methods can tolerate outliers before breaking down; for compressed sensing, they define the sharp boundary of the undersampling/sparsity tradeoff curve in undersampling theorems.... We conducted an extensive computational experiment ... to test the hypothesis that these phase transitions are universal across a range of underlying matrix ensembles. We ran millions of linear programs ...; to the naked eye, the empirical phase transitions do not depend on the ensemble, and they agree extremely well with the asymptotic theory .....


Compressed sensing implementations in hardware sometimes require a complete shift in the way we think of sensors. For instance, one could conceivably use Nature as a means of imaging or multiplexing data in a framework amenable to compressed sensing. The fact is we are just at the beginning of a new revolution in sensors development.

For this reason, we are interested in using a 48-core system to map the phase transition for specific hardware implementations or for random systems as found in nature. The computation of this transition is intrinsically parallel. While initially the system will be run locally, we could expand its use to the rest of the compressed sensing community through a simple GUI.
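To make the proposal concrete, here is the kind of embarrassingly parallel experiment that mapping the phase transition involves; each (delta, rho) grid cell is an independent job for one core. This is my own sketch, with orthogonal matching pursuit standing in for the linear-programming decoder to keep it short:

    import numpy as np

    def omp(A, y, k):
        # Orthogonal Matching Pursuit: greedily pick k columns, refit by least squares.
        r, idx = y.copy(), []
        for _ in range(k):
            idx.append(int(np.argmax(np.abs(A.T @ r))))
            coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
            r = y - A[:, idx] @ coef
        x = np.zeros(A.shape[1])
        x[idx] = coef
        return x

    def success_rate(N, delta, rho, trials=20, seed=0):
        # Empirical recovery probability at one point of the (delta, rho) plane.
        rng = np.random.default_rng(seed)
        n = max(1, int(delta * N))
        k = max(1, int(rho * n))
        hits = 0
        for _ in range(trials):
            A = rng.standard_normal((n, N)) / np.sqrt(n)
            x = np.zeros(N)
            x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
            xhat = omp(A, A @ x, k)
            hits += np.linalg.norm(xhat - x) <= 1e-6 * np.linalg.norm(x)
        return hits / trials   # map this over a grid of (delta, rho), one cell per core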


Tuesday, March 23, 2010

CS: A long entry


Craig Stuntz asks What is Homomorphic Encryption, and Why Should I Care? We have seen homomorphic transformation before and how it parallels compressed sensing.

Bob Sturm recently joined the University of Aalborg in Denmark and he and others in his group are writing a blog on issues related to dictionary building in sound. The blog is here.

For some odd reason, I have not kept track of the presentations made by Allen Yang in the past year; here is an interesting list:
  • CVPR Tutorial, 2009: Sparse Representation and Its Applications in Computer Vision.
    • Introduction: Face Recognition [slides]
    • Compressive Sensing Theory [slides]
    • Applications: Distributed Sensing and Perception [slides]
    • Robust PCA [slides]
  • Spring, 2009: High-Dimensional Multi-Model Estimation -- A Mini Course. [slides]
  • Spring, 2008: Compressed Sensing Meets Machine Learning -- A Mini Course. [lecture 1, lecture 2, lecture 3]

The Restricted Isometry Constants (RIC) of a matrix A measure how close to an isometry the action of A is on vectors with few nonzero entries, measured in the l2 norm. Specifically, the upper and lower RIC of a matrix A of size n × N are the maximum and the minimum deviation from unity (one) of the largest and smallest, respectively, square of singular values of all (N choose k) matrices formed by taking k columns from A. Calculation of the RIC is intractable for most matrices due to its combinatorial nature; however, many random matrices typically have bounded RIC in some range of problem sizes (k, n, N). We provide the best known bound on the RIC for Gaussian matrices, which is also the smallest known bound on the RIC for any large rectangular matrix. Improvements over prior bounds are achieved by exploiting the similarity of singular values for matrices which share a substantial number of columns.

The site where one can input δ and ρ to get the upper and lower bounds on the restricted isometry constant is at: http://ecos.maths.ed.ac.uk/ric_bounds.shtml. Also of related interest we have:

A generic tool for analyzing sparse approximation algorithms is the restricted isometry property (RIP) introduced by Candes and Tao. For qualitative comparison of sufficient conditions derived from an RIP analysis, the support size of the RIP constants is generally reduced as much as possible with the goal of achieving a support size of twice the sparsity of the target signal. Using a quantitative comparison via phase transitions for Gaussian measurement matrices, three examples from the literature of such support size reduction are investigated. In each case, utilizing a larger support size for the RIP constants results in a sufficient condition for exact sparse recovery satisfied, with high probability, by a significantly larger subset of Gaussian matrices.
My webcrawler also found the following papers on the interweb. First, the slides for a paper I mentioned before: Turbo Reconstruction of Structured Sparse Signals by Phil Schniter. The abstract reads:
This paper considers the reconstruction of structured-sparse signals from noisy linear observations. In particular, the support of the signal coefficients is parameterized by a hidden binary pattern, and a structured probabilistic prior (e.g., Markov random chain/field/tree) is assumed on the pattern. Exact inference is discussed and an approximate inference scheme, based on loopy belief propagation (BP), is proposed. The proposed scheme iterates between exploitation of the observation-structure and exploitation of the pattern-structure, and is closely related to noncoherent turbo equalization, as used in digital communication receivers. An algorithm that exploits the observation structure is then detailed based on approximate message passing ideas. The application of EXIT charts is discussed, and empirical phase transition plots are calculated for Markov-chain structured sparsity.

Here is a paper that reminded me of the presentation of David Field at MIA'09 (specifically this paper: Chandler DM, and Field DJ. (2007). "Estimates of the Information Content and Dimensionality of Natural Scenes From Proximity Distributions." Journal of the Optical Society of America A, Vol. 24, Issue 4, pp. 922-941. [PDF]): Optimized intrinsic dimension estimation using nearest neighbor graphs by Kumar Sricharan, Raviv Raich, Alfred O. Hero III. The abstract reads:
We develop an approach to intrinsic dimension estimation based on k-nearest neighbor (kNN) distances. The dimension estimator is derived using a general theory on functionals of kNN density estimates. This enables us to predict the performance of the dimension estimation algorithm. In addition, it allows for optimization of free parameters in the algorithm. We validate our theory through simulations and compare our estimator to previous kNN based dimensionality estimation approaches.
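A much simpler relative of such kNN-distance estimators, the Levina-Bickel maximum-likelihood estimator (not the method of this paper), gives the flavor in a few lines:

    import numpy as np

    def intrinsic_dim_mle(X, k=10):
        # Levina-Bickel MLE from the k nearest-neighbor distances T_1 <= ... <= T_k of each point.
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        D.sort(axis=1)
        T = D[:, 1:k + 1]                      # drop the zero self-distance
        logs = np.log(T[:, -1:] / T[:, :-1])   # log(T_k / T_j) for j = 1..k-1
        return ((k - 1) / logs.sum(axis=1)).mean()

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 10))  # a 2-D plane in R^10
    print(intrinsic_dim_mle(X))   # close to 2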

Regularization is a principled way to control model complexity, prevent overfitting, and incorporate ancillary information into the learning process. As a convex relaxation of ℓ0-norm, ℓ1-norm regularization is popular for learning in high-dimensional spaces, where a fundamental assumption is the sparsity of model parameters. However, model sparsity can be restrictive and not necessarily the most appropriate assumption in many problem domains. In this paper, we relax the sparsity assumption to compressibility and propose learning compressible models: a compression operation can be included into ℓ1-regularization and thus model parameters are compressed before being penalized. We concentrate on the design of different model compression transforms, which can encode various assumptions on model parameters, e.g., local smoothness, frequency-domain energy compaction, and correlation. Use of a compression transform inside the ℓ1 penalty term provides an opportunity to include information from domain knowledge, coding theories, unlabeled data, etc. We conduct extensive experiments on brain-computer interface, handwritten character recognition, and text classification. Empirical results show significant improvements in prediction performance by learning compressible models instead of sparse models. We also analyze the model fitting and learned model coefficients under different compressibility assumptions, which demonstrate the advantages of learning compressible models instead of sparse models.

Compressed Detection via Manifold Learning by Hyun Jeong Cho, Kuang-Hung Liu, Jae Young Park. The introduction of the paper reads:
In many imaging applications such as Computed Tomography (CT) in medical imaging and Synthetic Aperture Radar (SAR) imaging, the collected raw data, residing in R^M, of the receiver or detector can be modeled as data lying in the Fourier space of the target reflectivity function. The magnitude of the reflectivity is considered as the image of the target, and we are often interested in detecting specific features in the image, e.g. a tumor in medical imaging and military weapons in SAR imaging. A natural way to achieve this goal is to form an image in R^N, where N > M, and detect the feature in the spatial domain. Recent development of a theory in [1] states that under certain conditions, a random linear projection Φ : R^N → R^L guarantees, with high probability, that all pairwise Euclidean and geodesic distances between points on M ⊂ R^N are well preserved under the mapping. This made us wonder if the geodesic distances of the original image, in R^N, and the corresponding raw data, in R^M, in SAR imaging were also reasonably preserved. Motivated by satisfactory results of simple tests to check this, which will be discussed in more detail later, we tried to detect features directly from the lower dimensional, or compressed, SAR raw data, without involving any image reconstruction step. In this report, manifold learning techniques are applied to reduce the dimension of the raw data, followed by a detection step to achieve our goal. The theoretical framework will be discussed later in more detail. To the best of our knowledge, there has not yet been a successful image detection or classification algorithm that takes advantage of this framework and works directly in the domain where the raw data resides. Since existing algorithms work in the spatial domain, the algorithms first have to transform the raw data to the spatial domain. This transformation step could be time consuming, and in general results in loss of information. Also, working with M-dimensional data will be computationally less demanding. Thus, it is interesting to look at the problem of working directly on the raw data. The rest of the report is structured as follows. First, technical background such as the theoretical framework mentioned above, the Laplacian Eigenmaps as a method of dimensionality reduction, and a semi-supervised learning algorithm will be discussed. Carrying on, experimental results will be presented. Finally, we will conclude this report with a discussion on the challenges and future work.



Image Credit: NASA/JPL/Space Science Institute, Titan as seen on March 16, 2010 from Cassini.

Sunday, March 21, 2010

A different look on Compressed Sensing

The Wired piece on Compressed Sensing has generated a different kind of interest in Compressed Sensing:
For those folks and others, the following might be of interest:

Saturday, March 20, 2010

CS: ICASSP CS day, related papers, a thesis and a video.


Here are probably the last blog entries on ICASSP. Thursday was mostly about Compressed Sensing and I gained some insight from these blog entries:
I learned things. Thank you Eric and Gonzalo for covering ICASSP. I also found some papers related to these blog entries; here they are (see below). There is also a video at the end of this entry, so don't skip it thinking it is just a series of papers.

Empirical Quantization for Sparse Sampling Systems by Michael A. Lexa. The abstract reads:
We propose a quantization design technique (estimator) suitable for new compressed sensing sampling systems whose ultimate goal is classification or detection. The design is based on empirical divergence maximization, an approach akin to the well-known technique of empirical risk minimization. We show that the estimator’s rate of convergence to the “best in class” estimate can be as fast as n^−1, where n equals the number of training samples.
Two papers that look similar:
Concentration of Measure for Block Diagonal Measurement Matrices by Michael Wakin, Jae Young Park, Han Lun Yap, and Christopher J. Rozell. The abstract reads:
Concentration of measure inequalities are at the heart of much theoretical analysis of randomized compressive operators. Though commonly studied for dense matrices, in this paper we derive a concentration of measure bound for block diagonal matrices where the nonzero entries along the main diagonal blocks are i.i.d. subgaussian random variables. Our main result states that the concentration exponent, in the best case, scales as that for a fully dense matrix. We also identify the role that the energy distribution of the signal plays in distinguishing the best case from the worst. We illustrate these phenomena with a series of experiments.
and Concentration of Measure for Block Diagonal Matrices with Repeated Blocks by Christopher J. Rozell, Han Lun Yap, Jae Young Park, Michael Wakin. The abstract:

The theoretical analysis of randomized compressive operators often relies on the existence of a concentration of measure inequality for the operator of interest. Though commonly studied for unstructured, dense matrices, matrices with more structure are often of interest because they model constraints on the sensing system or allow more efficient system implementations. In this paper we derive a concentration of measure bound for block diagonal matrices where the nonzero entries along the main diagonal are a single repeated block of i.i.d. Gaussian random variables. Our main result states that the concentration exponent, in the best case, scales as that for a fully dense matrix. We also identify the role that the signal diversity plays in distinguishing the best and worst cases. Finally, we illustrate these phenomena with a series of experiments.
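A quick experiment makes the role of the energy distribution visible (my own sketch, with made-up sizes): draw block diagonal Gaussian matrices and compare how ||Phi x||^2 fluctuates for a signal spread across all blocks versus one living in a single block.

    import numpy as np

    rng = np.random.default_rng(0)
    J, n, N = 16, 8, 32                      # J diagonal blocks, each n x N (hypothetical sizes)

    def block_diag_gaussian():
        Phi = np.zeros((J * n, J * N))
        for j in range(J):
            Phi[j*n:(j+1)*n, j*N:(j+1)*N] = rng.standard_normal((n, N)) / np.sqrt(n)
        return Phi

    def std_of_energy(x, trials=500):
        # Spread of ||Phi x||^2 over random draws of Phi (its mean is ||x||^2 = 1 here).
        return np.std([np.linalg.norm(block_diag_gaussian() @ x) ** 2 for _ in range(trials)])

    x_flat = np.ones(J * N) / np.sqrt(J * N)               # energy spread over all blocks
    x_peak = np.zeros(J * N); x_peak[:N] = 1 / np.sqrt(N)  # energy in a single block
    print(std_of_energy(x_flat), std_of_energy(x_peak))    # the spread signal concentrates tighter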
and a thesis entitled: New Sampling and Detection Approaches for Compressed Sensing and their Application to Ultra Wideband Communications by Zhongmin Wang. The abstract reads:
Compressed sensing (CS) provides an efficient way to acquire and reconstruct sparse signals from a limited number of linear projection measurements leading to sub-Nyquist sampling rates. The advantages of compressed sensing include simpler hardware design, faster acquisition time, and less power consumption. In this thesis, several important applications of compressed sensing are addressed and better performance than that of existing solutions is obtained by exploiting the theory of compressed sensing. Firstly, we focus on designing efficient sampling methods for image acquisition based on CS. A key to the success of CS is the design of the measurement ensemble. A novel variable density sampling strategy is designed, where the a priori information of the statistical distributions that natural images exhibit in the wavelet domain is exploited. The proposed variable density sampling has the following advantages: 1) the generation of the measurement ensemble is computationally efficient and requires less memory; 2) the necessary number of measurements for image reconstruction is reduced; 3) the proposed sampling method can be applied to several transform domains and leads to simple implementations. The application of our proposed method to magnetic resonance imaging (MRI) is also provided in this thesis. Secondly, we address the detection of sparse signals within the CS domain. A new family of detectors called subspace compressive detectors are developed for the detection of sparse signals based on the theory of compressed sensing. The proposed detectors reduce the number of measurements needed for a given detection performance by exploiting the fact that the sparse signal resides in a low dimension subspace. By designing random projection operators tailored to the subspace where the signal-of-interest lies, the signal energy can be captured more efficiently leading to better detection performance. The information of the signal subspace can be learned from compressive measurements of training signals and the detectors are adaptive to the signal structure. Within the compressed sensing framework, it is shown that very limited random measurements of training signals can suffice to provide valuable information of the signal subspace for detection purposes. The performance of the proposed subspace compressive detectors is analyzed and implementation issues including the waveform quantization are discussed. Subspace compressive detection under narrowband interference is also considered in this thesis.
In the last part of this dissertation, the theory of compressed sensing is exploited in the design of a new type of suboptimal impulse ultra-wideband (I-UWB) receivers where only sub-Nyquist sampling of the received UWB signal is required. However, the proposed I-UWB receivers have simple hardware implementations and, at the same time, share the flexibility in data processing of full-resolution digital receivers based on Nyquist sampling. An improved symbol detection method is proposed for I-UWB communications by exploiting the sparsity of the received UWB signals, where the sparsity is mainly due to the multipath diversity introduced by I-UWB channels. A compressive pilot assisted time-hopping spread-spectrum signaling is introduced and performance analysis of the proposed receivers is provided. Compared with other suboptimal I-UWB receivers, satisfactory detection performance is achieved with simple hardware implementation.

Finally, we have a video of Andrea Montanari on Iterative Algorithms. The abstract of the talk:
The problem of estimating a high dimensional vector from a set of linear observations arises in a number of engineering disciplines. It becomes particularly challenging when the underlying signal has some non-linear structure that needs to be exploited. I will present a new class of iterative algorithms inspired by probabilistic graphical models ideas, that appear to be asymptotically optimal in specific contexts. I will discuss in particular the application to compressed sensing problems. [Joint work with David L. Donoho and Arian Maleki]

Credit: NASA / JPL / SSI / color composite by Gordan Ugarkovic, Titan's ring via the planetary society blog.

Friday, March 19, 2010

CS: The other long entry of the week. DT Phase transition, Finding pulsars in a haystack.


Echoing these entries on Precise Undersampling Theorems and the Donoho-Tanner border crossing, Jared Tanner mentioned to me that he has made web forms for these phase transitions and the bounds on restricted isometry constants available on the Compressed Sensing group page at the University of Edinburgh. From the Restricted Isometry Constants page, the description starts with:
Many of the compressed sensing recovery algorithms (decoders) are guaranteed to recover the sparsest solution to an underdetermined system of equations y = Ax provided the matrix A (encoder) acts sufficiently close to an isometry on sparse vectors. Let
χ^N(k) := {x ∈ R^N : ||x||_{l0} ≤ k},
where ||x||_{l0} counts the number of nonzeros in x.

Let A be of size n × N with n < N. The Lower and Upper Restricted Isometry Constants (RICs) of A are defined as:
L(k, n, N; A) := min c ≥ 0 subject to (1 − c) ||x||_2^2 ≤ ||Ax||_2^2 for all x ∈ χ^N(k)
U(k, n, N; A) := min c ≥ 0 subject to (1 + c) ||x||_2^2 ≥ ||Ax||_2^2 for all x ∈ χ^N(k)
Gaussian matrices have the smallest known bound, with the tightest current bound provided in Compressed Sensing: How sharp is the RIP?, by Jeffrey D. Blanchard, Coralia Cartis, and Jared Tanner.

Update: Improved bounds for Gaussian matrices were recently published in Improved Restricted Isometry Constant Bounds for Gaussian Matrices by Bubacarr Bah and Jared Tanner. The below webforms will be updated to include these improved bounds soon.

Let ρ_n := k/n and δ_n := n/N be the compressed sensing oversampling and undersampling ratios, respectively. Fix ε > 0; as n → ∞ with ρ_n → ρ ∈ (0,1) and δ_n → δ ∈ (0,1),

Prob(L(k, n, N; A) < L(δ, ρ) + ε) → 1 and Prob(U(k, n, N; A) < U(δ, ρ) + ε) → 1.

Below are plots of these bounds, and functions which will calculate these bounds for your choice of ρ and δ.
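The combinatorial nature of the definition is easy to appreciate in code: for toy sizes one can compute L and U exactly by enumerating all (N choose k) column submatrices. A sketch of the definition itself (obviously not how the bounds above are derived):

    import numpy as np
    from itertools import combinations

    def ric(A, k):
        # Exact lower/upper RIC by brute force over all k-column submatrices of A.
        N = A.shape[1]
        smin2, smax2 = np.inf, 0.0
        for S in combinations(range(N), k):
            s = np.linalg.svd(A[:, list(S)], compute_uv=False)
            smin2 = min(smin2, s[-1] ** 2)
            smax2 = max(smax2, s[0] ** 2)
        return 1 - smin2, smax2 - 1    # L(k, n, N; A), U(k, n, N; A)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 40)) / np.sqrt(20)
    print(ric(A, 2))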

and Phase Transitions of the Regular Polytopes and Cone, the description starts with:

Some sparse approximation questions can be modelled exactly through convex polytopes. When this is possible, their analysis can provide Precise Undersampling Theorems (by Donoho and Tanner). These results are cast in terms of the:

Undersampling ratio: δ = n/N,

Oversampling ratio: ρ = k/n

where k is the sparsity of the vector, n is the number of measurements and N is the length of the vector measured.

This form interpolates the values of the strong and weak phase transitions of the orthant and the three regular polytopes: the simplex, crosspolytope, and hypercube. For ρ below a strong phase transition, the phenomenon holds for all k-sparse vectors; below a weak phase transition, it holds for most k-sparse vectors. The crosspolytope models l1-regularization, the simplex models l1-regularization with nonnegativity (or other sparsity constraints) and uniqueness of nonnegative vectors when the average of the signal is also measured, the orthant models uniqueness of nonnegative vectors from mean-zero measurement matrices, and the hypercube models bound constraints, where k is the number of entries in the vector which are not at the bounds.
Once you click on these pages, you can input your parameters and estimate the bounds on the RIC or the phase transitions. Jared has also made some .mat files available for the transition page. Thank you, Jared.

In a different direction, Gonzalo Vazquez Vilar's SpectralHole, a blog focused on Cognitive Radio has a new entry on ICASSP Game Theory.

While reading Andrew's blog, I came across the following description of how to find a pulsar. It looks like there are a lot of multiplexing and de-multiplexing operations in this process. The amount of data generated by the survey is so large that it takes a long time to find interesting pulsars drowning in 134 TB of data. So the question becomes: could a series of random projections of these signals and a smashed filter approach speed up this quest tremendously? Or could a sparse FFT help in reducing the load of performing these giant FFTs in the first place? Time will tell.
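The smashed filter idea is simple to sketch: correlate compressive measurements of the data against compressive measurements of the template, instead of matched-filtering at full resolution. A toy version (mine, purely illustrative) for locating a known pulse at an unknown shift:

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 4096, 128
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)     # one fixed random projection

    template = np.exp(-0.5 * ((np.arange(N) - N // 2) / 5.0) ** 2)   # known pulse shape
    true_shift = 1234
    signal = np.roll(template, true_shift - N // 2) + 0.1 * rng.standard_normal(N)

    y = Phi @ signal                                   # keep only M numbers, not N
    # Smashed filter: test candidate shifts directly in the compressed domain.
    shifts = list(range(0, N, 8))
    scores = [y @ (Phi @ np.roll(template, s - N // 2)) for s in shifts]
    print(shifts[int(np.argmax(scores))])              # close to true_shift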

This week, I have been accumulating a series of papers related to compressive sensing in some shape or another; here we go:

Universal Sparse Modeling by Ignacio Ramirez and Guillermo Sapiro. The abstract reads:
Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. In this work, we use tools from information theory, and in particular universal coding theory, to propose a framework for designing sparsity regularization terms which have several theoretical and practical advantages when compared to the more standard l0 or l1 ones, and which lead to improved coding performance and accuracy in reconstruction and classification tasks. We also report on further improvements obtained by imposing low mutual coherence and Gram matrix norm on the corresponding learned dictionaries. The presentation of the framework and theoretical foundations is complemented with examples in image denoising and classification.



Low Rate Sampling of Pulse Streams with Application to Ultrasound Imaging by Ronen Tur, Yonina C. Eldar, Zvi Friedman. The abstract reads:
Signals comprised of a stream of short pulses appear in many applications including bio-imaging, radar, and ultrawideband communication. Recently, a new framework, referred to as finite rate of innovation, has paved the way to low rate sampling of such pulses by exploiting the fact that only a small number of parameters per unit time are needed to fully describe these signals. Unfortunately, for high rates of innovation, existing approaches are numerically unstable. In this paper we propose a general sampling approach which leads to stable recovery even in the presence of many pulses. We begin by deriving a condition on the sampling kernel which allows perfect reconstruction of periodic streams of pulses from a minimal number of samples. This extends previous work which assumes that the sampling kernel is an ideal low-pass filter. A compactly supported class of filters, satisfying the mathematical condition, is then introduced, leading to a sampling framework based on compactly supported kernels. We then extend our periodic solution to finite and infinite streams, and show that our method is numerically stable even for a large number of pulses per unit time. High noise robustness is demonstrated as well when the time delays are sufficiently separated. Finally, we apply our results to ultrasound imaging data, and show that our techniques result in substantial rate reduction with respect to traditional ultrasound sampling schemes.




Fishing in Poisson streams: focusing on the whales, ignoring the minnows by Maxim Raginsky, Sina Jafarpour, Rebecca Willett, Robert Calderbank. The abstract reads:
This paper describes a low-complexity approach for reconstructing average packet arrival rates and instantaneous packet counts at a router in a communication network, where the arrivals of packets in each flow follow a Poisson process. Assuming that the rate vector of this Poisson process is sparse or approximately sparse, the goal is to maintain a compressed summary of the process sample paths using a small number of counters, such that at any time it is possible to reconstruct both the total number of packets in each flow and the underlying rate vector. We show that these tasks can be accomplished efficiently and accurately using compressed sensing with expander graphs. In particular, the compressive counts are a linear transformation of the underlying counting process by the adjacency matrix of an unbalanced expander. Such a matrix is binary and sparse, which allows for efficient incrementing when new packets arrive. We describe, analyze, and compare two methods that can be used to estimate both the current vector of total packet counts and the underlying vector of arrival rates.
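The per-packet update the abstract alludes to is tiny; here is a toy sketch (mine) with a random sparse binary matrix standing in for a true expander adjacency matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    N, m, d = 10_000, 400, 8      # flows, counters, ones per column (hypothetical sizes)

    # Column j of the measurement matrix has d ones: flow j touches d of the m counters.
    cols = np.array([rng.choice(m, d, replace=False) for _ in range(N)])
    counters = np.zeros(m, dtype=np.int64)

    def packet_arrival(flow):
        # O(d) work per packet: add column `flow` to the counter vector.
        counters[cols[flow]] += 1

    for f in (3, 3, 42, 3, 7):    # a few arrivals
        packet_arrival(f)
    # Recovering all N per-flow counts from the m counters is the sparse-recovery
    # step that the paper performs with expander-based decoding.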

Robust video denoising using low rank matrix completion by Hui Ji, Chaoqiang Liu, Zuowei Shen and Yuhong Xu. The abstract reads:
Most existing video denoising algorithms assume a single statistical model of image noise, e.g. additive Gaussian white noise, which often is violated in practice. In this paper, we present a new patch-based video denoising algorithm capable of removing serious mixed noise from the video data. By grouping similar patches in both spatial and temporal domain, we formulate the problem of removing mixed noise as a low-rank matrix completion problem, which leads to a denoising scheme without strong assumptions on the statistical properties of noise. The resulting nuclear norm related minimization problem can be efficiently solved by many recently developed methods. The robustness and effectiveness of our proposed denoising algorithm on removing mixed noise, e.g. heavy Gaussian noise mixed with impulsive noise, is validated in the experiments and our proposed approach compares favorably against a few state-of-the-art denoising algorithms.

An Efficient Numerical Method for General Lp Regularization in Fluorescence Molecular Tomography by Jean-Charles Baritaux, Kai Hassler, Michael Unser. The abstract reads:
Reconstruction algorithms for fluorescence tomography have to address two crucial issues: (i) the ill-posedness of the reconstruction problem, (ii) the large scale of numerical problems arising from imaging of three dimensional samples. Our contribution is the design and implementation of a reconstruction algorithm that incorporates general Lp regularization (p \ge 1). The originality of this work lies in the application of general Lp constraints to fluorescence tomography, combined with an efficient matrix-free strategy that enables the algorithm to deal with large reconstruction problems at reduced memory and computational costs.

In the experimental part, we specialize the application of the algorithm to the case of sparsity promoting constraints (L1). We validate the adequacy of L1 regularization for the investigation of phenomena that are well described by a sparse model, using data acquired during phantom experiments.

Energy Efficient Sampling for Event Detection in Wireless Sensor Networks by Zainul Charbiwala, Younghun Kim, Sadaf Zahedi, Jonathan Friedman, Mani B. Srivastava. The abstract reads:
Compressive Sensing (CS) is a recently developed mechanism that allows signal acquisition and compression to be performed in one inexpensive step so that the sampling process itself produces a compressed version of the signal. This significantly improves systemic energy efficiency because the average sampling rate can be considerably reduced and explicit compression eliminated. In this paper, we introduce a modification to the canonical CS recovery technique that enables even higher gains for event detection applications. We show a practical implementation of this compressive detection with energy constrained wireless sensor nodes and quantify the gains accrued through simulation and experimentation.


Scalable Tensor Factorizations with Missing Data by Evrim Acar, Tamara G. Kolda, Daniel M. Dunlavy, and Morten Mørup. The abstract reads:

The problem of missing data is ubiquitous in domains such as biomedical signal processing, network traffic analysis, bibliometrics, social network analysis, chemometrics, computer vision, and communication networks, all domains in which data collection is subject to occasional errors. Moreover, these data sets can be quite large and have more than two axes of variation, e.g., sender, receiver, time. Many applications in those domains aim to capture the underlying latent structure of the data; in other words, they need to factorize data sets with missing entries. If we cannot address the problem of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP), and formulate the CP model as a weighted least squares problem that models only the known entries. We develop an algorithm called CP-WOPT (CP Weighted OPTimization) using a first-order optimization approach to solve the weighted least squares problem. Based on extensive numerical experiments, our algorithm is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) application where missing data is frequently encountered due to disconnections of electrodes.

A video of Tammy Kolda on Scalable Tensor Factorizations with Incomplete Data can be found here (or downloaded here, 197 MB; the videos are also here).


Credit: ESA / DLR / FU Berlin (G. Neukum) / Emily Lakdawalla, the Far side of Phobos taken this week.

Thursday, March 18, 2010

CS: A Cheap Structured Illumination Capability for Compressive Imaging

If you recall, the reason one could not build a single pixel camera on the cheap was the cost of the TI DMD and its driver. It looks like this situation is over: you can now buy a TI DLP and a board that drives it for less than $500.

Of interest in the recent DLP Pico projector version 2 are the following new features (my emphasis):

# Direct Connection to a PC *New*

* Connects to a DVI port for out-of-box operation

# Sync Signal Output *New*

* Connects to a sensor (e.g. camera) to enable structured lighting applications

# Selectable DMD Pattern Timing *New*

* Enables 50Hz, 60Hz video frame rates & 120Hz, 180Hz, 240Hz, 480Hz, 1200Hz, 1440Hz, 2400Hz DMD pattern rates

While one can use this hardware for making a single pixel camera, one can also use it as a way to perform calibration. Here are some videos showing what it is like to use it out of the box with the BeagleBoard (the Pico BeagleBoard site is here).

[Disclosure: I have no ties with any of the vendors mentioned above]

CS: ICASSP Day 3

Two blogs covered ICASSP today:
The ICASSP Beagleboard workshop mentioned earlier has a wiki page.
Thanks Gonzalo and Eric !

While we are on the subject of cognitive radios and compressed sensing, George Atia made a presentation on "Group Testing: An Information Theoretic Perspective and a Spectrum Violation Corrective" three days ago.

Wednesday, March 17, 2010

Tuesday, March 16, 2010

CS: The long entry of the week.


I found the comments on the CSI entry, and specifically this comment on FriendFeed, to fully justify the time I spent on it after all.

Ming-Jun Lai and Simon Foucart made a major update to a paper I had already mentioned, entitled Sparse recovery with pre-Gaussian random matrices. The new abstract reads:

For an $m \times N$ underdetermined system of linear equations with independent pre-Gaussian random coefficients satisfying simple moment conditions, it is proved that the $s$-sparse solutions of the system can be found by $\ell_1$-minimization under the optimal condition $m \ge c \, s \, \ln(e N /s)$. The main ingredient of the proof is a variation of a classical Restricted Isometry Property, where the inner norm becomes the $\ell_1$-norm and where the outer norm depends on the probability distributions.
Simon tells me that they "introduce a new RIP that provides an advantage over the traditional one for the pre-Gaussian measurement matrices" (this is to be compared with the approach in Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling by Adamczak, Litvak, Pajor, and Tomczak-Jaegermann).

Thanks Simon.

From one of the authors, we also have: The null space property for sparse recovery from multiple measurement vectors by Ming-Jun Lai and Louis Yang Liu. The abstract reads:

We prove a null space property for the uniqueness of the sparse solution vectors recovered from a minimization in the l_q quasi-norm subject to multiple systems of linear equations, where q ∈ (0, 1]. Furthermore, we show that the null space property for the setting of the sparse solution vectors for multiple linear systems is equivalent to the null space property for the standard minimization in l_q quasi-norm subject to one linear system. This answers the questions raised in [Foucart and Gribonval'09, [15]].

I also went through the web and found the following paper to be presented at ICASSP: Sensitivity to Basis Mismatch in Compressed Sensing by Yuejie Chi, Ali Pezeshki, Louis Scharf and Robert Calderbank. The abstract reads:
Compressed sensing theory suggests that successful inversion of an image of the physical world from its modal parameters can be achieved at measurement dimensions far lower than the image dimension, provided that the image is sparse in an a priori known basis. The assumed basis for sparsity typically corresponds to a gridding of the parameter space, e.g., a DFT grid in spectrum analysis. However, in reality no physical field is sparse in the DFT basis or in an a priori known basis. No matter how finely we grid the parameter space, the sources may not lie in the center of the grid cells and there is always mismatch between the assumed and the actual bases for sparsity. In this paper, we study the sensitivity of compressed sensing (basis pursuit to be exact) to mismatch between the assumed and the actual sparsity bases. Our mathematical analysis and numerical examples show that the performance of basis pursuit degrades considerably in the presence of basis mismatch.
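The effect is easy to reproduce: a tone that sits exactly on the DFT grid is perfectly sparse in the DFT basis, while a tone half a bin off the grid leaks into every coefficient, so any decoder assuming sparsity in that basis starts from a misspecified model. A minimal illustration (mine, not from the paper):

    import numpy as np

    N = 256
    t = np.arange(N)
    on_grid  = np.cos(2 * np.pi * 10.0 * t / N)    # frequency on the DFT grid
    off_grid = np.cos(2 * np.pi * 10.5 * t / N)    # half a bin off the grid

    for x in (on_grid, off_grid):
        c = np.sort(np.abs(np.fft.fft(x)))[::-1]
        print(np.round(c[:6] / c[0], 3))   # on-grid: two dominant bins; off-grid: slow decay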

In Cognitive Radio scenarios, channelization information from the primary network may be available to the spectral monitor. Under this assumption we propose a spectral estimation algorithm from compressed measurements of a multichannel wideband signal. The analysis of the Cramer-Rao Lower Bound (CRLB) for this estimation problem shows the importance of detecting the underlying sparsity pattern of the signal. To this end we describe a Bayesian based iterative algorithm that discovers the set of active signals conforming the band and simultaneously reconstructs the spectrum. This iterative spectral estimator is shown to perform close to a Genie-Aided CRLB that includes full knowledge about the sparsity pattern of the channels.

Bayesian Compressed Sensing Using Generalized Cauchy Priors by Rafael E. Carrillo, Tuncer C. Aysal and Kenneth E. Barner. The abstract reads:
Compressed sensing shows that a sparse or compressible signal can be reconstructed from a few incoherent measurements. Noting that sparse signals can be well modeled by algebraic-tailed impulsive distributions, in this paper, we formulate the sparse recovery problem in a Bayesian framework using algebraic-tailed priors from the generalized Cauchy distribution (GCD) family for the signal coefficients. We develop an iterative reconstruction algorithm from this Bayesian formulation. Simulation results show that the proposed method requires fewer samples than most existing reconstruction methods to recover sparse signals, thereby validating the use of GCD priors for the sparse reconstruction problem.
Robust Sampling and Reconstruction Methods for Sparse Signals in the Presence of Impulsive Noise by Rafael Carrillo, Kenneth Barner and Tuncer Can Aysal. The abstract reads:
Recent results in compressed sensing show that a sparse or compressible signal can be reconstructed from a few incoherent measurements. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with current reconstruction algorithms, fail to recover a close approximation of the signal. In this paper we propose robust methods for sampling and reconstructing sparse signals in the presence of impulsive noise. To solve the problem of impulsive noise embedded in the underlying signal prior to the measurement process, we propose a robust nonlinear measurement operator based on the weighted myriad estimator. In addition, we introduce a geometric optimization problem based on L1 minimization employing a Lorentzian norm constraint on the residual error to recover sparse signals from noisy measurements. Analysis of the proposed methods shows that in impulsive environments, when the noise possesses infinite variance, we have a finite reconstruction error and, furthermore, these methods yield successful reconstruction of the desired signal. Simulations demonstrate that the proposed methods significantly outperform commonly employed compressed sensing sampling and reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments.
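The key ingredient is the Lorentzian (LL2) norm on the residual, ||u||_{LL2,gamma} = sum_i log(1 + u_i^2 / gamma^2), which grows like a quadratic near zero but only logarithmically for large entries, so a few impulsive samples cannot dominate the fit. A quick illustration (mine):

    import numpy as np

    def lorentzian(u, gamma=1.0):
        # Quadratic for small |u|, logarithmic for large |u|.
        return np.log1p((u / gamma) ** 2).sum()

    rng = np.random.default_rng(0)
    clean = 0.1 * rng.standard_normal(100)
    spiky = clean.copy()
    spiky[0] = 100.0                                  # one impulsive outlier
    print(np.sum(clean ** 2), np.sum(spiky ** 2))     # the L2 cost explodes
    print(lorentzian(clean), lorentzian(spiky))       # the Lorentzian cost barely moves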

The compatibility of unsynchronized interleaved uniform sampling with Sigma-Delta analog-to-digital conversion is investigated. Let f be a bandlimited signal that is sampled on a collection of N interleaved grids {kT + T_n}_{k ∈ Z} with offsets {T_n}_{n=1}^N ⊂ [0, T]. If the offsets T_n are chosen independently and uniformly at random from [0, T] and if the sample values of f are quantized with a first order Sigma-Delta algorithm, then with high probability the quantization error |f(t) − f̃(t)| is at most of order N^{−1} log N.
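For readers who have not met it, the first order Sigma-Delta quantizer assumed by this result is just a running error-feedback loop; a minimal sketch (mine):

    import numpy as np

    def sigma_delta_1(samples, step=0.5):
        # First order Sigma-Delta: quantize sample plus carried error, then carry the new error.
        q = np.empty_like(samples)
        u = 0.0                               # internal state: accumulated quantization error
        for i, s in enumerate(samples):
            q[i] = step * np.round((s + u) / step)
            u += s - q[i]                     # u stays bounded, so running sums of q track x
        return q

    t = np.linspace(0, 1, 1000)
    x = 0.4 * np.sin(2 * np.pi * 3 * t)
    q = sigma_delta_1(x)
    print(np.abs(np.cumsum(x - q)).max())     # equals max |u|, bounded by step/2 here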
Meanwhile, at the WAM seminar at Harvard, there is a talk on compressed sensing today:
Tuesday, March 16
3:00pm
Surya Ganguli, UCSF: Fisher information, compressed sensing, and the origins of memory traces in dynamical systems.
When: Tue, March 16, 3pm – 4pm
Description: Critical cognitive phenomena, such as planning and decision making, rely on the ability of the brain to hold information in working memory. Many proposals exist for the maintenance of such memories in persistent activity that arises from stable fixed point attractors in the dynamics of recurrent neural networks. However such fixed points are incapable of storing temporal sequences of recent events. An alternate, and less explored paradigm, is the storage of arbitrary temporal input sequences in the transient responses of a neural network. This paradigm raises a host of important questions. Are there any fundamental limits on the duration of such transient memory traces? How do these limits depend on the size of the network? What patterns of neural connectivity yield good performance on generic working memory tasks? How do these traces degrade in the presence of noise? In a more general context, to what extent can any high dimensional, dissipative dynamical system store past input perturbations in its instantaneous dynamical state? We combine information theory with dynamical systems theory to give precise answers to these questions for the class of all linear, and some nonlinear, neuronal networks. We uncover an important role for a special class of networks, known as nonnormal networks, which possess a hidden feed-forward structure that is crucial for the maintenance of robust memory traces. Furthermore, we employ, and extend, recent results from the field of compressed sensing to show how recurrent neuronal processing can extend the duration of memory traces for sparse temporal sequences. Our results apply to general dynamical systems, and we discuss an application to memory in fluid mechanics.
Of interest is an abstract from this same author:

One of the most exciting advances in signal processing is the field of compressed sensing (CS) [1]. In CS, sparse high-dimensional stimuli are represented by lower dimensional dense measurements, which are linear mixtures of the stimuli. CS shows that by using the computationally tractable L1 norm as a sparse prior, the high-dimensional stimuli (for example, human speech, natural movies, fMRI data) can be fully reconstructed from the compressed measurements. In this work, we have extended CS theory and applied it to reveal new fundamental properties of neural learning and memory in the biologically relevant scenario of sparse coding.
That's all for today, folks.

Credit: NASA, Hirise, An avalanche on Mars via the Badastronomy blog.