Tuesday, November 20, 2007

Compressed Sensing: Sensor Networks, Learning Compressed Sensing by Uncertain Component Analysis, Sparsity or Positivity?, New CVX build

Two new items in the Rice Repository on Sensor Networks:

Shuchin Aeron, Manqi Zhao, and Venkatesh Saligrama, Sensing capacity of sensor networks: Fundamental tradeoffs of SNR, sparsity, and sensing diversity.

The abstract reads:

A fundamental question in sensor networks is to determine the sensing capacity: the minimum number of sensors necessary to monitor a given region to a desired degree of fidelity based on noisy sensor measurements. In the context of the so-called compressed sensing problem, sensing capacity provides bounds on the maximum number of signal components that can be identified per sensor under noisy measurements. In this paper we show that sensing capacity is a function of SNR, signal sparsity (the inherent complexity/dimensionality of the underlying signal/information space and its frequency of occurrence) and sensing diversity (the number of modalities of operation of each sensor). We derive fundamental tradeoffs between SNR, sparsity, diversity and capacity. We show that the capacity is a monotonic function of SNR and diversity. A surprising result is that as sparsity approaches zero, so does the sensing capacity, irrespective of diversity. This implies, for instance, that reliably monitoring a small number of targets in a given region requires a disproportionately large number of sensors.
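To fix ideas, here is one plausible way to write down the setting this abstract describes; the notation, and the reading of capacity as a dimensions-per-sensor ratio, are mine and not necessarily the paper's:

\[
  y_i = \phi_i^{\mathsf T} x + w_i, \quad i = 1, \dots, n, \qquad x \in \mathbb{R}^d,\ \ \|x\|_0 = k,\ \ w_i \sim \mathcal{N}(0, \sigma^2),
\]
\[
  C \;\approx\; \frac{d}{n}\ \ \text{(signal dimensions identified per sensor)}, \qquad \text{sparsity ratio} \;=\; \frac{k}{d}.
\]

In these terms, the surprising result quoted above says that at fixed SNR the achievable C collapses as k/d goes to zero: a handful of targets spread over a large region forces the number of sensors n to grow much faster than the number of targets k.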


Shuchin Aeron, Manqi Zhao, and Venkatesh Saligrama, On sensing capacity of sensor networks for the class of linear observation, fixed SNR models.
The abstract reads:
In this paper we address the problem of finding the sensing capacity of sensor networks for a class of linear observation models and a fixed SNR regime. Sensing capacity is defined as the maximum number of signal dimensions reliably identified per sensor observation. In this context sparsity of the phenomena is a key feature that determines sensing capacity. Setting aside the SNR of the environment, the effect of sparsity on the number of measurements required for accurate reconstruction of a sparse phenomenon has been widely dealt with under compressed sensing. Nevertheless, the development there was motivated from an algorithmic perspective. In this paper our aim is to derive these bounds in an information theoretic set-up and thus provide algorithm-independent conditions for reliable reconstruction of sparse signals. In this direction we first generalize Fano's inequality and provide lower bounds to the probability of error in reconstruction subject to an arbitrary distortion criterion. Using these lower bounds to the probability of error, we derive upper bounds to sensing capacity and show that for the fixed SNR regime sensing capacity goes down to zero as sparsity goes down to zero. This means that disproportionately more sensors are required to monitor very sparse events. We then derive (achievable) lower bounds to sensing capacity by deriving upper bounds to the probability of error via adaptation to a max-likelihood detection set-up under a given distortion criterion. These lower bounds to sensing capacity exhibit similar behavior, though there is an SNR gap between the upper and lower bounds. Subsequently, we show the effect of correlation in sensing across sensors and across sensing modalities on sensing capacity for various degrees and models of correlation. Our next main contribution is to show the effect of sensing diversity on sensing capacity, an effect that has not been considered before. Sensing diversity is related to the effective coverage of a sensor with respect to the field. In this direction we show the following results: (a) sensing capacity goes down as sensing diversity per sensor goes down; (b) random sampling (coverage) of the field by sensors is better than contiguous location sampling (coverage). In essence, the bounds and results presented in this paper serve as guidelines for designing efficient sensor network architectures.
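To make the fixed-SNR linear observation setting concrete, here is a small numerical sketch (my own toy experiment, not the authors' code): draw a k-sparse signal, take n noisy random linear measurements at a fixed SNR, decode with a plain orthogonal matching pursuit, and watch how many sensors are needed before the support is reliably recovered. All parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def omp(Phi, y, k):
    # plain orthogonal matching pursuit: greedily pick k columns of Phi
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0.0                      # do not pick the same column twice
        support.append(int(np.argmax(corr)))
        cols = Phi[:, support]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        residual = y - cols @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

d, k, snr, trials = 200, 5, 10.0, 50             # ambient dimension, sparsity, fixed SNR
for n in (10, 20, 40, 80):                       # number of sensors / measurements
    ok = 0
    for _ in range(trials):
        x = np.zeros(d)
        x[rng.choice(d, k, replace=False)] = rng.choice([-1.0, 1.0], k)
        Phi = rng.standard_normal((n, d)) / np.sqrt(n)
        signal = Phi @ x
        noise = rng.standard_normal(n)
        noise *= np.linalg.norm(signal) / (np.sqrt(snr) * np.linalg.norm(noise))
        x_hat = omp(Phi, signal + noise, k)
        ok += set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x))
    print(f"n = {n:3d} sensors, d/n = {d/n:5.1f}, support recovery rate = {ok/trials:.2f}")

This is only a decoder-specific illustration of the measurement model; the bounds in the paper are information theoretic and algorithm independent.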

I also found Learning Compressed Sensing by Yair Weiss, Hyun Sung Chang and William Freeman
The abstract reads:

Compressed sensing [7], [6] is a recent set of mathematical results showing that sparse signals can be exactly reconstructed from a small number of linear measurements. Interestingly, for ideal sparse signals with no measurement noise, random measurements allow perfect reconstruction while measurements based on principal component analysis (PCA) or independent component analysis (ICA) do not. At the same time, for other signal and noise distributions, PCA and ICA can significantly outperform random projections in terms of enabling reconstruction from a small number of measurements. In this paper we ask: given a training set typical of the signals we wish to measure, what is the optimal set of linear projections for compressed sensing? We show that the optimal projections are in general neither the principal components nor the independent components of the data, but rather a seemingly novel set of projections that capture what is still uncertain about the signal, given the training set. We also show that the projections onto the learned uncertain components may far outperform random projections. This is particularly true in the case of natural images, where random projections have vanishingly small signal-to-noise ratio as the number of pixels becomes large.
This is an important paper as it extends ICA/PCA to underdetermined systems through what they call Uncertain Component Analysis.
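As a companion to the abstract, here is a toy comparison in the same spirit, though it is emphatically not the authors' Uncertain Component Analysis: learn a Gaussian prior (mean and covariance) from a training set, then reconstruct a fresh signal by linear MMSE from a handful of measurements, once with random projections and once with projections onto the top principal components. The data, dimensions and noise level are all made up for illustration.

import numpy as np

rng = np.random.default_rng(1)

# toy training set: correlated Gaussian "signals" (stand-in for image patches)
d, n_train, m = 64, 2000, 8              # signal dimension, training size, number of measurements
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
C_true = Q @ np.diag(1.0 / np.arange(1, d + 1) ** 1.5) @ Q.T     # decaying spectrum
X_train = rng.multivariate_normal(np.zeros(d), C_true, size=n_train)

# learn a Gaussian prior from the training set
mu = X_train.mean(axis=0)
Sigma = np.cov(X_train, rowvar=False)

def mmse_reconstruct(W, y, noise_var=1e-3):
    # linear MMSE estimate of x from y = W x + noise under the learned Gaussian prior
    S = W @ Sigma @ W.T + noise_var * np.eye(W.shape[0])
    return mu + Sigma @ W.T @ np.linalg.solve(S, y - W @ mu)

# two measurement designs: random projections vs. top principal components
W_rand = rng.standard_normal((m, d)) / np.sqrt(d)
eigval, eigvec = np.linalg.eigh(Sigma)
W_pca = eigvec[:, -m:].T                 # top-m principal directions of the training data

x = rng.multivariate_normal(np.zeros(d), C_true)   # a fresh test signal
for name, W in [("random", W_rand), ("PCA", W_pca)]:
    y = W @ x + np.sqrt(1e-3) * rng.standard_normal(m)
    err = np.linalg.norm(mmse_reconstruct(W, y) - x) / np.linalg.norm(x)
    print(f"{name:6s} projections: relative reconstruction error = {err:.3f}")

The paper's point is that neither of these two classical choices is optimal in general; the learned "uncertain components" depend on what the training set leaves undetermined about the signal.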



Unrelated to the previous subject, Morten Morup shows in this poster the extensions and state-of-the-art techniques in Non-Negative Matrix Factorization (NMF). The poster (entitled Extensions of non negative matrix factorization NMF to Higher Order Data) shows how the regularizing ℓ1 norm provides the ability to do Sparse Coding NMF, i.e. to find the sparsest positive factorization (a bare-bones sketch of this kind of update is included at the end of this item). Ever since the Nature article of Daniel Lee and Sebastian Seung (Learning the parts of objects by non-negative matrix factorization), much has been done to advance the subject of factorization by non-negative matrices.

And so, while the work of Morup is extremely important, it would seem that the positivity constraint can only be fully justified through an argument of sparsity. This is intriguing, especially in light of this article by Alfred Bruckstein, Michael Elad, and Michael Zibulevsky entitled "A Non-Negative and Sparse Enough Solution of an Underdetermined Linear System of Equations is Unique". The abstract reads:

In this paper we consider an underdetermined linear system of equations Ax = b with non-negative entries of A and b, and the solution x being also required to be non-negative. We show that if there exists a sufficiently sparse solution to this problem, it is necessarily unique. Furthermore, we present a greedy algorithm - a variant of the matching pursuit - that is guaranteed to find this sparse solution. We also extend the existing theoretical analysis of the basis pursuit problem, i.e. min ||x||_1 s.t. Ax = b, by studying conditions for perfect recovery of sparse enough solutions. Considering a matrix A with arbitrary column norms, and an arbitrary monotone element-wise concave penalty replacing the ℓ1-norm objective function, we generalize known equivalence results. Beyond the desirable generalization that this result introduces, we show how it is exploited to lead to the above-mentioned uniqueness claim.


In other words, if the solution is sparse enough, then it is unique and one can drop the ℓ1 penalty altogether, as the non-negativity constraint is sufficient. Wow. (Thanks Jort for the link.)
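The uniqueness claim from Bruckstein, Elad and Zibulevsky is easy to probe numerically. Here is a toy sketch that uses scipy's generic non-negative least squares routine as the solver, rather than the authors' matching-pursuit variant, and with all sizes picked arbitrarily: build a random entrywise non-negative A, plant a very sparse non-negative solution, and solve the underdetermined system with nothing but a non-negativity constraint; no ℓ1 term appears anywhere. Whether the planted solution is recovered depends, as the paper says, on it being sparse enough.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)

n, d, k = 30, 100, 3                      # 30 equations, 100 unknowns, 3 nonzeros
A = rng.random((n, d))                    # non-negative entries, as in the paper's setting
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.random(k) + 0.5
b = A @ x_true                            # consistent, noiseless right-hand side

x_hat, _ = nnls(A, b)                     # least squares under x >= 0, no sparsity penalty

print("support recovered:", set(np.flatnonzero(x_hat > 1e-8)) == set(np.flatnonzero(x_true)))
print("max abs error    :", float(np.max(np.abs(x_hat - x_true))))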
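Coming back to the Sparse Coding NMF mentioned in connection with Morup's poster above, here is the bare-bones sketch promised there: a generic multiplicative-update NMF with an ℓ1 penalty on the coefficient matrix H, written purely as an illustration and not as the specific higher-order algorithms of the poster. The penalty weight, sizes and stopping rule are arbitrary.

import numpy as np

rng = np.random.default_rng(3)

def sparse_nmf(V, r, lam=0.5, n_iter=500, eps=1e-9):
    # Multiplicative updates for  min_{W,H >= 0}  ||V - W H||_F^2 + lam * sum(H).
    # Columns of W are renormalized each iteration so the penalty on H cannot be
    # dodged by scaling W up and H down.
    n, m = V.shape
    W = rng.random((n, r))
    H = rng.random((r, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        W /= np.linalg.norm(W, axis=0, keepdims=True) + eps
    return W, H

# toy non-negative data with a planted sparse structure
W0 = rng.random((40, 5)) * (rng.random((40, 5)) < 0.3)
H0 = rng.random((5, 60)) * (rng.random((5, 60)) < 0.2)
V = W0 @ H0 + 0.01 * rng.random((40, 60))

W, H = sparse_nmf(V, r=5)
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
print("fraction of (near-)zero entries in H:", float(np.mean(H < 1e-3)))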

In a different area, Michael Grant, Stephen Boyd, and Yinyu Ye released a new build (Version 1.1, Build 565) for CVX (Matlab Software for Disciplined Convex Programming).
