
Thursday, December 02, 2010

CS: CMP, Depth, EEGs, l_p versus l_1, JPEG + CS, Some matrix results and a seminar.

We have many papers today, but first, Bob Sturm has a new entry on tests he has performed with the CMP reconstruction algorithm: Recovery of Sparse Signals with Cyclic Matching Pursuit and Compressive Measurements.

Many of today's papers are related to matrix completion (they are at the end), but with all the talk about 3D in the past few entries, something equally exciting is getting into the business of learning depth information, as in Learning sparse representations of depth by Ivana Tosic, Bruno A. Olshausen and Benjamin J. Culpepper. The abstract reads:
We propose a method for learning sparse representations of depth (disparity) maps, which is able to cope with noise and unreliable depth measurements. The proposed algorithm relaxes the usual assumption of the stationary noise model in sparse coding and enables learning from data corrupted with spatially varying noise or uncertainty. Different noise statistics at each pixel location are inferred from the data, and the learning rule is adapted with respect to the noise level. The effectiveness of the method is demonstrated by denoising depth maps obtained from laser range scanners and a time-of-flight camera. We then propose a two-layer graphical model for inferring depth from stereo by including a sparsity prior on local depth features learned by the algorithm. Depth inference is defined as a maximum a posteriori estimation problem on this graph. In contrast to smoothness priors that are based on pairwise correlations, sparse priors capture higher-order dependencies in the depth structure. Our results demonstrate that sparse priors reduce the depth estimation error obtained by the state-of-the-art graph cut algorithm.
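If you want to play with the core ingredient, here is a small numpy sketch (mine, not the authors' algorithm) of sparse coding with a per-pixel inverse-variance weight in the data term, which is the basic mechanism for coping with spatially varying noise; the dictionary and all parameters below are made up:

import numpy as np

def weighted_ista(x, D, w, lam=0.1, n_iter=200):
    """Sparse-code x over dictionary D, weighting each pixel by w (inverse variance)."""
    a = np.zeros(D.shape[1])
    # Lipschitz constant of the gradient of 0.5 * ||sqrt(w) * (x - D a)||^2
    L = np.linalg.norm(np.sqrt(w)[:, None] * D, 2) ** 2
    for _ in range(n_iter):
        grad = D.T @ (w * (D @ a - x))          # gradient of the weighted data term
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-thresholding
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
a_true = np.zeros(128)
a_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
sigma = 0.05 + 0.3 * rng.random(64)             # spatially varying noise level
x = D @ a_true + sigma * rng.standard_normal(64)
a_hat = weighted_ista(x, D, w=1.0 / sigma**2, lam=0.05)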

Then we have a paper on a continuing interest of mine: Quantifying the performance of compressive sensing on scalp EEG signals by Amir M. Abdulghani, Alexander J. Casson and Esther Rodriguez-Villegas. The abstract reads:
Compressive sensing is a new data compression paradigm that has shown significant promise in fields such as MRI. However, the practical performance of the theory very much depends on the characteristics of the signal being sensed. As such the utility of the technique cannot be extrapolated from one application to another. Electroencephalography (EEG) is a fundamental tool for the investigation of many neurological disorders and is increasingly also used in many non-medical applications, such as Brain-Computer Interfaces. This paper characterises in detail the practical performance of different implementations of the compressive sensing theory when applied to scalp EEG signals for the first time. The results are of particular interest for wearable EEG communication systems requiring low power, real-time compression of the EEG data.
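Since everything here hinges on how well the signal class fits the theory, here is a self-contained numpy toy (my own synthetic signal, not the paper's EEG data or pipeline) showing the basic measure-then-recover loop for a signal sparse in a DCT basis:

import numpy as np

n, m, k = 256, 64, 5                       # signal length, measurements, sparsity
rng = np.random.default_rng(1)

# Orthonormal DCT-II basis as the sparsifying dictionary Psi.
j = np.arange(n)
Psi = np.cos(np.pi * (j[:, None] + 0.5) * j[None, :] / n)
Psi[:, 0] *= np.sqrt(1.0 / n)
Psi[:, 1:] *= np.sqrt(2.0 / n)

s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = Psi @ s                                # synthetic "EEG-like" signal, sparse in Psi
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                                # compressive measurements

A = Phi @ Psi                              # effective sensing matrix on the coefficients
L = np.linalg.norm(A, 2) ** 2
s_hat, lam = np.zeros(n), 0.01
for _ in range(500):                       # plain ISTA for min 0.5||y - A s||^2 + lam ||s||_1
    s_hat = s_hat - A.T @ (A @ s_hat - y) / L
    s_hat = np.sign(s_hat) * np.maximum(np.abs(s_hat) - lam / L, 0.0)
print("relative error:", np.linalg.norm(Psi @ s_hat - x) / np.linalg.norm(x))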
The next paper has a surprising statement on comparing l_p and l_1 recovery:


On the Performance of Sparse Recovery via L_p-minimization (0 \le p \le 1) by Meng Wang, Weiyu Xu, Ao Tang. The abstract reads:
It is known that a high-dimensional sparse vector x* in R^n can be recovered from low-dimensional measurements y = Ax*, where A is an m x n matrix with m < n. In this paper, we investigate the recovering ability of l_p-minimization (0 \le p \le 1) as p varies, where l_p-minimization returns a vector with the least l_p "norm" among all the vectors x satisfying Ax = y. Besides analyzing the performance of strong recovery, where l_p-minimization needs to recover all the sparse vectors up to a certain sparsity, we also for the first time analyze the performance of "weak" recovery of l_p-minimization (0 \le p < 1), where the aim is to recover all the sparse vectors on one support with a fixed sign pattern. When m/n goes to 1, we provide sharp thresholds of the sparsity ratio that differentiate success from failure via l_p-minimization. For strong recovery, the threshold strictly decreases from 0.5 to 0.239 as p increases from 0 to 1. Surprisingly, for weak recovery, the threshold is 2/3 for all p in [0,1), while the threshold is 1 for l_1-minimization. We also explicitly demonstrate that l_p-minimization (p < 1) can return a denser solution than l_1-minimization. For any m/n < 1, we provide bounds on the sparsity ratio for strong recovery and weak recovery respectively, below which l_p-minimization succeeds with overwhelming probability. Our bound for strong recovery improves on the existing bounds when m/n is large. Regarding the recovery threshold, l_p-minimization has a higher threshold with smaller p for strong recovery; the threshold is the same for all p for sectional recovery; and l_1-minimization can outperform l_p-minimization for weak recovery. These results are in contrast to the traditional wisdom that l_p-minimization has better sparse recovery ability than l_1-minimization since it is closer to l_0-minimization. We provide an intuitive explanation of our findings and use numerical examples to illustrate the theoretical predictions.
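To see the l_p versus l_1 trade-off for yourself, here is a short numpy sketch of the standard iteratively reweighted least squares (IRLS) approach to l_p-minimization (a generic technique, not the authors' analysis; problem sizes and parameters are arbitrary):

import numpy as np

def irls_lp(A, y, p=0.5, n_iter=50, eps=1.0):
    """Approximately solve min ||x||_p^p subject to Ax = y by reweighted least squares."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # start from the min l_2-norm solution
    for _ in range(n_iter):
        w = (x**2 + eps) ** (p / 2.0 - 1.0)    # smoothed l_p weights
        Winv = 1.0 / w
        AW = A * Winv[None, :]                  # A W^{-1}
        x = Winv * (A.T @ np.linalg.solve(AW @ A.T, y))
        eps = max(eps / 10.0, 1e-8)             # anneal the smoothing parameter
    return x

rng = np.random.default_rng(2)
m, n, k = 40, 100, 8
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0
for p in (0.5, 1.0):
    err = np.linalg.norm(irls_lp(A, y, p=p) - x0) / np.linalg.norm(x0)
    print(f"p = {p}: relative error {err:.2e}")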
Image representation by compressive sensing for visual sensor networks by Bing Han, Feng Wu, Dapeng Wu. The abstract reads:
This paper addresses the image representation problem in visual sensor networks. We propose a new image representation method for visual sensor networks based on compressive sensing (CS). CS is a new sampling method for sparse signals, which is able to compress the input data in the sampling process. Combining both signal sampling and data compression, CS is more capable of image representation for reducing the computational complexity of the image/video encoder in visual sensor networks, where computation resources are extremely limited. Since CS is more efficient for sparse signals, in our scheme the input image is first decomposed into two components, i.e., dense and sparse components; the dense component is then encoded by the traditional approach (JPEG or JPEG 2000) while the sparse component is encoded by a CS technique. In order to improve the rate-distortion performance, we leverage the strong correlation between the dense and sparse components by using a piecewise autoregressive model to construct a prediction of the sparse component from the corresponding dense component. Given the measurements and the prediction of the sparse component as an initial guess, we use projection onto convex sets (POCS) to reconstruct the sparse component. Our method considerably reduces the number of random measurements needed for CS reconstruction and the decoding computational complexity, compared to existing CS methods. In addition, our experimental results show that our method may achieve up to a 2 dB gain in PSNR over existing CS-based schemes, for the same number of measurements.
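The POCS step is easy to mimic in a generic setting. Here is an illustrative numpy sketch (not the paper's exact scheme, which operates on the sparse component of an image with a learned prediction): alternating projections between measurement consistency and approximate sparsity, starting from an initial guess:

import numpy as np

def pocs_recover(A, y, x_pred, K, n_iter=100):
    """Alternate projections onto {x : Ax = y} and (approximately) onto K-sparse vectors."""
    Apinv = np.linalg.pinv(A)              # for projecting onto the affine set {x : Ax = y}
    x = x_pred.copy()
    for _ in range(n_iter):
        x = x - Apinv @ (A @ x - y)        # projection onto the measurement set
        idx = np.argsort(np.abs(x))[:-K]   # indices of all but the K largest entries
        x[idx] = 0.0                       # keep-K-largest "projection"
    return x - Apinv @ (A @ x - y)         # end consistent with the measurements

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 200))
x0 = np.zeros(200)
x0[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
x_hat = pocs_recover(A, A @ x0, x_pred=np.zeros(200), K=10)  # zero prediction here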
Detection of sparse variable functions by Ghislaine Gayraud, Yuri Ingster. The abstract reads:
We consider the problem of detecting a smooth signal function of a large number of variables in the white noise model. We assume that the structure of the signal function is additive and sparse. In order to detect the signal, we wish to test the null hypothesis characterized by the absence of signal against a composite nonparametric alternative composed of smooth additive sparse signal functions. The testing problem is solved according to the minimax approach. We prove that the results for the detection of sparse high-dimensional vectors can be extended to the problem under consideration.

The Minimum-Rank Gram Matrix Completion via Fixed Point Continuation Method by Yue Ma, Lihong Zhi. The abstract reads:
In this paper we present a modified FPC-BB algorithm for solving the minimum-rank Gram matrix completion problem, i.e., computing a representation of a positive semidefinite polynomial as a sum of a minimum number of squares of polynomials in $Q[x_1,...,x_s]$. We prove the convergence of the algorithm under some conditions. Numerical results show that our algorithm is efficient and robust.
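For those who have not played with these solvers, here is a bare-bones numpy sketch of the generic fixed-point continuation iteration with singular value shrinkage that this line of work builds on (not the authors' Gram-matrix-specific FPC-BB variant; the test matrix and parameters are mine):

import numpy as np

def svd_shrink(X, t):
    """Soft-threshold the singular values of X by t."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def fpc_complete(M, mask, mu=0.01, tau=1.0, n_iter=300):
    """Fixed-point iteration X <- shrink(X - tau * P_Omega(X - M), tau * mu)."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        G = mask * (X - M)                 # gradient of 0.5 ||P_Omega(X - M)||_F^2
        X = svd_shrink(X - tau * G, tau * mu)
    return X

rng = np.random.default_rng(3)
B = rng.standard_normal((30, 2))
M0 = B @ B.T                               # rank-2 PSD "Gram" matrix
mask = (rng.random(M0.shape) < 0.6).astype(float)
X_hat = fpc_complete(mask * M0, mask)
print("relative error:", np.linalg.norm(X_hat - M0) / np.linalg.norm(M0))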

Next is an abstract on recovery thresholds for nuclear norm minimization:
Nuclear norm minimization (NNM) has recently gained significant attention for its use in rank minimization problems. Similar to compressed sensing, using null space characterizations, recovery thresholds for NNM have been studied in \cite{arxiv,Recht_Xu_Hassibi}. However, simulations show that the thresholds are far from optimal, especially in the low rank region. In this paper we apply the recent analysis of Stojnic for compressed sensing \cite{mihailo} to the null space conditions of NNM. The resulting thresholds are significantly better and, in particular, our weak threshold appears to match the simulation results. Further, our curves suggest that for any rank growing linearly with the matrix size $n$ we need only three times oversampling (relative to the model complexity) for weak recovery. Similar to \cite{arxiv}, we analyze the conditions for weak, sectional and strong thresholds. Additionally, a separate analysis is given for the special case of positive semidefinite matrices. We conclude by discussing simulation results and future research directions.

Nuclear norm penalization and optimal rates for noisy low rank matrix completion by Vladimir Koltchinskii, Alexandre B. Tsybakov, Karim Lounici. The abstract reads:
This paper deals with the trace regression model where $n$ entries or linear combinations of entries of an unknown $m_1\times m_2$ matrix $A_0$ corrupted by noise are observed. We propose a new nuclear norm penalized estimator of $A_0$ and establish a general sharp oracle inequality for this estimator for arbitrary values of $n,m_1,m_2$ under the condition of isometry in expectation. Then this method is applied to the matrix completion problem. In this case, the estimator admits a simple explicit form and we prove that it satisfies oracle inequalities with faster rates of convergence than in previous works. They are valid, in particular, in the high-dimensional setting $m_1m_2\gg n$. We show that the obtained rates are optimal up to logarithmic factors in a minimax sense and also derive, for any fixed matrix $A_0$, a non-minimax lower bound on the rate of convergence of our estimator, which coincides with the upper bound up to a constant factor. Finally, we show that our procedure provides an exact recovery of the rank of $A_0$ with probability close to 1. We also discuss the statistical learning setting where there is no underlying model determined by $A_0$ and the aim is to find the best trace regression model approximating the data.
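My reading of that "simple explicit form", sketched in numpy under a uniform-sampling assumption (a paraphrase of the idea, not the paper's exact estimator or its choice of threshold):

import numpy as np

def nuclear_norm_estimator(Y, mask, lam):
    """Rescale the zero-filled observations and soft-threshold the singular values."""
    X = (Y.size / mask.sum()) * (mask * Y)     # unbiased rescaling of observed entries
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt # soft-threshold the spectrum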


Finally, there was a seminar a few days ago on compressive sensing. The reason I am mentioning it is the speaker and his company:

Compressed Sensing: Overview, and Applications in Imaging
Abdorreza Heidari
T-Ray Science Inc., Waterloo, Ontario

Compressed Sensing or Compressive Sampling (CS) uses a fairly small number of linear projections of a multi-dimensional signal (such as an image) to reconstruct the signal. It is an efficient sampling scheme that utilizes sparsity and improves the acquisition process by not sampling the redundant data that conventional sensing methods acquire. The CS approach has been introduced only recently, yet it is revolutionizing many fields such as imaging and sensing, data acquisition, and compression and coding. Currently there are several active fields in imaging, including MRI, low-light and sensitive cameras, and single-pixel cameras, which utilize the tools that mathematicians have developed under the CS umbrella.

A general overview of the compressed sensing method is presented. Conventionally, the data acquisition process is complex, costly, and time-consuming, which is not attractive for practical applications. In CS, however, the acquisition process is fairly simple, and most of the processing is done at the data recovery stage. In other words, conventional sensing methods usually pair smart encoders with dumb decoders, while CS pairs dumb encoders with smart decoders. CS acquisition and recovery processes are numerically stable. Furthermore, the same software or hardware implementation can be used for any compressible signal class (universality).

We study the single-pixel camera concept as an example of a CS-based imaging system. A single-pixel camera uses a set of random masks to acquire the projection measurements. To implement a terahertz camera, we propose a novel method for a compact mask design, and demonstrate the feasibility of the method and of the image recovery.
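For the curious, here is what the single-pixel measurement model looks like in a numpy toy (a generic sketch, not T-Ray Science's terahertz design; the +/-1 masks stand in for pairs of complementary 0/1 masks, and the scene is a made-up sparse image):

import numpy as np

rng = np.random.default_rng(4)
p = 16
n = p * p                                      # tiny p-by-p scene
img = np.zeros((p, p))
img[4:8, 5:11] = 1.0                           # sparse synthetic scene
x = img.ravel()

m = 120                                        # m << n measurements
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # one random mask per row
y = A @ x                                      # one photodetector reading per mask

L = np.linalg.norm(A, 2) ** 2                  # for the ISTA step size
x_hat, lam = np.zeros(n), 1e-3
for _ in range(1000):                          # l_1 recovery of the pixel-sparse scene
    x_hat = x_hat - A.T @ (A @ x_hat - y) / L
    x_hat = np.sign(x_hat) * np.maximum(np.abs(x_hat) - lam / L, 0.0)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))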
