
Saturday, June 05, 2010

CS: The long post of the week: YALL1 and more.


Yin Zhang let me know the following:

Dear Igor,

You probably already know that we just released YALL1 v1.0 with source.
The site has been moved to http://yall1.blogs.rice.edu/

Now the code is being managed and maintained mainly by Wotao Yin (Rice) and Junfeng Yang (Nanjing University).

From the site, I note the following pages:

Thanks Yin.

Here is an intro to an IEEE special issue entitled: Applications of Sparse Representation and Compressive Sensing, by Richard Baraniuk, Emmanuel Candes, Michael Elad, and Yi Ma. It starts as:

In the past several years, there have been exciting breakthroughs in the study of high-dimensional sparse signals. A sparse signal is a signal that can be represented as a linear combination of relatively few base elements in a basis or an overcomplete dictionary. Much of the excitement centers around the discovery that under surprisingly broad conditions, a sufficiently sparse linear representation can be correctly and efficiently computed by greedy methods and convex optimization (i.e., the l_1 - l_0 equivalence), even though this problem is extremely difficult (NP-hard) in the general case. Further studies have shown that such high-dimensional sparse signals can be accurately recovered from a drastically smaller number of (even randomly selected) linear measurements, hence the catch phrase compressive sensing. If these results are not surprising enough, more recently, the same analytical and computational tools have seen similarly remarkable successes in advancing the study of recovering high-dimensional low-rank matrices from highly incomplete, corrupted, and noisy measurements.
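To make the l_1 - l_0 equivalence a little more concrete, here is a minimal Python sketch of my own (not from the special issue) that recovers a sparse vector from random Gaussian measurements by solving the basis pursuit linear program with scipy; all dimensions and the sparsity level are illustrative.

```python
# Minimal basis pursuit sketch: min ||x||_1 subject to Ax = y, solved as a
# linear program in (u, v) with x = u - v and u, v >= 0. Illustrative sizes.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                           # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true

c = np.ones(2 * n)                             # objective: sum(u) + sum(v) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With m = 40 random measurements of a 5-sparse vector in R^100, the l_1 solution typically coincides with the sparsest one, which is the equivalence the editors allude to.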

The Rice Repository has now set up an automatic way for authors to submit their new or corrected papers to the current listing. From the page:
Submitting a Resource

To submit a new or corrected paper for this listing, please complete the form at dsp.rice.edu/cs/submit. To submit a resource that isn't a paper, please email e dot dyer at rice dot edu.


NP-hard problems such as l_0 minimization are not believed to be solvable efficiently, even by quantum computers. D-Wave seems to think otherwise.


Stephen Wright just put out two new presentations on his page:
Here is the long list of papers my webcrawler found on the web related in some way or another to compressive sensing. Enjoy!

This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms - Theory and Practice by Zachary Harmany, Roummel Marcia, and Rebecca Willett. The abstract reads:
The observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where (a) the number of unknowns may potentially be larger than the number of observations and (b) f* admits a sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
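For the curious, here is a hypothetical sketch of the kind of iteration the abstract describes: a separable quadratic (gradient) step on the negative Poisson log-likelihood followed by a soft-threshold step for the l_1 penalty with nonnegativity. This is not the authors' SPIRAL-TAP code, and the curvature parameter alpha, the penalty tau, and the stabilizer eps are my assumptions.

```python
# One SPIRAL-flavored iteration for  min_{f >= 0} sum(Af - y*log(Af)) + tau*||f||_1.
# The separable quadratic approximation reduces each step to a gradient move
# followed by a soft-threshold clipped at zero. All parameters are illustrative.
import numpy as np

def spiral_l1_step(f, y, A, tau, alpha, eps=1e-10):
    Af = A @ f
    grad = A.T @ (1.0 - y / (Af + eps))      # gradient of the Poisson neg. log-likelihood
    z = f - grad / alpha                     # gradient step with step size 1/alpha
    return np.maximum(z - tau / alpha, 0.0)  # l_1 prox restricted to f >= 0
```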
On Roummel F. Marcia's webpage one can read:
Opportunities: Positions for graduate and undergraduate students are available in the areas of optimization, linear algebra, and compressed sensing. These positions are in conjunction with the National Science Foundation grant, DMS-0811062: Second-order methods for large-scale optimization in compressed sensing.
http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0811062

This paper considers the spectrum sensing performance of Cognitive Radios (CRs) under noisy signal measurements and the time-domain transmission statistics of the Primary User (PU). When the spectrum is linearly swept in the frequency domain continuously to detect the presence of the PU, the time-domain statistics of the PU play an important role in the detection performance. This is true especially when the PU’s bandwidth is much smaller than the CR’s scanning frequency range. We model the transmission statistics, that is, the temporal characteristics of the PU, as a Poisson arrival process with a random occupancy time. The spectrum sensing performance at the CR node is then theoretically analyzed based on noisy envelope detection together with the time-domain spectral occupancy statistics. The miss detection and false alarm probabilities are derived from the considered spectral occupancy model and the noise model, and we present simulation results to verify our theoretical analysis. We also study the minimum required sensing time for the wideband CR to reliably detect the narrowband PU with a given confidence level, considering its temporal characteristics.
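Since the paper backs its analysis with simulations, here is a toy Monte Carlo sketch, my own construction rather than the authors' model, that estimates miss-detection and false-alarm probabilities for energy detection when the PU alternates between exponential idle periods (rate lam) and random busy periods; every rate, SNR, and threshold below is made up.

```python
# Toy Monte Carlo of energy detection against an on/off PU occupancy model.
# All parameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
lam, mean_hold = 0.2, 2.0        # PU arrival rate, mean busy (occupancy) time
snr, thresh, trials = 2.0, 1.5, 100_000

# Stationary busy probability of an alternating idle/busy renewal process:
# mean idle = 1/lam, mean busy = mean_hold.
p_busy = mean_hold / (1.0 / lam + mean_hold)

miss = fa = busy = idle = 0
for _ in range(trials):
    is_busy = rng.random() < p_busy
    noise = rng.normal(size=2)                             # I/Q noise samples
    signal = np.sqrt(snr) * rng.normal(size=2) if is_busy else 0.0
    energy = np.sum((signal + noise) ** 2)                 # energy/envelope statistic
    if is_busy:
        busy += 1
        miss += energy <= thresh                           # missed detection
    else:
        idle += 1
        fa += energy > thresh                              # false alarm
print("P_miss ~", miss / busy, "  P_fa ~", fa / idle)
```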

We consider unbiased estimation of a sparse nonrandom vector corrupted by additive white Gaussian noise. We show that while there are infinitely many unbiased estimators for this problem, none of them has uniformly minimum variance. Therefore, we focus on locally minimum variance unbiased (LMVU) estimators. We derive simple closed-form lower and upper bounds on the variance of LMVU estimators or, equivalently, on the Barankin bound (BB). Our bounds allow an estimation of the threshold region separating the low-SNR and high-SNR regimes, and they indicate the asymptotic behavior of the BB at high SNR. We also develop numerical lower and upper bounds which are tighter than the closed-form bounds and thus characterize the BB more accurately. Numerical studies compare our characterization of the BB with established biased estimation schemes, and demonstrate that while unbiased estimators perform poorly at low SNR, they may perform better than biased estimators at high SNR. An interesting conclusion of our analysis is that the high-SNR behavior of the BB depends solely on the value of the smallest nonzero component of the sparse vector, and that this type of dependence is also exhibited by the performance of certain practical estimators.

Shrinkage Without Thresholding: L1 Norm Minimization using the Landweber Iteration by Andy Yagle. The abstract reads:
We use the Landweber iteration to compute sparse solutions to the underdetermined linear system of equations y=Ax using iterative reweighted least squares (IRLS). We show that shrinkage, not thresholding, is the key to minimizing the LASSO functional. We also show how the Landweber iteration without thresholding can be used instead of linear programming for basis pursuit, and how to accelerate convergence of the Landweber iteration in this case.
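As a quick illustration of shrinkage paired with the Landweber iteration, here is a minimal sketch of the classical iterative shrinkage scheme for the LASSO functional 0.5*||y - Ax||^2 + lam*||x||_1; the step size and iteration count are my assumptions, and this is not Yagle's code.

```python
# Landweber iteration with soft shrinkage (a.k.a. ISTA) for the LASSO
# functional. Step size and iteration count are illustrative choices.
import numpy as np

def landweber_shrinkage(A, y, lam, n_iter=500):
    t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size <= 1/||A||^2 for stability
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + t * A.T @ (y - A @ x)        # Landweber (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft shrinkage
    return x
```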
The goal is to reconstruct a sparse signal from some, but not all, of its Discrete Fourier Transform (DFT) values. If the signal has K non-zero and real values, a unique solution is determined by any K DFT values, their conjugates, and the DC value, if the DFT order is prime. However, no algorithm is known for this unless the K DFT values are at consecutive frequencies (a total of 2K+1 consecutive values). l_1-norm minimization only works if the frequencies are randomly chosen. We present a new algorithm that reconstructs a K-sparse non-negative real-valued signal from any K DFT values, their conjugates, and the DC value, provided that the DFT order is prime and less than 4K. It does not use the l_1 norm or pursuit.

Polytope Faces Pursuit is a greedy algorithm that performs Basis Pursuit with similar order complexity to Orthogonal Matching Pursuit. The algorithm adds one basis vector at a time and adopts a path-following approach based on the geometry of the polar polytope associated with the dual Linear Program. Its initial implementation uses the method of Cholesky factorization to update the solution vector at each step, which can be computationally expensive for solving large scale problems as it requires the successive storage of large matrices. In this paper, we present a different approach using directional updates to estimate the solution vector at each step. The proposed method uses the gradient descent method, reducing the memory requirements and computational complexity. We demonstrate the application of this Gradient Polytope Faces Pursuit algorithm to a source separation problem.

We propose an ℓ1 criterion for dictionary learning for sparse signal representation. Instead of directly searching for the dictionary vectors, our dictionary learning approach identifies vectors that are orthogonal to the subspaces in which the training data concentrate. We study conditions on the coefficients of training data that guarantee that ideal normal vectors deduced from the dictionary are local optima of the criterion. We illustrate the behavior of the criterion on a 2D example, showing that the local minima correspond to ideal normal vectors when the number of training data is sufficient. We conclude by describing an algorithm that can be used to optimize the criterion in higher dimensions.

Compressed Sensing with Nonlinear Observations by Thomas Blumensath. The abstract reads:
Compressed sensing is a recently developed signal acquisition technique. In contrast to traditional sampling methods, significantly fewer samples are required whenever the signals admit a sparse representation. Crucially, sampling methods can be constructed that allow the reconstruction of sparse signals from a small number of measurements using efficient algorithms. We have recently generalised these ideas in two important ways. We have developed methods and theoretical results that allow much more general constraints to be imposed on the signal and we have also extended the approach to more general Hilbert spaces. In this paper we introduce a further generalisation to compressed sensing and allow for non-linear sampling methods. This is achieved by using a recently introduced generalisation of the Restricted Isometry Property (or the bi-Lipschitz condition) traditionally imposed on the compressed sensing system. We show that, if this more general condition holds for the nonlinear sampling system, then we can reconstruct signals from non-linear compressive measurements.
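To give a flavor of what reconstruction from non-linear compressive measurements could look like, here is a sketch of an iterative-hard-thresholding-style update driven by the Jacobian of the sampling map Phi; this is my guess at the general shape of such an algorithm, not the paper's exact method, and mu, s, and the iteration count are assumptions.

```python
# Sketch of hard-thresholded gradient iterations for a nonlinear sampling
# map Phi, using its Jacobian J(x). All parameters are illustrative.
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def nonlinear_iht(y, Phi, jacobian, n, s, mu=0.5, n_iter=200):
    x = np.zeros(n)
    for _ in range(n_iter):
        r = y - Phi(x)                                # residual of the nonlinear model
        x = hard_threshold(x + mu * jacobian(x).T @ r, s)
    return x
```

When Phi is linear, Phi(x) = A x, the Jacobian is constant and this reduces to standard iterative hard thresholding.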

Fast Sparse Representation with Prototypes by Jia-Bin Huang and Ming-Hsuan Yang. The abstract reads:
Sparse representation has found applications in numerous domains and recent developments have been focused on the convex relaxation of the l_0-norm minimization for sparse coding (i.e., the l_1-norm minimization). Nevertheless, the time and space complexities of these algorithms remain significantly high for large-scale problems. As signals in most problems can be modeled by a small set of prototypes, we propose an algorithm that exploits this property and show that the l_1-norm minimization problem can be reduced to a much smaller problem, thereby gaining significant speed-ups with much lower memory requirements. Experimental results demonstrate that our algorithm is able to achieve double-digit gains in speed with much lower memory requirements than the state-of-the-art algorithms.


Encouraged by the promising application of compressed sensing in signal compression, we investigate its formulation and application in the context of speech coding based on sparse linear prediction. In particular, a compressed sensing method can be devised to compute a sparse approximation of speech in the residual domain when sparse linear prediction is involved. We compare the method of computing a sparse prediction residual with the optimal technique based on an exhaustive search of the possible nonzero locations and the well-known Multi-Pulse Excitation, the first encoding technique to introduce the sparsity concept in speech coding. Experimental results demonstrate the potential of compressed sensing in speech coding techniques, offering high perceptual quality with a very sparse approximated prediction residual.

Here is a poster entitled: Acceleration of IDEAL Water-Fat Imaging using Compressed Sensing by S. D. Sharma, H. H. Hu, and Krishna Nayak. The introduction reads:
IDEAL is an iterative technique for separating water and fat signals on a per-voxel basis [1]. Water-fat imaging plays an important role in many clinical applications, including high-spatial resolution 3D knee imaging to characterize bone marrow and cartilage [2], and 3D whole abdomen imaging to quantify fat in adipose tissue depots and organs [3]. However, the long scan times required increase susceptibility to motion artifacts. Thus, water-fat imaging applications can significantly benefit from data acceleration. In this work, we reformulate the IDEAL algorithm to estimate water and fat signals on a whole-image basis [4], and present an approach to integrate Compressed Sensing (CS) [5] into the accelerated water-fat separation framework [6]. We demonstrate up to 3x acceleration using CS-IDEAL.

The following papers come from the same lab (webpage here):

This paper investigates the potential of the compressed sensing (CS) paradigm for video streaming in Wireless Multimedia Sensor Networks. The objective is to co-design a low complexity video encoder based on compressed sensing and a rate-adaptive streaming protocol for wireless video transmission. The proposed rate control scheme is designed with the objectives of maximizing the received video quality at the receiver and preventing network congestion while maintaining fairness between multiple video transmissions. Video distortion is represented through analytical and empirical models and minimized based on a new cross-layer control algorithm that jointly regulates the video encoding rate and the channel coding rate at the physical layer based on the estimated channel quality. The end-to-end data rate is regulated to avoid congestion while maintaining fairness in the domain of video quality rather than data rate. The proposed scheme is shown to outperform TCP-Friendly Rate Control (TFRC).

On the Performance of Compressive Video Streaming for Wireless Multimedia Sensor Networks by Scott Pudlewski and Tommaso Melodia. The abstract reads:
This paper investigates the potential of the compressed sensing (CS) paradigm for video streaming in Wireless Multimedia Sensor Networks. The objective is to study performance limits and outline key design principles that will be the basis for cross-layer protocol stacks for efficient transport of compressive video streams. Hence, this paper investigates the effect of key video parameters (i.e., quantization, CS samples per frame, and channel encoding rate) on the received video quality of CS images transmitted through a wireless channel. It is shown that, unlike JPEG-encoded images, CS-encoded images exhibit an inherent resiliency to channel errors, caused by the unstructured image representation; this leads to basically zero loss in image quality for random channel bit error rates as high as 10^-4, and low degradation up to 10^-3. Furthermore, it is shown how, unlike traditional wireless imaging systems, forward error correction is not beneficial for wireless transmission of CS images. Instead, an adaptive parity scheme that drops samples in error is proposed and shown to improve image quality. Finally, a low-complexity, adaptive video encoder is proposed that performs low-complexity motion estimation on sensors, thus greatly reducing the amount of data to be transmitted.

Data loss in wireless communications greatly affects the reconstruction quality of a signal. In the case of images, data loss results in a reduction in quality of the received image. Conventionally, channel coding is performed at the encoder to enhance recovery of the signal by adding known redundancy. While channel coding is effective, it can be very computationally expensive. For this reason, a new mechanism of handling data losses in Wireless Multimedia Sensor Networks (WMSN) using Compressed Sensing (CS) is introduced in this paper. This system uses compressed sensing to detect and compensate for data loss within a wireless network. A combination of oversampling and an adaptive parity scheme is used to determine which CS samples contain bit errors, remove these samples, and transmit additional samples to maintain a target image quality. A study was done to test the combined use of adaptive parity and compressive oversampling to transmit and correctly recover image data in a lossy channel to maintain the Quality of Information (QoI) of the resulting images. It is shown that by using the two components, an image can be correctly recovered even in a channel with very high loss rates of 10%. The adaptive parity (AP) portion of the system was also tested on a software defined radio testbed. It is shown that by transmitting images using a CS compression scheme with AP error detection, images can be successfully transmitted and received even in channels with very high bit error rates.
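To illustrate the adaptive parity idea in the simplest possible terms, here is a toy sketch (my own framing, not the authors' protocol) that tags each quantized CS sample with an even-parity bit and simply drops, rather than corrects, samples received in error.

```python
# Toy adaptive-parity sketch: one even-parity bit per quantized CS sample;
# the receiver drops failed samples instead of correcting them.
# Hypothetical word length and bit error rate throughout.
import numpy as np

rng = np.random.default_rng(2)

def add_parity(bits):
    """Append an even-parity bit so each word has an even number of ones."""
    return np.append(bits, bits.sum() % 2)

def check_and_filter(words):
    """Keep only samples whose even-parity check passes."""
    return [w[:-1] for w in words if w.sum() % 2 == 0]

# 8-bit quantized CS samples through a channel with bit error rate 1e-2
ber, n_samples = 1e-2, 1000
words = [add_parity(rng.integers(0, 2, 8)) for _ in range(n_samples)]
received = [w ^ (rng.random(9) < ber).astype(int) for w in words]
kept = check_and_filter(received)
print(f"kept {len(kept)}/{n_samples} samples; errors dropped, not corrected")
```

Note that a single parity bit only catches an odd number of flipped bits per sample; the point of the scheme is that CS reconstruction degrades gracefully when the erroneous samples are simply discarded.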
Credit: ESA/NASA, SOHO

2 comments:

  1. "NP-hard problems such as l_o cannot be solved by quantum computers. D-Wave seems to think otherwise."

    Scott Aaronson has some interesting comments about that:

    http://scottaaronson.com/blog/?p=449

  2. Can you check the link to the paper

    "Shrinkage Without Thresholding: L1 Norm Minimization using the Landweber Iteration" ?

    It points to an empty page.

    Thanks.
