Today, we have a pretty rich entry, enjoy: Regime Change: Bit-Depth versus Measurement-Rate in Compressive Sensing by Jason N. Laska and Richard G. Baraniuk. The abstract reads:

The recently introduced compressive sensing (CS) framework enables digital signal acquisition systems to take advantage of signal structures beyond bandlimitedness. Indeed, the number of CS measurements required for stable reconstruction is closer to the order of the signal complexity than the Nyquist rate. To date, the CS theory has focused on real-valued measurements, but in practice, measurements are mapped to bits from a finite alphabet. Moreover, in many potential applications the total number of measurement bits is constrained, which suggests a tradeoff between the number of measurements and the number of bits per measurement. We study this situation in this paper and show that there exist two distinct regimes of operation that correspond to high/low signal-to-noise ratio (SNR). In the measurement compression (MC) regime, a high SNR favors acquiring fewer measurements with more bits per measurement; in the quantization compression (QC) regime, a low SNR favors acquiring more measurements with fewer bits per measurement. A surprise from our analysis and experiments is that in many practical applications it is better to operate in the QC regime, even acquiring as few as 1 bit per measurement.
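To get a feel for the tradeoff, here is a minimal numpy sketch: with a fixed budget of B total bits, spending b bits per measurement leaves m = B/b measurements. The uniform scalar quantizer and Gaussian measurement matrix below are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, total_bits = 256, 512                 # signal length, fixed bit budget B

def quantize(y, b):
    """Uniform scalar quantizer with 2**b levels spanning the range of y."""
    lo, hi = y.min(), y.max()
    levels = 2 ** b
    q = np.round((y - lo) / (hi - lo) * (levels - 1))
    return q / (levels - 1) * (hi - lo) + lo

# a k-sparse test signal
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)

for b in (1, 4, 8):                      # bits per measurement
    m = total_bits // b                  # measurements affordable under the budget
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y_q = quantize(A @ x, b)             # what the decoder actually sees
    print(f"b={b}: m={m} quantized measurements")
```

Sweeping b like this is exactly the MC-versus-QC question: coarse measurements in quantity (b small, m large) against precise measurements in scarcity (b large, m small).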

Simple and Practical Algorithm for Sparse Fourier Transform by Haitham Hassanieh, Piotr Indyk, Dina Katabi, Eric Price. The abstract reads:

We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x. The problem is of key interest in several areas, including signal processing, audio/image/video compression, and learning theory. We propose a new algorithm for this problem. The algorithm leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters. Unlike the typical approach to this problem, our algorithm is not iterative. That is, instead of estimating “large” coefficients, subtracting them and recursing on the remainder, it identifies and estimates the k largest coefficients in “one shot”, in a manner akin to sketching/streaming algorithms. The resulting algorithm is structurally simpler than its predecessors. As a consequence, we are able to extend considerably the range of sparsity, k, for which the algorithm is faster than FFT, both in theory and practice.
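Their algorithm is sublinear and filter-based; as a point of reference, here is the exact (FFT-based) computation it approximates, namely the k largest-magnitude DFT coefficients. This is a baseline sketch, not the authors' algorithm.

```python
import numpy as np

def top_k_fourier(x, k):
    """Exact baseline: indices and values of the k largest-magnitude DFT coefficients."""
    X = np.fft.fft(x)
    idx = np.argsort(np.abs(X))[-k:][::-1]   # k largest, descending
    return idx, X[idx]

n = 1024
t = np.arange(n)
# two real tones; each contributes a conjugate pair of large DFT bins
x = 2 * np.cos(2 * np.pi * 50 * t / n) + np.cos(2 * np.pi * 200 * t / n)
idx, vals = top_k_fourier(x, 4)
print(sorted(int(i) for i in idx))           # → [50, 200, 824, 974]
```

The point of the paper is to find those four bins without ever computing the full length-n FFT.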

Dina tells me that they have plans to release the code. Stay tuned.

Random Access Compressed Sensing for Energy-Efficient Underwater Sensor Networks by Fatemeh Fazel, Maryam Fazel and Milica Stojanovic. The abstract reads:

Inspired by the theory of compressed sensing and employing random channel access, we propose a distributed energy-efficient sensor network scheme denoted by Random Access Compressed Sensing (RACS). The proposed scheme is suitable for long-term deployment of large underwater networks, in which saving energy and bandwidth is of crucial importance. During each frame, a randomly chosen subset of nodes participate in the sensing process, then share the channel using random access. Due to the nature of random access, packets may collide at the fusion center. To account for the packet loss that occurs due to collisions, the network design employs the concept of sufficient sensing probability. With this probability, sufficiently many data packets – as required for field reconstruction based on compressed sensing – are to be received. The RACS scheme prolongs network lifetime while employing a simple and distributed scheme which eliminates the need for scheduling.
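The "sufficient sensing probability" idea can be illustrated with a simple binomial model (an illustrative assumption of independent packet losses, not the paper's exact collision analysis): pick how many packets to send so that the count required for CS reconstruction arrives with high probability.

```python
from math import comb

def prob_enough(n_sent, p_success, n_needed):
    """P[at least n_needed of n_sent packets arrive], losses assumed independent."""
    return sum(comb(n_sent, k) * p_success ** k * (1 - p_success) ** (n_sent - k)
               for k in range(n_needed, n_sent + 1))

# send extra packets until the CS-required count arrives with probability >= 0.99
n_needed, p_success = 25, 0.8
n_sent = n_needed
while prob_enough(n_sent, p_success, n_needed) < 0.99:
    n_sent += 1
print(n_sent, "packets sent to receive at least", n_needed, "w.p. 0.99")
```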

Bayesian Compressed Sensing in Ultrasound Imaging by Céline Quinsac, Nicolas Dobigeon, Adrian Basarab, Denis Kouamé and Jean-Yves Tourneret. The abstract reads:

Following our previous study on compressed sensing for ultrasound imaging, this paper proposes to exploit the image sparsity in the frequency domain within a Bayesian approach. A Bernoulli-Gaussian prior is assigned to the Fourier transform of the ultrasound image in order to enforce sparsity and to reconstruct the image via Bayesian compressed sensing. In addition, the Bayesian approach allows the image sparsity level in the spectral domain to be estimated, a significant parameter in the ℓ1 constrained minimization problem related to compressed sensing. Results obtained with a simulated ultrasound image and an in vivo image of a human thyroid gland show a reconstruction performance similar to a classical compressed sensing algorithm using half of the spatial samples, while estimating the sparsity level during reconstruction.
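For intuition, drawing from a Bernoulli-Gaussian prior is a two-stage process: a Bernoulli draw selects the active coefficients, then Gaussian amplitudes fill the support. A minimal sketch with illustrative parameter values (the paper estimates the sparsity level rather than fixing it):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, sigma = 128, 0.1, 1.0        # length, assumed sparsity level, assumed std
support = rng.random(n) < p        # Bernoulli part: which Fourier coefficients are active
theta = np.where(support, rng.normal(0.0, sigma, n), 0.0)  # Gaussian amplitudes on the support
print(int(support.sum()), "active coefficients out of", n)
```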

Hyperspectral Image Compressed Sensing Via Low-Rank and Joint-Sparse Matrix Recovery by Mohammad Golbabaee and Pierre Vandergheynst. The abstract reads:

A novel approach has been developed to reconstruct Hyperspectral images from a very small number of noisy compressive measurements. Our reconstruction approach is based on a convex minimization which penalizes both the nuclear norm and the ℓ2,1 mixed-norm of the data matrix. Thus, the solution tends to have a simultaneously low-rank and joint-sparse structure. We explain how these two assumptions fit the Hyperspectral data, and through several simulations we show that our proposed reconstruction scheme significantly enhances the state-of-the-art tradeoffs between the reconstruction error and the required number of CS measurements.
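The two penalties combined in that convex program are easy to state on their own. A small numpy sketch of just the norms (not the authors' solver), on a toy matrix that is simultaneously low-rank and row-sparse:

```python
import numpy as np

def nuclear_norm(X):
    """Sum of singular values: small for low-rank X."""
    return np.linalg.svd(X, compute_uv=False).sum()

def l21_norm(X):
    """Sum of Euclidean norms of the rows: small for jointly (row-)sparse X."""
    return np.linalg.norm(X, axis=1).sum()

# toy hyperspectral-like matrix: rank 1, with only 3 nonzero rows
X = np.zeros((64, 16))
X[[3, 10, 40], :] = np.outer([1.0, -2.0, 0.5], np.ones(16))
print(nuclear_norm(X), l21_norm(X))
```

Penalizing a weighted sum of these two quantities steers the recovered data matrix toward exactly this kind of structure.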

Compressive and Noncompressive Power Spectral Density Estimation from Periodic Nonuniform Samples by Michael A. Lexa, Mike E. Davies, John S. Thompson. The abstract reads:

This paper presents a novel power spectral density estimation technique for bandlimited, wide-sense stationary signals from sub-Nyquist sampled data. The technique employs multi-coset sampling and applies to spectrally sparse and nonsparse power spectra alike. For sparse density functions, we apply compressed sensing theory and the resulting compressive estimates exhibit better tradeoffs among the estimator's resolution, system complexity, and average sampling rate compared to their noncompressive counterparts. Both compressive and noncompressive estimates, however, can be computed at arbitrarily low sampling rates. The estimator does not require signal reconstruction and can be directly obtained from solving either a least squares or a nonnegative least squares problem. The estimates are piecewise constant approximations whose resolutions (width of the piecewise constant segments) are controlled by the periodicity of the multi-coset sampling. The estimates are also statistically consistent. This method is widely applicable, but it is targeted to situations where one wants to sense a wideband spectrum at low sampling rates.
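As described, the estimator's final step reduces to a (nonnegative) least-squares problem. A toy scipy sketch with an assumed linear map A from piecewise-constant PSD segments to correlation measurements (the actual map in the paper is induced by the multi-coset sampling pattern):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.random((30, 8))            # assumed map: 8 PSD segments -> 30 correlation samples
p_true = np.zeros(8)
p_true[[1, 5]] = [2.0, 0.7]        # a sparse, nonnegative "power spectrum"
r = A @ p_true                     # noiseless correlation measurements
p_hat, _ = nnls(A, r)              # nonnegative least-squares estimate
print(np.round(p_hat, 3))
```

Note that no signal reconstruction happens anywhere; the PSD estimate comes directly out of the NNLS solve, which is the structural point of the abstract.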

Blind Source Separation with Compressively Sensed Linear Mixtures by Martin Kleinsteuber, Hao Shen. The abstract reads:

This work studies the problem of simultaneously separating and reconstructing signals from compressively sensed linear mixtures. We assume that all source signals share a common sparse representation basis. The approach combines classical Compressive Sensing (CS) theory with a linear mixing model. It allows the mixtures to be sampled independently of each other. If samples are acquired in the time domain, this means that the sensors need not be synchronized. Since Blind Source Separation (BSS) from a linear mixture is only possible up to permutation and scaling, factoring out these ambiguities leads to a minimization problem on the so-called oblique manifold. We develop a geometric conjugate subgradient method that scales to large systems for solving the problem. Numerical results demonstrate the promising performance of the proposed algorithm compared to several state of the art methods.

A New Multiple Description Image Coding Scheme Based on Compressive Sensing by Danhua Liu, Dahua Gao, Guangming Shi. The abstract reads:

This paper proposes a new two-description image coding technique based on Compressive Sensing (CS). In the new multiple description coding scheme, the source image is split into two sub-images with a quincunx downsampling operation, and two descriptions are then generated by CS measurement. The decoding is accomplished by solving a convex ℓ1 optimization problem. The decoding at each side decoder is done by an interpolation process that exploits inter-description correlation. The new method still makes use of an attractive property of CS: the recovery quality depends only on the number of measurements received, not on which particular measurements are received correctly. Experimental results demonstrate that the proposed image multiple description coding scheme has the advantage of high robustness to packet loss.
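The property invoked here — that recovery depends only on how many measurements arrive, not which — can be sketched with a plain ℓ1 decoder (basis pursuit cast as a linear program; an illustrative sketch, not the paper's coder):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 s.t. A x = y, via the standard split x = u - v with u, v >= 0."""
    m, n = A.shape
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(2)
n, m, k = 60, 40, 3
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = A @ x
keep = rng.choice(m, 24, replace=False)   # a random subset survives "packet loss"
x_hat = basis_pursuit(A[keep], y[keep])   # decode from whichever measurements arrived
print(np.max(np.abs(x_hat - x)))          # small: recovery from any 24 of the 40
```

Any sufficiently large subset of rows of a Gaussian A works equally well, which is what makes CS measurements natural descriptions under unreliable transport.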

Dual Branch Compressive Sensing for Analog Signal by Hua Ai, Wenbin Guo, Yang Lu, Xing Wang and Wenbo Wang. The abstract reads:

The framework of the Analog-to-Information Converter (AIC) enables the feasibility of applying Compressive Sensing (CS) theory in practice. Based on the AIC structure, sparse signals are compressed at a sub-Nyquist sampling rate, which dramatically reduces the hardware pressure on the A/D chip. However, because it does not fully exploit the inner correlation of the complex frequency coefficients, the AIC structure does not work well in non-ideal environments or under some strict conditions. In this paper, we propose a Dual Branch Compressive Sensing (DBCS) framework in light of the symmetry of the Discrete Fourier Transform (DFT). By recovering the real and imaginary parts of the complex-valued frequency respectively, we find a way to remove redundancy in reconstructing a complex frequency. Numerical experiments show that the DBCS structure remarkably improves noise robustness and compressibility.

A compressed sensing approach for partial differential equations with random input data by Lionel Mathelin and Kyle A. Gallivan. The abstract reads:

In this paper, a novel approach for quantifying the parametric uncertainty associated with a stochastic problem output is presented. As with Monte-Carlo and stochastic collocation methods, only point-wise evaluations of the stochastic output response surface are required, allowing the use of legacy deterministic codes and precluding the need for any dedicated stochastic code to solve the uncertain problem of interest. The new approach differs from these standard methods in that it is based on ideas directly linked to the recently developed compressed sensing theory. The technique allows the retrieval of the modes that contribute most significantly to the approximation of the solution using a minimal amount of information. The generation of this information, via many solver calls, is almost always the bottleneck of an uncertainty quantification procedure. If the stochastic model output has a reasonably compressible representation in the retained approximation basis, the proposed method makes the best use of the available information and retrieves the dominant modes. Uncertainty quantification of the solution of both a 2-D and 8-D stochastic Shallow Water problem is used to demonstrate the significant performance improvement of the new method, requiring up to several orders of magnitude fewer solver calls than the usual sparse grid-based Polynomial Chaos (Smolyak scheme) to achieve comparable approximation accuracy.

Energy Efficiency and Reliability Considerations of a Compressive Sensing Technique in Wireless Visual Sensor Networks by Yinhao Ding, Cheng-Chew Lim. The abstract reads:

This chapter focuses on the energy efficiency and reliability issues when applying the novel compressive sensing technique in wireless visual sensor networks. An explanation is given for why compressive sensing is useful for visual sensor networks. The relationships between sparsity control and compression ratio, the effect of block-based sampling on reconstruction quality, complexity considerations of the reconstruction process for real-time applications, and compensation for missing packets in network flows are discussed. We analyse the effectiveness of using the 2-dimensional Haar wavelet transform for sparsity control, the difference between compressive sampling in the spatial and frequency domains, and the computation of the primal-dual optimisation method and the log barrier algorithm for reconstruction. The effectiveness of the approach on recovered image quality is evaluated using Euclidean distance and variance analysis.

One of the main issues with dealing directly with compressed measurements of moving images is non-differentiability. It looks like the following paper has gotten around that issue: Go with the flow: Optical flow-based transport operators for image manifolds, by Aswin C. Sankaranarayanan, Chinmay Hegde, Sriram Nagaraj and Richard G. Baraniuk. The abstract reads:

Image articulation manifolds (IAMs) play a central conceptual role in a host of computer vision and image understanding problems. The core premise is that we can view a collection of images, each of which is indexed by a small number of degrees of freedom (3D camera pose, motion/deformation, etc.), as a low-dimensional nonlinear manifold. In order to perform parameter estimation and navigation on an IAM, we require a transport operator that traverses the manifold from image to image. The two current approaches to manifold transport suffer from major shortcomings that have limited the practical impact of manifold methods. First, algebraic methods require that the IAM possess an unrealistic algebraic structure. Second, locally linear methods based on a tangent plane approximation cannot cope with the non-differentiability of IAMs containing images with sharp edges. In this paper, we demonstrate that the optical flow between pairs of images on an IAM is a valid transport operator with a number of attractive properties. In particular, we establish that the optical flow forms a low-dimensional smooth manifold. Several experiments involving novel-view synthesis, geometric clustering, and manifold charting validate that the optical flow manifold approach both offers performance significantly superior to current approaches and is practical for real-world applications.

Rich has more on his blog.

Here is an abstract that caught my interest: Stochastic PDEs, Sparse Approximations, and Compressive Sampling: An Enabling Computational Framework for Uncertainty Assimilation and Propagation in Complex PDE Systems: Sparse and Low-rank Techniques by Alireza Doostan. The abstract reads:

Uncertainty quantification (UQ) is an inevitable part of any predictive modeling practice. Intrinsic variabilities and lack of knowledge about system parameters or governing physical models often considerably affect quantities of interest and decision-making processes. Efficient representation and propagation of such uncertainties through complex PDE systems are subjects of growing interest, especially for situations where a large number of uncertain sources are present. One major difficulty in UQ of such systems is the development of non-intrusive approaches in which deterministic codes are used in a black-box fashion and, at the same time, solution structures are exploited to reduce the number of deterministic runs.

In this presentation, we first discuss some sparse sampling/collocation techniques for the solution of PDEs with high-dimensional random inputs. We then introduce recent work on extensions of compressive sampling techniques for the solution of stochastic PDEs. We show that the sampling of the solution is non-adapted and can be done with any legacy code for the deterministic problem as a black box. The method converges in probability (with probabilistic error bounds) as a consequence of the sparsity of the solution and a concentration of measure phenomenon on the empirical correlation between samples. We demonstrate that the method is well suited for PDEs with high-dimensional random inputs.

The thesis of Sadiq Jafar Sadiq is entitled Application of Compressive Sensing and Belief Propagation for Channel Occupancy Detection in Cognitive Radio Networks. The abstract reads:

Wide-band spectrum sensing is an approach for finding spectrum holes within a wideband signal with less complexity/delay than the conventional approaches. In this thesis, we propose four different algorithms for detecting the holes in a wide-band radio spectrum and finding the sparsity level of compressive signals. The first algorithm estimates the spectrum in an efficient manner and uses this estimation to find the holes. This approach adds a new dimension to the scenario through ignoring specific portions of the time samples. The second algorithm detects the spectrum holes by reconstructing channel energies instead of reconstructing the spectrum itself. In this method, the signal is fed into a number of filters, fewer than the number of available channels. The energies of the filter outputs are then used as the compressed measurements to reconstruct the signal energy in each channel. The third algorithm employs two information-theoretic algorithms (MDL and PDL) to find the sparsity level of a compressive signal, and the last algorithm employs belief propagation for detecting the sparsity level. The performance of these algorithms is investigated through simulations.

And finally, we have a call for papers:

Compressed sensing (CS) techniques offer a framework for the detection and allocation of sparse signals with a reduced number of samples. Today, modern radar systems operate with high bandwidths - demanding high sample rates according to the Shannon-Nyquist theorem - and a huge number of single elements for phased array antennas. Often only a small number of target parameters is the final output, raising the question of whether CS could be a good means to reduce the data size, complexity, weight, power consumption and costs of radar systems. The number of publications addressing the application of CS to radar is still limited, leaving open a number of questions.


CoSeRa 2012
1st International Workshop on Compressed Sensing Applied to Radar
14.-16. May 2012, Bonn, Germany

Call for Papers

Radar/SAR awaits Compressed Sensing

Scope

The scope of the proposed International Workshop is to bring together experts in Compressed Sensing to explore the state of the art in the development of such techniques in different nations and for different applications, and to bring out their advantages or possible drawbacks compared to classical solutions. The workshop program will include invited speeches from distinguished experts as well as contributed talks.

Keynote Speakers

Thomas Strohmer, Professor, Department of Mathematics, University of California, Davis, USA. A world-leading scientist in the field of Compressed Sensing and its applications to signal processing problems. http://www.math.ucdavis.edu/~strohmer/

Gilda Schirinzi, Professor, Engineering Faculty, LIT Laboratory, University of Cassino, Italy. A leading scientist in the field of Synthetic Aperture Radar and Compressive Sensing. http://www2.lit.ing.unicas.it/People_GildaSchirinzi.htm

Key aspects

Contributions are expected on, but not limited to:

- CS for pulse compression

- CS for synthetic aperture radar (SAR)

- CS for SAR tomography

- CS for active and passive airspace surveillance

- CS for moving target detection

- CS for radar clutter suppression

- CS for MIMO architectures

- Hardware aspects of CS

- CS for radiometry and sonar

- Mathematical aspects of CS in radar

- CS in statistical signal processing

Participants

The workshop will provide a forum for experts, research engineers, and scientists working in the area of Compressive Sensing and Radar/SAR. They will gain insight into the current research trends, innovative sensor technology, associated signal processing, and the subsequent data processing and transmission steps.

Contributions

Authors are encouraged to submit a one-page abstract in English. All submissions have to be made via e-mail. All submitted papers will be distributed on a DVD during the workshop. The authors of presented papers are invited to contribute to a Special Issue on Compressed Sensing Applied to Radar in the IEEE Journal of Selected Topics in Signal Processing of the IEEE Signal Processing Society. Authors will be kept informed via e-mail once they have registered their papers. For help, please send an e-mail to matthias.weiss@fhr.fraunhofer.de.

Important Dates

19. Feb. 2012: Submission of abstract
05. March 2012: Notification of acceptance
01. April 2012: Registration deadline
14.-16. May 2012: International Workshop CoSeRa 2012

Organisation

J. Ender, Fraunhofer FHR, DE; H. Rauhut, Univ. Bonn, DE; L. Prünte, Fraunhofer FHR, DE; M. Weiß, Fraunhofer FHR, DE; J. Fiege, Fraunhofer FHR, DE; L. Anitori, TNO, NL.

Location & Venue

The convention center (Universitätsclub Bonn) is located near the University (Konviktstr. 9, 53113 Bonn, Germany) in a park area.

A history of more than 2000 years has given Bonn its most varied facets. Historical sights can be spotted throughout the city. The flair of international life and picturesque impressions along the romantic Rhine are waiting to be discovered.

In case of problems, do not hesitate to contact matthias.weiss@fhr.fraunhofer.de

Image Credit: NASA/JPL/Space Science Institute: Full-Res: N00177043.jpg

N00177043.jpg was taken on October 03, 2011 and received on Earth October 04, 2011. The camera was pointing toward SATURN at approximately 1,095,958 kilometers away, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated.

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
