
Monday, August 15, 2011

Compressive Sensing Literature This Week


I am slowly catching up on several papers I have seen in the past few weeks. First, there is a tutorial on compressed sensing taking place in Sarajevo at the end of October by Almir Mutapcic. Second, we have an interesting presentation on improving the speed of testing chips for malicious hardware insertion.


Here are two presentations made at IGARSS'11:

Displaced Phase Center Antenna SAR Imaging Based on Compressed Sensing by Yueguan Lin, Bingchen Zhang, Wen Hong and Yirong Wu,

and High Resolution SAR Imaging Using Random Pulse Timing by Dehong Liu.

Then we have a few papers:

We consider the estimation of multiple room impulse responses from the simultaneous recording of several known sources. Existing techniques are restricted to the case where the number of sources is at most equal to the number of sensors. We relax this assumption in the case where the sources are known. To this end, we propose statistical models of the filters associated with convex log-likelihoods, together with a convex optimization algorithm to solve the inverse problem with the resulting penalties. We provide a comparison between penalties via a set of experiments which shows that our method makes it possible to speed up the recording process with a controlled quality tradeoff.
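For readers who want to play with the idea, here is a minimal sketch (my illustration, not the authors' algorithm) of l1-penalized estimation of sparse filters from the mixture recorded at a single sensor: the known sources are folded into an explicit convolution matrix and the penalized least-squares problem is solved with plain ISTA. All sizes and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, L, T = 3, 64, 2048                  # known sources, filter taps, samples
S = rng.standard_normal((n_src, T))        # known source signals
H_true = np.zeros((n_src, L))              # sparse ground-truth filters
for i in range(n_src):
    taps = rng.choice(L, size=5, replace=False)
    H_true[i, taps] = rng.standard_normal(5)

# Single-sensor recording: sum of source-filter convolutions plus noise.
y = sum(np.convolve(S[i], H_true[i])[:T] for i in range(n_src))
y += 0.01 * rng.standard_normal(T)

# Fold the known sources into an explicit linear operator (fine at toy size):
# column (i, k) is source i delayed by k samples.
A = np.zeros((T, n_src * L))
for i in range(n_src):
    for k in range(L):
        A[k:, i * L + k] = S[i][:T - k]

# ISTA for the l1-penalized least-squares fit of the stacked filters.
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2
h = np.zeros(n_src * L)
for _ in range(300):
    g = h - step * A.T @ (A @ h - y)
    h = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

print("relative error:",
      np.linalg.norm(h - H_true.ravel()) / np.linalg.norm(H_true))
```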

Secrecy using Compressive Sensing by Shweta Agrawal and Sriram Vishwanath. The abstract reads:
This paper uses the compressive sensing framework to establish secure physical layer communication over a Wyner wiretap channel. The idea, at its core, is simple: the paper shows that compressive sensing can exploit channel asymmetry so that a message, encoded as a sparse vector, is decodable with high probability at the legitimate receiver while remaining undecodable, with high probability, at the eavesdropper.
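A toy numerical illustration of the core asymmetry (my sketch, not the paper's construction): the sparse message is measured with a matrix derived from the legitimate channel state, which the eavesdropper cannot know and must guess. Everything here, including the use of OMP as the decoder, is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 5                       # ambient dim, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # sparse message

# Hypothetical setup: the measurement matrix is derived from the legitimate
# receiver's channel state, so the eavesdropper can only guess at it.
A_secret = rng.standard_normal((m, n)) / np.sqrt(m)
A_guess = rng.standard_normal((m, n)) / np.sqrt(m)
y = A_secret @ x                           # what is actually transmitted

def omp(A, y, k):
    """Basic orthogonal matching pursuit: greedily pick k columns of A."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef
    xhat = np.zeros(A.shape[1])
    xhat[support] = coef
    return xhat

for name, A in [("legitimate receiver", A_secret), ("eavesdropper", A_guess)]:
    err = np.linalg.norm(omp(A, y, k) - x) / np.linalg.norm(x)
    print(f"{name}: relative error {err:.3f}")
```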

Real-time encoding and error-resilient wireless transmission of multimedia content require high processing and transmission power. This paper investigates the rate-distortion performance of video transmission over lossy wireless links for low-complexity multimedia sensing devices with a limited budget of available energy per video frame. An analytical/empirical model is developed to determine the received video quality when the overall energy allowed for both encoding and transmitting each frame of a video is fixed and the received data is affected by channel errors. The model is used to compare the received video quality, computation time, and energy consumption per frame of different wireless streaming systems. Furthermore, it is used to determine the optimal allocation of encoded video rate and channel encoding rate for a given available energy budget. The proposed model is then applied to compare the energy-constrained wireless streaming performance of three encoders suitable for a wireless multimedia sensor network environment: H.264, motion JPEG (MJPEG), and our recently developed compressed sensing video encoder (CSV). Extensive results show that CSV, thanks to its low complexity and to a video representation that is inherently resilient to channel errors, is able to deliver video at good quality (an SSIM value of 0.8) through lossy wireless networks with lower energy consumption per frame than competing encoders.

We propose a low delay/complexity sensor system based on the combination of Shannon-Kotel’nikov mapping and compressed sensing (CS). The proposed system uses a 1:2 nonlinear analog coder on the CS measurements in the presence of channel noise. It is shown that the purely analog system, used in conjunction with either maximum a posteriori or minimum mean square error decoding, outperforms the following reference systems in terms of signal-to-distortion ratio: 1) a conventional CS system that assumes noiseless transmission, and 2) a CS-based system which accounts for channel noise during signal reconstruction. The proposed system is also shown to be advantageous in requiring fewer sensors than the reference systems.
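As a rough illustration of the 1:2 expansion idea (my toy, not the authors' system or parameters), one classical construction maps each analog sample onto a point of a two-armed Archimedean spiral, spending two channel uses per sample, and decodes with a nearest-point search, a crude stand-in for the ML/MAP decoders discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha = 2.0                                  # spiral stretch (invented parameter)

def encode(x):
    """Map samples in [-1, 1] to points on a two-armed Archimedean spiral."""
    theta = alpha * np.pi * np.asarray(x)    # signed angle encodes the sample
    return np.stack([theta * np.cos(np.abs(theta)),
                     theta * np.sin(np.abs(theta))], axis=-1) / (alpha * np.pi)

grid = np.linspace(-1, 1, 4001)              # dense codebook for decoding
codebook = encode(grid)

def decode(r):
    """Crude ML decoding: the nearest point on the spiral wins."""
    return grid[np.argmin(np.sum((codebook - r) ** 2, axis=1))]

x = rng.uniform(-1, 1, 200)                  # stand-in for CS measurements
rx = encode(x) + 0.05 * rng.standard_normal((200, 2))   # AWGN channel, 1:2 use
xhat = np.array([decode(r) for r in rx])
print("SDR (dB):", 10 * np.log10(np.mean(x ** 2) / np.mean((x - xhat) ** 2)))
```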

Fast terahertz reflection tomography using block-based compressed sensing by Sang-Heum Cho, Sang-Hun Lee, Chan Nam-Gung, Seoung-Jun Oh, Joo-Hiuk Son, Hochong Park, and Chang-Beom Ahn. The abstract reads:
In this paper, a new fast terahertz reflection tomography is proposed using block-based compressed sensing. Since measuring the time-domain signal on a two-dimensional grid requires excessive time, reducing measurement time is highly desirable in terahertz tomography. The proposed technique directly reduces the number of sampling points in the spatial domain without modulation or transformation of the signal. Compressed sensing in the spatial domain suggests a block-based reconstruction, which substantially reduces computational time without degrading the image quality. An overlap-average method is proposed to remove the block artifacts in block-based compressed sensing. Fast terahertz reflection tomography using the block-based compressed sensing is demonstrated with an integrated circuit and a parched anchovy as examples.
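Here is a 1-D toy sketch of block-based CS with the overlap-average idea (my illustration, not the authors' implementation): overlapping blocks are reconstructed independently and then averaged where they overlap, which suppresses block-boundary artifacts. The block size, stride, DCT sparsity basis, and ISTA solver are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, B, stride, m = 512, 64, 32, 24          # signal, block, stride, meas./block
t = np.linspace(0, 1, N)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# Orthonormal DCT-II matrix: rows are basis vectors, so coeffs = D @ block.
k = np.arange(B)
D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * B)) * np.sqrt(2.0 / B)
D[0] /= np.sqrt(2)

Phi = rng.standard_normal((m, B)) / np.sqrt(m)   # per-block sensing matrix
A = Phi @ D.T                                    # acts on DCT coefficients

def ista(A, y, lam=0.01, iters=400):
    """Plain ISTA for min ||A c - y||^2 / 2 + lam * ||c||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(A.shape[1])
    for _ in range(iters):
        g = c - step * A.T @ (A @ c - y)
        c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return c

recon = np.zeros(N)
counts = np.zeros(N)
for s in range(0, N - B + 1, stride):      # blocks overlap by B - stride samples
    y = Phi @ x[s:s + B]                   # compressive measurements of the block
    recon[s:s + B] += D.T @ ista(A, y)     # independent per-block reconstruction
    counts[s:s + B] += 1
recon /= counts                            # overlap-average removes block seams

print("relative error:", np.linalg.norm(recon - x) / np.linalg.norm(x))
```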

Sparsity-promoting recovery from simultaneous data: a compressive sensing approach by Haneet Wason, Felix J. Herrmann and Tim T.Y. Lin. The summary reads:

Seismic data acquisition forms one of the main bottlenecks in seismic imaging and inversion. The high cost of acquisition work and collection of massive data volumes compel the adoption of simultaneous-source seismic data acquisition, an emerging technology that is developing rapidly, stimulating both geophysical research and commercial efforts. Aimed at improving the performance of marine- and land-acquisition crews, simultaneous acquisition calls for the development of a new set of design principles and post-processing tools. Leveraging developments from the field of compressive sensing, the focus here is on simultaneous-acquisition design and sequential-source data recovery. Apart from proper compressive sensing sampling schemes, the recovery from simultaneous simulations depends on a sparsifying transform that compresses seismic data, is fast, and is reasonably incoherent with the compressive-sampling matrix. Using the curvelet transform, in which seismic data can be represented parsimoniously, the recovery of the sequential-source data volumes is achieved using the sparsity-promoting program SPGL1, a solver based on projected spectral gradients. The main outcome of this approach is a new technology where acquisition-related costs are no longer determined by the stringent Nyquist sampling criterion.
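For context, SPGL1 solves the basis-pursuit denoise (BPDN) program; in the shorthand of this note (with $S^{H}$ the curvelet synthesis operator and $RM$ my stand-in for the combined sampling/mixing operator), the recovery reads

$$\min_{c} \; \|c\|_1 \quad \text{subject to} \quad \|R M S^{H} c - b\|_2 \le \sigma,$$

where $b$ holds the simultaneous-source measurements and $\sigma$ bounds the noise level; the sequential-source data is then synthesized as $S^{H}c$ from the recovered curvelet coefficients $c$.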

Compressed Sensing for Real-Time Energy-Efficient ECG Compression on Wireless Body Sensor Nodes by Hossein Mamaghanian, Nadia Khaled, David Atienza, and Pierre Vandergheynst. The abstract reads:

Wireless body sensor networks (WBSN) hold the promise to be a key enabling information and communications technology for next-generation patient-centric tele-cardiology or mobile cardiology solutions. Through enabling continuous remote cardiac monitoring, they have the potential to achieve improved personalization and quality of care, increased ability of prevention and early diagnosis, and enhanced patient autonomy, mobility and safety. However, state-of-the-art WBSN-enabled electrocardiogram (ECG) monitors still fall short of the required functionality, miniaturization and energy efficiency. Among others, energy efficiency can be improved through embedded ECG compression, in order to reduce airtime over energy-hungry wireless links. In this paper, we quantify the potential of the emerging compressed sensing (CS) signal acquisition/compression paradigm for low-complexity energy-efficient ECG compression on the state-of-the-art Shimmer™ WBSN mote. Interestingly, our results show that CS represents a competitive alternative to state-of-the-art digital wavelet transform (DWT)-based ECG compression solutions in the context of WBSN-based ECG monitoring systems. More specifically, while expectedly exhibiting inferior compression performance compared to its DWT-based counterpart for a given reconstructed signal quality, its substantially lower complexity and CPU execution time enable it to ultimately outperform DWT-based ECG compression in terms of overall energy efficiency. CS-based ECG compression is accordingly shown to achieve a 37.1% extension in node lifetime relative to its DWT-based counterpart for "good" reconstruction quality.
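To see why CS encoding is attractive on a mote, here is a small sketch (my illustration, not the paper's code) of the node-side computation: compression reduces to a single sparse matrix-vector product, a handful of additions per measurement and no transform. The sparse binary sensing matrix and all sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, d = 512, 192, 12                     # window length, measurements, ones/column

# Sparse binary sensing matrix: d ones per column, so compressing one window
# costs roughly n * d additions -- no multiplies, no transform on the node.
Phi = np.zeros((m, n))
for j in range(n):
    Phi[rng.choice(m, d, replace=False), j] = 1.0

ecg_window = rng.standard_normal(n)        # stand-in for a real ECG window
y = Phi @ ecg_window                       # node side: compress, then transmit y
print(f"samples in: {n}, measurements out: {m}, ratio {n / m:.2f}:1")
```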



The following has only an abstract:
Direct inference of protein–DNA interactions using compressed sensing methods by Mohammed AlQuraishi and Harley H. McAdams. The abstract reads:

Compressed sensing has revolutionized signal acquisition, by enabling complex signals to be measured with remarkable fidelity using a small number of so-called incoherent sensors. We show that molecular interactions, e.g., protein–DNA interactions, can be analyzed in a directly analogous manner and with similarly remarkable results. Specifically, mesoscopic molecular interactions act as incoherent sensors that measure the energies of microscopic interactions between atoms. We combine concepts from compressed sensing and statistical mechanics to determine the interatomic interaction energies of a molecular system exclusively from experimental measurements, resulting in a “de novo” energy potential. In contrast, conventional methods for estimating energy potentials are based on theoretical models premised on a priori assumptions and extensive domain knowledge. We determine the de novo energy potential for pairwise interactions between protein and DNA atoms from (i) experimental measurements of the binding affinity of protein–DNA complexes and (ii) crystal structures of the complexes. We show that the de novo energy potential can be used to predict the binding specificity of proteins to DNA with approximately 90% accuracy, compared to approximately 60% for the best performing alternative computational methods applied to this fundamental problem. This de novo potential method is directly extendable to other biomolecule interaction domains (enzymes and signaling molecule interactions) and to other classes of molecular interactions.
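A schematic toy of the setup as I read it (not the authors' method or data): each measured binding affinity is modeled as a linear combination of unknown atom-pair interaction energies, weighted by how often each pair type occurs in the complex's structure, giving an underdetermined linear system attacked with a sparsity-promoting solver. All sizes, the Poisson contact-count model, and the ISTA solver are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pairs, n_complexes, k = 300, 120, 15     # pair types, complexes, active pairs
e_true = np.zeros(n_pairs)                 # unknown pairwise energy potential
e_true[rng.choice(n_pairs, k, replace=False)] = rng.standard_normal(k)

# Contact counts per complex act as the (incoherent) sensing matrix: entry
# (i, j) is how often atom-pair type j occurs in complex i's structure.
C = rng.poisson(1.0, (n_complexes, n_pairs)).astype(float)
C /= np.linalg.norm(C, axis=0)             # normalize columns
dG = C @ e_true + 0.01 * rng.standard_normal(n_complexes)  # measured affinities

# ISTA for the l1-penalized fit: far fewer measurements than unknowns.
step = 1.0 / np.linalg.norm(C, 2) ** 2
lam = 0.02
e = np.zeros(n_pairs)
for _ in range(500):
    g = e - step * C.T @ (C @ e - dG)
    e = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

print("relative error:", np.linalg.norm(e - e_true) / np.linalg.norm(e_true))
```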

The last paper is not about compressive sensing per se but is relevant: Information Complexity and Estimation by Dror Baron. The abstract reads:
We consider an input $x$ generated by an unknown stationary ergodic source $X$ that enters a signal processing system $J$, resulting in $w=J(x)$. We observe $w$ through a noisy channel, $y=z(w)$; our goal is to estimate $x$ from $y$, $J$, and knowledge of $f_{Y|W}$. This is universal estimation, because $f_X$ is unknown. We provide a formulation that describes a trade-off between information complexity and noise. Initial theoretical, algorithmic, and experimental evidence is presented in support of our approach.

The relevant talk is here.

Credit: NASA / JPL / Cornell / Damien Bouic
Endeavour's eastern rim, Opportunity sol 2678
The peaks of Endeavour's rim march into the distance in this view taken by Opportunity on sol 2678. Via Emilie's blog
