Friday, September 06, 2013

The Long Post of the Week,

I am slowly catching up on the pile of arXiv papers I haven't covered yet; enjoy this batch:

In this paper, we explore various sparse regularization techniques for analyzing fMRI data, such as the LASSO, elastic net, and the recently introduced k-support norm. Employing sparsity regularization allows us to handle the curse of dimensionality, a problem commonly found in fMRI analysis. We test these methods on real data from both healthy subjects and cocaine-addicted ones, and we show that although the LASSO has good prediction, it lacks interpretability, since the resulting model is too sparse and the results are highly sensitive to the regularization parameter. We find that we can improve prediction performance over the LASSO using the elastic net or the k-support norm, which is a convex relaxation of sparsity with an L2 penalty that is tighter than the elastic net. The elastic net and the k-support norm overcome the problem of overly sparse solutions, yielding both good prediction and interpretable solutions, with the k-support norm giving the better prediction performance. Our experimental results support the general applicability of the k-support norm in fMRI analysis, both for prediction performance and interpretability.
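As a quick illustration of the sparsity trade-off described above, the toy comparison below fits LASSO and elastic net on synthetic data (the k-support norm has no scikit-learn implementation, so it is omitted; the data dimensions and penalties are illustrative assumptions, not the paper's fMRI setup):

```python
# Sketch: comparing LASSO and elastic-net sparsity on synthetic data.
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
n, p, k = 80, 200, 5                  # samples, features, true sparsity
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 3.0                        # a few truly active features
y = X @ beta + 0.1 * rng.standard_normal(n)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

# Elastic net's weaker l1 penalty typically keeps more features,
# trading extreme sparsity for interpretability.
print("LASSO nonzeros:", np.count_nonzero(lasso.coef_))
print("Elastic net nonzeros:", np.count_nonzero(enet.coef_))
```

With the same overall penalty weight, the elastic net generally retains more (correlated) features than the LASSO, which is the interpretability effect the abstract describes.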

Sparse BLIP: Compressed Sensing with Blind Iterative Parallel Imaging by Huajun She, Rong-Rong Chen, Dong Liang, Edward DiBella, and Leslie Ying
INTRODUCTION: In SENSE-based compressed sensing (CS) methods for parallel imaging such as SparseSENSE (1) and CS-SENSE (2), coil sensitivities are usually obtained from low resolution images or through a separate scan, where accuracy cannot be guaranteed. Inaccuracy in coil sensitivities may lead to serious image artifacts at high acceleration factors. GRAPPA-based CS methods such as L1-SPIRiT (3) avoid explicit use of coil sensitivities, but the channel-by-channel reconstruction is computationally intensive and may also cause non-uniform intensity. Motivated by the success of JSENSE (4,5) in improving coil sensitivity estimation, we propose a new approach to blind compressed sensing in the context of parallel imaging, where the sensing matrix is not known exactly and needs to be reconstructed. We name the method Sparse BLIP (compressed sensing with Blind Iterative Parallel imaging). The proposed method effectively incorporates the sparseness of both the desired image and the coil sensitivities in reconstruction. The proposed method is compared with SparseSENSE and L1-SPIRiT and demonstrates a significant improvement in image quality at high reduction factors.
Radio Tomographic Imaging and Tracking of Stationary and Moving People via Kernel Distance by Yang Zhao, Neal Patwari, Jeff M. Phillips, and Suresh Venkatasubramanian
Network radio frequency (RF) environment sensing (NRES) systems pinpoint and track people in buildings using changes in the signal strength measurements made by a wireless sensor network. It has been shown that such systems can locate people who do not participate in the system by wearing any radio device, even through walls, because of the changes that moving people cause to the static wireless sensor network. However, many such systems cannot locate stationary people. We present and evaluate a system which can locate stationary or moving people, without calibration, by using kernel distance to quantify the difference between two histograms of signal strength measurements. From five experiments, we show that our kernel distance-based radio tomographic localization system performs better than state-of-the-art NRES systems in different non-line-of-sight environments.
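The kernel distance between two signal-strength histograms can be sketched in a few lines; the Gaussian kernel over bin centers and the bandwidth below are illustrative assumptions, not necessarily the paper's choices:

```python
# Kernel distance between two normalized histograms p and q:
# D(p, q) = sqrt( k(p,p) + k(q,q) - 2 k(p,q) ),
# where k(a, b) = sum_ij a_i K(i, j) b_j for a PSD kernel K.
import numpy as np

def kernel_distance(p, q, sigma=1.0):
    bins = np.arange(len(p))
    # Gaussian kernel matrix over bin centers (assumed kernel choice)
    K = np.exp(-(bins[:, None] - bins[None, :]) ** 2 / (2 * sigma ** 2))
    def kappa(a, b):
        return a @ K @ b
    d2 = kappa(p, p) + kappa(q, q) - 2 * kappa(p, q)
    return np.sqrt(max(d2, 0.0))      # clip tiny negatives from rounding

p = np.array([0.1, 0.4, 0.3, 0.2])
q = np.array([0.25, 0.25, 0.25, 0.25])
print(kernel_distance(p, q))          # positive for differing histograms
print(kernel_distance(p, p))          # exactly zero for identical ones
```

Unlike a plain bin-by-bin difference, the kernel smooths across neighboring RSS bins, which is what makes the comparison robust without calibration.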
Sensors of most digital cameras are made of silicon that is inherently sensitive to both the visible and near-infrared parts of the electromagnetic spectrum. In this paper, we address the problem of color and NIR joint acquisition. We propose a framework for the joint acquisition that uses only a single silicon sensor and a slightly modified version of the Bayer color-filter array that is already mounted in most color cameras. Implementing such a design for an RGB and NIR joint acquisition system requires minor changes to the hardware of commercial color cameras. One of the important differences between this design and the conventional color camera is the post-processing applied to the captured values to reconstruct full resolution images. By using a CFA similar to Bayer, the sensor records a mixture of NIR and one color channel in each pixel. In this case, separating NIR and color channels in different pixels is equivalent to solving an under-determined system of linear equations. To solve this problem, we propose a novel algorithm that uses the tools developed in the field of compressive sensing. Our method results in high-quality RGB and NIR images (the average PSNR of more than 30 dB for the reconstructed images) and shows a promising path towards RGB and NIR cameras. 
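The "under-determined system of linear equations" step can be illustrated with generic l1-minimization (basis pursuit) solved as a linear program; this is the standard compressive-sensing machinery the paper builds on, not its CFA-specific algorithm, and the problem sizes are toy assumptions:

```python
# Toy basis pursuit: recover a sparse x from b = A x with m < n,
# via  min ||x||_1  s.t.  A x = b,  written as an LP with x = u - v.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 50, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

c = np.ones(2 * n)                    # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])             # A(u - v) = b
res = linprog(c, A_eq=A_eq, b_eq=b,
              bounds=[(0, None)] * (2 * n), method="highs")
x_rec = res.x[:n] - res.x[n:]
print("max recovery error:", np.max(np.abs(x_rec - x_true)))
```

With enough random measurements relative to the sparsity, the l1 solution coincides with the true sparse vector, which is the principle the NIR/color separation exploits.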

OIRS: Outsourced Image Recovery Service from Compressive Sensing with Privacy Assurance  by Cong Wang, Zhen Xu, Janet Wang

There is a fast-growing trend to outsource large-scale image management systems to the cloud today, where abundant computing resources [1] can be leveraged to efficiently and effectively store, process, and share images among data owners and users [7]. However, for image service outsourcing to be truly successful, there are still fundamental challenges to overcome. Firstly, because the cloud is a public environment operated by external third parties, usually outside the data owner's/users' trusted domain, the outsourcing design has to be privacy-protecting and must sometimes provide legal compliance with various privacy regulations [4]. Secondly, due to the high dimensionality and large scale of the image datasets, it is both necessary and desirable for the outsourcing design to be as efficient and resource-conserving as possible in order to keep the cloud economically attractive. To address these fundamental challenges, we investigate a novel outsourced image recovery service (OIRS) architecture in this paper, which exploits techniques from different domains and takes security, complexity, and efficiency into consideration from the very beginning of the service flow. In particular, OIRS is designed under the compressive sensing framework [3], a recent data sensing/sampling paradigm known for its simplicity in unifying traditional sampling and compression for image acquisition. As shown in Fig. 1, data owners only need to outsource compressed image samples to the cloud for reduced storage overhead and simplified local sensing, while data users can leverage the cloud's computing resources to securely reconstruct images without revealing information from the image samples or the underlying image content. OIRS has the benefit of saving owners'/users' workloads in the image recovery computation. In addition, it incurs a comparable amount of storage overhead and computational complexity at the cloud as current mechanisms do without security considerations. Below we introduce the main idea of the OIRS design, covering the cases of sparse data, non-sparse data, and sampling with noise. Our preliminary analysis and experiments validate the security and efficiency of our design.
Privacy-Preserving Nearest Neighbor Methods by Shantanu Rane and Petros Boufounos
COMPARING two signals is one of the most essential and prevalent tasks in signal processing. A large number of applications fundamentally rely on determining the answers to the following two questions: (1) How should two signals be compared? (2) Given a set of signals and a query signal,
which signals are the nearest neighbors of the query signal, i.e., which signals in the database are most similar to the query signal? The nearest neighbor (NN) search problem is defined as follows: Given a set S containing points in a metric space M, and a query point x ∈ M, find the point in S that is
closest to x. The problem can be extended to K-NN, i.e., determining the K nearest neighbors of x. In this context, the 'points' in question are signals, such as images, videos or other waveforms. The qualifier 'closest' refers to a distance metric, such as the Euclidean distance or Manhattan distance between pairs of points in S. Finding the nearest neighbor of the query point should be at most linear in the database size and is a well-studied problem in conventional NN settings. However, this problem is much less straightforward when the signals under consideration are private, i.e., they cannot be revealed to the party conducting the NN search. At first glance, the problem appears to be a non-starter when privacy constraints are imposed: Is it even possible to find the distance between two signals without knowing what the signals are? Is it possible to determine the minimum of these distances without knowledge of the distances themselves? Fortunately, the answer to both these questions is affirmative. The intent of this article is to provide a tutorial survey of the methods that have been developed to answer these questions, i.e., to solve the NN search problem under a variety of privacy constraints. When thinking of a Privacy-Preserving Nearest Neighbor (PPNN) method, the reader will find it convenient to divide it into two distinct problems: a method for privacy-preserving (PP) distance computation followed by a method for PP minimum finding, as shown in Fig. 1. PPNN problems arise repeatedly in many of today's emerging applications: secure biometric authentication [1, 2], privacy-preserving face recognition [3, 4], private recommender systems [5], privacy-preserving speech processing [6], and many others. The choice of mathematical tools used for PPNN, as well as the structure and complexity of the resulting protocols, are dictated by the privacy model under consideration.
These models encapsulate assumptions such as the privacy requirements, the behavior of participating entities and the possibility of information sharing among the participants.
The privacy guarantee can be based on the assumption that certain problems in number theory, e.g., factorization of large numbers, are intractable for an adversary with limited computational resources. It is important to note that many of these assumptions are hitherto unproven, and that a computational privacy guarantee may be vulnerable to a quantum computer in the future. A more powerful privacy guarantee, based on information theory, ensures that an adversary cannot compromise privacy of any party without knowing a specific key, even if he has unlimited computational resources. In this
article, we cover examples of both kinds of cryptographic tools, i.e., we consider both computationally private and information-theoretically private protocols. The parties in a privacy-preserving protocol may behave in an honest-but-curious fashion, i.e., they may follow the rules of the protocol, but can use all the information made available to them by the protocol to try to guess the data held by other parties. A more severe assumption on the behavior of the parties is that they may be malicious, i.e., they may arbitrarily deviate from the rules of the protocol with the aim of discovering data held by other parties. Furthermore, two or more parties in a protocol may choose to collude with the goal of forcing the protocol to generate a wrong result, or of compromising the data owned by an unsuspecting party. Naturally, it is more difficult to design protocols that are secure
against colluders and malicious parties. In this article, we will concentrate primarily on honest but curious parties—also called semi-honest parties—for two reasons: Firstly, much of the recent surge of secure signal processing research has considered semi-honest, non-colluding parties. Secondly, our goal with this paper is to provide a clear understanding of the basic methods and tools for PPNN protocol design. Many of these methods can later be strengthened to accommodate malicious adversaries via verifiable secret sharing and zero knowledge proofs, if the steep increase in protocol complexity is acceptable. The remainder of this article is organized as follows: In the next section, we describe PPNN tools and protocols in which privacy guarantees are based on computationally intractable problems. Following this section, we describe PPNN tools and protocols with information-theoretic privacy. Then, we discuss the important role that dimensionality-reducing embeddings play in PPNN, either by reducing the complexity of previously discussed cryptographic protocols, or by providing privacy guarantees of their own. Towards the end of the article, we briefly discuss the design of protocols resistant to malicious colluders and discuss the most interesting open problems in the field.
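The PP distance-computation half of a PPNN protocol can be sketched with an additively homomorphic cryptosystem. Below is a toy Paillier-based exchange in which Alice learns the squared Euclidean distance to Bob's vector while Bob sees only ciphertexts; the tiny primes, variable names, and exact message flow are illustrative assumptions for a semi-honest setting, not the article's specific protocol (real deployments need large primes and a vetted library):

```python
# Toy Paillier cryptosystem (additively homomorphic):
#   Enc(a) * Enc(b) = Enc(a + b),   Enc(a)^k = Enc(k * a)   (mod n^2)
import math
import random

def paillier_keygen(p=1009, q=1013):          # toy primes only!
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                      # valid because g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pk, sk = paillier_keygen()
(n,) = pk
n2 = n * n
x = [3, 7, 2]                                 # Alice's private vector
y = [1, 5, 6]                                 # Bob's private vector

# Alice -> Bob: Enc(x_i) for each i, and Enc(sum x_i^2)
cx = [encrypt(pk, xi) for xi in x]
cx2 = encrypt(pk, sum(xi * xi for xi in x) % n)

# Bob homomorphically forms Enc(sum x_i^2 - 2 x_i y_i + y_i^2)
c = cx2
for ci, yi in zip(cx, y):
    c = c * pow(ci, (-2 * yi) % n, n2) % n2   # subtract 2 x_i y_i
c = c * encrypt(pk, sum(yi * yi for yi in y) % n) % n2

dist2 = decrypt(sk, c)                        # Alice decrypts ||x - y||^2
print(dist2)
```

Bob never sees x and Alice never sees y; only the final squared distance is revealed, which is exactly the building block that PP minimum finding is then layered on.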
Vectorial Phase Retrieval of 1-D Signals by Oren Raz, Nirit Dudovich and Boaz Nadler
Abstract—Reconstruction of signals from measurements of their spectral intensities, also known as the phase retrieval problem, is of fundamental importance in many scientific fields. In this paper we present a novel framework, denoted vectorial phase retrieval, for the reconstruction of pairs of signals from spectral intensity measurements of the two signals and of their interference. We show that this new framework can alleviate some of the theoretical and computational challenges associated with classical phase retrieval from a single signal. First, we prove that for compactly supported signals, in the absence of measurement noise, this new setup admits a unique solution. Next, we present a statistical analysis of vectorial phase retrieval and derive a computationally efficient algorithm to solve it. Finally, we illustrate via simulations that our algorithm can accurately reconstruct signals even at considerable noise levels.

Estimating Sparse Precision Matrix: Optimal Rates of Convergence and Adaptive Estimation by Tony Cai, Weidong Liu and Harrison Zhou
Abstract: The precision matrix is of significant importance in a wide range of applications in multivariate analysis. This paper considers adaptive minimax estimation of sparse precision matrices in the high dimensional setting. Optimal rates of convergence are established for a range of matrix norm losses. A fully data-driven estimator based on adaptive constrained l1 minimization is proposed and its rate of convergence is obtained over a collection of parameter spaces. The estimator, called ACLIME, is easy to implement and performs well numerically.
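ACLIME itself has no standard library implementation; as a hedged stand-in, the snippet below estimates a sparse precision matrix with scikit-learn's graphical lasso, an l1-penalized likelihood method rather than ACLIME's constrained l1 minimization, on a toy tridiagonal model:

```python
# Sparse precision-matrix estimation sketch (graphical lasso, not ACLIME).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
p = 5
# True precision: sparse tridiagonal matrix (positive definite)
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
Sigma = np.linalg.inv(Omega)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))   # near-zero off-tridiagonal entries
```

The l1 penalty drives the entries far from the true tridiagonal band toward zero, which is the sparsity pattern recovery the minimax theory quantifies.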

Abstract: Detection of sparse signals arises in a wide range of modern scientific studies. The focus so far has been mainly on Gaussian mixture models. In this paper, we consider the detection problem under a general sparse mixture model and obtain an explicit expression for the detection boundary. It is shown that the fundamental limits of detection are governed by the behavior of the log-likelihood ratio evaluated at an appropriate quantile of the null distribution. We also establish the adaptive optimality of the higher criticism procedure across all sparse mixtures satisfying certain mild regularity conditions. In particular, the general results obtained in this paper recover and extend in a unified manner the previously known results on sparse detection far beyond the conventional Gaussian model and other exponential families.
Phase retrieval combined with digital holography by Eliyahu Osherovich, Michael Zibulevsky, and Irad Yavneh

We present a new method for real- and complex-valued image reconstruction from two intensity measurements made in the Fourier plane: the Fourier magnitude of the unknown image, and the intensity of the interference pattern arising from superimposition of the original signal with a reference beam. This approach can provide significant advantages in digital holography since it poses less stringent requirements on the reference beam. In particular, it does not require spatial separation between the sought signal and the reference beam. Moreover, the reference beam need not be known precisely, and in fact, may contain severe errors, without leading to a deterioration in the reconstruction quality. Numerical simulations are presented to demonstrate the speed and quality of reconstruction.
Tensor Principal Component Analysis via Convex Optimization by Bo Jiang, Shiqian Ma, Shuzhong Zhang
Abstract: This paper is concerned with the computation of the principal components for a general tensor, known as the tensor principal component analysis (PCA) problem. We show that the general tensor PCA problem is reducible to its special case where the tensor in question is super-symmetric with an even degree. In that case, the tensor can be embedded into a symmetric matrix. We prove that if the tensor is rank-one, then the embedded matrix must be rank-one too, and vice versa. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint, for which we propose two solution methods: (1) imposing a nuclear norm penalty in the objective to enforce a low-rank solution; (2) relaxing the rank-one constraint by Semidefinite Programming. Interestingly, our experiments show that both methods yield a rank-one solution with high probability, thereby solving the original tensor PCA problem to optimality with high probability. To further cope with the size of the resulting convex optimization models, we propose to use the alternating direction method of multipliers, which reduces significantly the computational efforts. Various extensions of the model are considered as well.
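The rank-one correspondence at the heart of the paper can be checked numerically: a super-symmetric rank-one fourth-order tensor a⊗a⊗a⊗a unfolds to the rank-one matrix vec(aaᵀ)vec(aaᵀ)ᵀ, whose top eigenvector recovers a up to sign. A small sketch, assuming a noiseless rank-one tensor:

```python
# Matrix embedding of a super-symmetric 4th-order rank-one tensor.
import numpy as np

rng = np.random.default_rng(3)
d = 4
a = rng.standard_normal(d)
a /= np.linalg.norm(a)

T = np.einsum("i,j,k,l->ijkl", a, a, a, a)   # super-symmetric, rank one
M = T.reshape(d * d, d * d)                  # embedded d^2 x d^2 matrix

w, V = np.linalg.eigh(M)
top = V[:, np.argmax(w)].reshape(d, d)       # ~ +/- a a^T
w2, V2 = np.linalg.eigh(top)
a_rec = V2[:, np.argmax(np.abs(w2))]         # dominant eigvec ~ +/- a

print("alignment |<a, a_rec>|:", abs(a @ a_rec))
```

The embedded matrix is exactly rank one here, so the two successive eigendecompositions recover the original vector; the paper's contribution is making this correspondence the basis of a convex (nuclear-norm or SDP) relaxation.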

Compressed sensing (CS) theory promises that one can recover a real-valued sparse signal from a small number of linear measurements. Motivated by network monitoring with link failures, we consider, for the first time, the problem of recovering signals that contain both real-valued entries and corruptions, where the real entries represent transmission delays on normal links and the corruptions represent failed links. Unlike conventional CS, here a measurement is real-valued only if it does not include a failed link, and it is corrupted otherwise. We prove that O((d + 1) max(d, k) log n) nonadaptive measurements are enough to recover all n-dimensional signals that contain k nonzero real entries and d corruptions. We provide explicit constructions of measurements and recovery algorithms. We also analyze the performance of signal recovery when the measurements contain errors.
Energy-Aware Design of Compressed Sensing Systems for Wireless Sensors Under Performance and Reliability Constraints by Fred Chen, Fabian Lim, Omid Abari, Anantha Chandrakasan, and Vladimir Stojanović
Abstract—This paper describes the system design of a compressed sensing (CS) based source encoding system for data compression in wireless sensor applications. We examine the trade-off between the required transmission energy (compression performance) and the desired recovered signal quality in the presence of practical non-idealities such as quantization noise, input signal noise, and channel errors. The end-to-end system evaluation framework was designed to analyze CS performance under practical sensor settings. The evaluation shows that CS compression can enable over 10X transmission energy savings while preserving the recovered signal quality to roughly 8 bits of precision. We further present low-complexity error control schemes tailored to CS that reduce the energy costs by a further 4X, as well as a diversity scheme to protect against burst errors. Results on a real electrocardiography (EKG) signal demonstrate the 10X energy reduction and corroborate the system analysis.
Megathrust earthquakes rupture a broad zone of the subducting plate interface in both along-strike and along-dip directions. The along-dip rupture characteristics of megathrust events, e.g., their slip and energy radiation distribution, reflect depth-varying frictional properties of the slab interface. Here, we report high-resolution frequency-dependent seismic radiation of the four largest megathrust earthquakes in the past 10 years using a compressive-sensing (sparse source recovery) technique, resolving generally low-frequency radiation closer to the trench at shallower depths and high-frequency radiation farther from the trench at greater depths. Together with coseismic slip models and early aftershock locations, our results suggest depth-varying frictional properties at the subducting plate interfaces. The shallower portion of the slab interface (above ∼15 km) is frictionally stable or conditionally stable and is the source region for tsunami earthquakes with large coseismic slip, deficient high-frequency radiation, and few early aftershocks. The slab interface at intermediate depths (∼15–35 km) is the main unstable seismogenic zone for the nucleation of megathrust quakes, typically with large coseismic slip, abundant early aftershocks, and intermediate- to high-frequency radiation. The deeper portion of the slab interface (∼35–45 km) is seismically unstable, though with small coseismic slip, dominant high-frequency radiation, and relatively fewer aftershocks.

Iterative algorithms are now widely used in all areas of signal processing and digital communications. In modern communication systems, iterative algorithms are used for decoding low-density parity-check (LDPC) codes, a popular class of error-correction codes that are now widely used for their exceptional error-rate performance. In a more recent field known as compressed sensing, iterative algorithms are used as a reconstruction method to recover a sparse signal from a linear set of measurements. This thesis primarily deals with the development of low-complexity iterative algorithms for the two aforementioned fields, namely, the design of low-complexity decoding algorithms for LDPC codes, and the development and analysis of a low-complexity reconstruction algorithm called the Interval-Passing Algorithm (IPA) for compressed sensing. In the first part of this thesis, we address the area of decoding algorithms for LDPC codes. It is well known that LDPC codes suffer from the error floor phenomenon in spite of their exceptional performance, where traditional iterative decoders based on belief propagation (BP) fail for certain low-noise configurations. Recently, a novel class of decoders called "finite alphabet iterative decoders (FAIDs)" was proposed that is capable of surpassing BP in the error floor at much lower complexity. In this work, we focus on the problem of selecting particularly good FAIDs for column-weight-three codes over the binary symmetric channel (BSC). Traditional methods for decoder selection use asymptotic techniques such as density evolution, which do not guarantee good performance on finite-length codes, especially in the error floor region. Instead, we propose a methodology for selection that relies on the knowledge of potentially harmful topologies that could be present in a code, using the concept of noisy trapping sets.
Numerical results are provided to show that FAIDs selected based on our methodology outperform BP in the error floor on several codes. In the second part of this thesis, we address the area of iterative reconstruction algorithms for compressed sensing. Iterative algorithms have been proposed for compressed sensing in order to tackle the complexity of the LP reconstruction method. In this work, we modify and analyze a low-complexity reconstruction algorithm called the IPA, which uses sparse matrices as measurement matrices. Similar to what has been done for decoding algorithms in the area of coding theory, we analyze the failures of the IPA and link them to the stopping sets of the binary representation of the sparse measurement matrices used. The performance of the IPA makes it a good trade-off between the complex L1-minimization reconstruction and the very simple verification decoding.
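For a flavor of what a low-complexity iterative CS reconstruction looks like, here is a generic iterative soft-thresholding (ISTA) sketch; note the IPA itself passes interval messages over sparse binary measurement matrices and is not reproduced here, and the problem sizes below are toy assumptions:

```python
# Generic iterative soft-thresholding (ISTA) for sparse recovery:
# repeat a gradient step on ||Ax - y||^2 followed by soft-thresholding.
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 60, 128, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 2
y = A @ x_true

lam = 0.01                                   # l1 weight
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
x = np.zeros(n)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))       # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # shrink

print("reconstruction error:", np.linalg.norm(x - x_true))
```

Each iteration costs only two matrix-vector products, which is the kind of per-iteration complexity that makes iterative reconstruction competitive with LP decoding.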

Recovery of the sparsity pattern (or support) of an unknown sparse vector from a small number of noisy linear measurements is an important problem in compressed sensing. In this paper, the high-dimensional setting is considered. It is shown that if the measurement rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector, then the optimal sparsity pattern estimate will have a constant fraction of errors. Lower bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector. The tightness of the bounds in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing achievable bounds. Near optimality is shown for a wide variety of practically motivated signal models.

Abstract—In recent work, two different methods have been used to characterize the fundamental limits of compressed sensing. On the one hand are rigorous bounds based on informationtheoretic arguments or the analysis of specific algorithms. On the other hand are exact but heuristic predictions made using the replica method from statistical physics. In this paper, it is shown that, for certain problem settings, these bounds are in agreement, and thus provide a rigorous and accurate characterization of the compressed sensing problem. This characterization shows that the limits of sparse recovery can be quantified succinctly in terms of an effective signal-to-interference-plus-noise ratio, that depends on the number of measurements and the behavior of the sparse components themselves. Connections with the MMSE dimension by Wu and Verdu and minimax behavior of approximate message passing by Donoho et al. are discussed.

Curvelets as a sparse basis for compressed sensing magnetic resonance imaging by David S. Smith, Lori R. Arlinghaus, Thomas E. Yankeelov, and Edward B. Welch
We present an example of compressed sensing magnetic resonance imaging reconstruction where curvelets instead of wavelets provide a superior sparse basis when coupled to a group sparse representation of chemical exchange saturation transfer (CEST) imaging of the human breast. Taking a fully sampled CEST acquisition from a healthy volunteer, we retrospectively undersampled by a factor of four. We find that a group-sparse formulation of the reconstruction coupled with either Cohen-Daubechies-Feauveau 9/7 wavelets or curvelets provided superior results to a spatial-only regularized reconstruction. Between the group sparse reconstructions, the curvelet-regularized reconstruction outperformed the wavelet-regularized reconstruction. 
A previous approach to narrowband interference cancellation based on compressed sensing (CS) in wideband receivers uses orthogonal projections to project away from the interference. This is not effective in the presence of a nonlinear LNA (low-noise amplifier) and finite-bit ADCs (analog-to-digital converters), because the nonidealities present will result in irresolvable intermodulation components and corrupt the signal reconstruction. Cancelling out the interferer before it reaches the LNA thus becomes very important. A CS measurement matrix with randomly placed zeros in the frequency domain helps in this regard by removing the effect of interference when the signal measurements are performed before the LNA. Using this idea, impressive performance is obtained under highly idealized hardware assumptions. This thesis investigates the use of binary sequences, which make the hardware implementation simple. Searching for sequences with many spectral nulls turns out to be nontrivial. A theoretical approach for estimating the probability of nulls is provided to reduce significant computational effort in the search and is shown to be close to actual search iterations. The use of real binary sequences (generated using ideal switches) obtained through the search does not outperform the orthogonal projection method in the presence of a nonlinear LNA.

Abstract. Compressive sensing (CS) theory has drawn great interest in recent years and has led to new image-acquisition techniques in many different fields. This research investigates a CS-based active underwater laser serial imaging system, which employs a spatial light modulator (SLM) at the source. A multiscale polarity-flipping measurement matrix and a model-assisted image reconstruction concept are proposed to address limitations imposed by a scattering medium. These concepts are also applicable to CS-based imaging in atmospheric environments characterized by fog, rain, or clouds. Simulation results comparing the performance of the proposed technique with that of traditional laser line scan (LLS) sensors and other structured-illumination-based imagers are analyzed. Experimental results from over-the-air and underwater tests are also presented. The potential for extending the proposed frame-based imaging technique to the traditional line-by-line scanning mode is discussed.

In a sensor network with remote sensor devices, it is important to have a method that can accurately localize a sound event with a small amount of data transmitted from the sensors. In this paper, we propose a novel method for localization of a sound source using compressive sensing. Instead of sampling a large amount of data at the Nyquist sampling rate in time domain, the acoustic sensors take compressive measurements integrated in time. The compressive measurements can be used to accurately compute the location of a sound source.
Dictionaries adapted to the data provide superior performance when compared to predefined dictionaries in applications involving sparse representations. Algorithmic stability and generalization are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representation of image patches, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-lines clustering, in order to learn a hierarchical dictionary with multiple levels. Furthermore, we propose a regularized pursuit scheme for computing sparse representations using a multilevel dictionary. Using simulations with natural image patches, we demonstrate the stability and generalization characteristics of the proposed algorithm. Experiments also show that improvements in denoising performance are obtained with multilevel dictionaries when compared to global K-SVD dictionaries. Furthermore, we propose a robust variant of multilevel learning for severe degradations that occur in applications like compressive sensing. Results with random projection-based compressive recovery show that the multilevel dictionary and its robust variant provide improved performances compared to a baseline K-SVD dictionary.
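The K-lines clustering step the multilevel dictionary is built on can be sketched directly: alternately assign each sample to the 1-D subspace (line) it projects onto best, then refit each line as the top singular vector of its cluster. A minimal version on assumed toy data (not the paper's image patches):

```python
# Minimal K-lines clustering: cluster points around 1-D subspaces.
import numpy as np

rng = np.random.default_rng(5)
# Toy data: points along the two coordinate axes, plus small noise
X = np.vstack([np.outer(rng.standard_normal(100), [1.0, 0.0]),
               np.outer(rng.standard_normal(100), [0.0, 1.0])])
X += 0.01 * rng.standard_normal(X.shape)

K = 2
D = rng.standard_normal((K, 2))                  # line directions
D /= np.linalg.norm(D, axis=1, keepdims=True)
for _ in range(10):
    # Assignment: minimizing distance to a line = maximizing |x . d|
    labels = np.argmax(np.abs(X @ D.T), axis=1)
    for j in range(K):
        pts = X[labels == j]
        if len(pts):
            # Refit: top right singular vector of the cluster
            _, _, Vt = np.linalg.svd(pts, full_matrices=False)
            D[j] = Vt[0]

print(np.round(np.abs(D), 2))                    # ~ the two axes
```

The learned directions become one level of the hierarchical dictionary; the paper then recurses on residuals to build the multilevel structure.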
We consider the problem of recovering a block (or group) sparse signal from an underdetermined set of random linear measurements, which appear in compressed sensing applications such as radar and imaging. Recent results of Donoho, Johnstone, and Montanari have shown that approximate message passing (AMP) in combination with Stein's shrinkage outperforms group LASSO for large block sizes. In this paper, we prove that, for a fixed block size and in the strong undersampling regime (i.e., having very few measurements compared to the ambient dimension), AMP cannot improve upon group LASSO, thereby complementing the results of Donoho et al.
Group-based sparsity models have proven instrumental in linear regression problems for recovering signals from far fewer measurements than standard compressive sensing. The main promise of these models is the recovery of "interpretable" signals along with the identification of their constituent groups. To this end, we establish a combinatorial framework for group-model selection problems and highlight the underlying tractability issues revolving around such notions of interpretability when the regression matrix is simply the identity operator. We show that, in general, claims of correctly identifying the groups with convex relaxations would lead to polynomial-time solution algorithms for a well-known NP-hard problem, the weighted maximum cover problem. Instead, leveraging a graph-based understanding of group models, we describe group structures which enable correct model identification in polynomial time via dynamic programming. We also show that group structures that lead to totally unimodular constraints have tractable discrete as well as convex relaxations. Finally, we study the Pareto frontier of budgeted group-sparse approximations for the tree-based sparsity model of Baraniuk et al. (2010) and illustrate identification and computation trade-offs between our framework and the existing convex relaxations.
Multi-User Detection (MUD) is fundamental not only to cellular wireless communication but also to the Radio-Frequency Identification (RFID) technology that supports supply chain management. The challenge of MUD is that of demodulating mutually interfering signals, and the two biggest impediments are the asynchronous character of random access and the lack of channel state information. Given that at any time instant the number of active users is typically small, the promise of Compressive Sensing (CS) is the demodulation of sparse superpositions of signature waveforms from very few measurements. This paper begins by unifying two front-end architectures proposed for MUD by showing that both lead to the same discrete signal model. Algorithms are presented for coherent and noncoherent detection that are based on iterative matching pursuit. Noncoherent detection is all that is needed in the application to RFID technology, where only the identity of the active users is required. The coherent detector is also able to recover the transmitted symbols. It is shown that compressive demodulation requires $\mathcal{O}(K\log N(\tau+1))$ samples to recover $K$ active users, whereas standard MUD requires $N(\tau+1)$ samples to process $N$ total users with a maximal delay $\tau$. Performance guarantees are derived for both coherent and noncoherent detection that are identical in the way they scale with the number of active users. The power profile of the active users is shown to be less important than the SNR of the weakest user. Gabor frames and Kerdock codes are proposed as signature waveforms, and numerical examples demonstrate the superior performance of Kerdock codes: the same probability of error with less than half the samples.
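The iterative matching pursuit at the heart of both detectors can be sketched generically: correlate the residual against all signature waveforms, add the best match to the active set, and re-fit the selected coefficients by least squares. The sketch below is plain orthogonal matching pursuit, a stand-in of our own; the paper's coherent and noncoherent variants additionally handle asynchronous delays and noncoherent metrics.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily select the column of Phi (the
    signature waveform) most correlated with the current residual, then
    re-fit all selected coefficients by least squares and update the
    residual. Returns the estimate and the recovered support (active users)."""
    residual = y.astype(float)
    support, x = [], np.zeros(Phi.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching signature
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef          # orthogonalized residual
    x[support] = coef
    return x, sorted(support)
```

With $K$ active users out of $N$, the support returned identifies the users (sufficient for RFID), while the fitted coefficients carry the transmitted symbols for the coherent case.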
This paper considers the problem of compressive sensing over a finite alphabet, where the finite alphabet may be inherent to the nature of the data or a result of quantization. There are multiple examples of finite alphabet based static as well as time-series data with inherent sparse structure, and quantizing real values is an essential step while handling real data in practice. We show that there are significant benefits to analyzing the problem while incorporating its finite alphabet nature, versus ignoring it and employing a conventional real alphabet based toolbox. Specifically, when the alphabet is finite, our techniques (a) have a lower sample complexity compared to real-valued compressive sensing for sparsity levels below a threshold; (b) facilitate constructive designs of sensing matrices based on coding-theoretic techniques; (c) enable one to solve the exact $\ell_0$-minimization problem in polynomial time rather than an approach of convex relaxation followed by sufficient conditions for when the relaxation matches the original problem; and finally, (d) allow for a smaller amount of data storage (in bits).
Dynamic Filtering of Sparse Signals using Reweighted ℓ1 by Adam S. Charles and Christopher J. Rozell.
Accurate estimation of undersampled time-varying signals improves as stronger signal models provide more information to aid the estimator. In classic Kalman filter-type algorithms, dynamic models of signal evolution are highly leveraged, but there is little exploitation of structure within a signal at a given time. In contrast, standard sparse approximation schemes (e.g., L1 minimization) utilize strong structural models for a single signal, but do not admit obvious ways to incorporate dynamic models for data streams. In this work we introduce a causal estimation algorithm to estimate time-varying sparse signals. This algorithm is based on a hierarchical probabilistic model that uses re-weighted L1 minimization as its core computation, and propagates second order statistics through time similar to classic Kalman filtering. The resulting algorithm achieves very good performance, and appears to be particularly robust to errors in the dynamic signal model.
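The re-weighted L1 core computation can be illustrated in isolation: solve a weighted LASSO, then set each weight inversely proportional to the magnitude of the previous estimate, so that confident coefficients are penalized less on the next pass. The sketch below is our own, in the spirit of Candès-Wakin-Boyd re-weighting; the paper's dynamic filter additionally propagates second-order statistics across time steps, which is omitted here.

```python
import numpy as np

def soft(z, t):
    """Elementwise soft-thresholding, the prox of a (weighted) L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def reweighted_l1(A, y, lam=0.1, n_outer=5, n_inner=100, eps=1e-2):
    """Re-weighted L1: repeatedly solve
    min_x 0.5*||y - Ax||^2 + lam * sum_i w_i*|x_i|
    by ISTA, updating w_i = 1/(|x_i| + eps) from the previous solution."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    w = np.ones(A.shape[1])
    for _ in range(n_outer):
        for _ in range(n_inner):           # inner weighted-LASSO solve (ISTA)
            z = x - (A.T @ (A @ x - y)) / L
            x = soft(z, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)        # re-weighting step
    return x
```

In the dynamic setting, the weights would instead be driven by the prediction from the signal's dynamic model, which is how the temporal prior enters the L1 core.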

Optical-resolution photoacoustic microscopy (OR-PAM) is becoming a vital tool for studying the microcirculation system in vivo. By increasing the numerical aperture of optical focusing, the lateral resolution of OR-PAM can be improved; however, the depth of focus and thus the imaging range will be sacrificed correspondingly. In this work, we report our development of blind-deconvolution optical-resolution photoacoustic microscopy (BD-PAM) that can provide a lateral resolution ~2-fold finer than that of conventional OR-PAM (3.04 vs. 5.78μm), without physically increasing the system’s numerical aperture. The improvement achieved with BD-PAM is demonstrated by imaging graphene nanoparticles and the microvasculature of mice ears in vivo. Our results suggest that BD-PAM may become a valuable tool for many biomedical applications that require both fine spatial resolution and extended depth of focus.
A FUNDAMENTAL PITFALL IN BLIND DECONVOLUTION WITH SPARSE AND SHIFT-INVARIANT PRIORS by Alexis Benichoux, Emmanuel Vincent, Remi Gribonval. The abstract reads:
We consider the problem of blind sparse deconvolution, which is common in both image and signal processing. To counter-balance the ill-posedness of the problem, many approaches are based on the minimization of a cost function. A well-known issue is a tendency to converge to an undesirable trivial solution. Besides domain specific explanations (such as the nature of the spectrum of the blurring filter in image processing) a widespread intuition behind this phenomenon is related to scaling issues and the nonconvexity of the optimized cost function. We prove that a fundamental issue lies in fact in the intrinsic properties of the cost function itself: for a large family of shift-invariant cost functions promoting the sparsity of either the filter or the source, the only global minima are trivial. We complete the analysis with an empirical method to verify the existence of more useful local minima.

We present a novel multiresolution compression-domain GPU volume rendering architecture designed for interactive local and networked exploration of rectilinear scalar volumes on commodity platforms. In our approach, the volume is decomposed into a multiresolution hierarchy of bricks. Each brick is further subdivided into smaller blocks, which are compactly described by sparse linear combinations of prototype blocks stored in an overcomplete dictionary. The dictionary is learned, using limited computational and memory resources, by applying the K-SVD algorithm to a re-weighted non-uniformly sampled subset of the input volume, harnessing the recently introduced method of coresets. The result is a scalable high quality coding scheme, which allows very large volumes to be compressed off-line and then decompressed on-demand during real-time GPU-accelerated rendering. Volumetric information can be maintained in compressed format through all the rendering pipeline. In order to efficiently support high quality filtering and shading, a specialized real-time renderer closely coordinates decompression with rendering, combining at each frame images produced by raycasting selectively decompressed portions of the current view- and transfer-function-dependent working set. The quality and performance of our approach is demonstrated on massive static and time-varying datasets.
Spatial Compressive Sensing in MIMO Radar with Random Arrays by Marco Rossi, Alexander M. Haimovich, and Yonina C. Eldar
Abstract—We study compressive sensing in the spatial domain for target localization using MIMO radar. By leveraging a joint sparse representation, we extend the single-pulse framework proposed in [1] to a multi-pulse one. For this scenario, we devise a tree-based matching pursuit algorithm to solve the nonconvex localization problem. It is shown that this method can achieve high resolution target localization with a highly undersampled MIMO radar with transmit/receive elements placed at random. Moreover, a lower bound is developed on the number of transmit/receive elements required to ensure accurate target localization with high probability.
Compressive Sensing with Unknown Parameters by Marco Rossi, Alexander M. Haimovich, and Yonina C. Eldar
Abstract—This work addresses target detection from a set of compressive sensing radar measurements corrupted by additive white Gaussian noise. In previous work, we studied target localization using compressive sensing in the spatial domain, i.e., the use of an undersampled MIMO radar array, and proposed the Multi-Branch Matching Pursuit (MBMP) algorithm, which requires knowledge of the number of targets. Generalizing the MBMP algorithm, we propose a framework for target detection, which has several important advantages over previous methods: (i) it is fully adaptive; (ii) it addresses the general multiple measurement vector (MMV) setting; (iii) it provides a finite data records analysis of false alarm and detection probabilities, which holds for any measurement matrix. Using numerical simulations, we show that the proposed algorithm is competitive with respect to state-of-the-art compressive sensing algorithms for target detection.
This paper presents a novel texture synthesis algorithm that performs a sparse expansion of the patches of the image in a dictionary learned from an input exemplar. The synthesized texture is computed through the minimization of a non-convex energy that takes into account several constraints. Our first contribution is the computation of a sparse expansion of the patches imposing that the dictionary atoms are used in the same proportions as in the exemplar. This is crucial to enable a fair representation of the features of the input image during the synthesis process. Our second contribution is the use of additional penalty terms in the variational formulation to maintain the histogram and the low frequency content of the input. Lastly, we introduce a non-linear reconstruction process that stitches together patches without introducing blur. Numerical results illustrate the importance of each of these contributions to achieve state-of-the-art texture synthesis.

Image Credit: NASA/JPL/Space Science Institute

N00215993.jpg was taken on August 30, 2013 and received on Earth August 31, 2013. The camera was pointing toward IAPETUS at approximately 1,548,304 miles (2,491,753 kilometers) away, and the image was taken using the CL1 and GRN filters. This image has not been validated or calibrated. 

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
