We have a slew of preprints and papers on both compressive sensing and advanced matrix factorization. Enjoy the weekend:

Overview of compressive sensing techniques applied in holography Yair Rivenson, Adrian Stern, and Bahram Javidi

In recent years compressive sensing (CS) has been successfully introduced in digital holography (DH). Depending on the ability to sparsely represent an object, the CS paradigm provides an accurate object reconstruction framework from a relatively small number of encoded signal samples. DH has proven to be an efficient and physically realizable sensing modality that can exploit the benefits of CS. In this paper, we provide an overview of the theoretical guidelines for application of CS in DH and demonstrate the benefits of compressive digital holographic sensing.

A Fréchet Mean Approach for Compressive Sensing Data Acquisition and Reconstruction in Wireless Sensor Networks by Wei Chen, Miguel R. D. Rodrigues, and Ian J. Wassell

Compressive sensing leverages the compressibility of natural signals to trade off the convenience of data acquisition against the computational complexity of data reconstruction. Thus, CS appears to be an excellent technique for data acquisition and reconstruction in a wireless sensor network (WSN), which typically employs a smart fusion center (FC) with high computational capability and several dumb front-end sensors having limited energy storage. This paper presents a novel signal reconstruction method based on CS principles for applications in WSNs. The proposed method exploits both the intra-sensor and inter-sensor correlation to reduce the number of samples required for reconstruction of the original signals. The novelty of the method relates to the use of the Fréchet mean of the signals as an estimate of their sparse representations in some basis. This crude estimate of the sparse representation is then utilized in an enhanced data recovering convex algorithm, i.e., the penalized ℓ1 minimization, and an enhanced data recovering greedy algorithm, i.e., the precognition matching pursuit (PMP). The superior reconstruction quality of the proposed method is demonstrated by using data gathered by a WSN located in the Intel Berkeley Research lab.
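As a rough illustration of the idea (not the authors' code), the sketch below warm-starts a plain ISTA solver for the penalized ℓ1 problem with the Euclidean mean of a few correlated sparse signals, standing in for the Fréchet mean estimate; the Gaussian sensing matrix and all parameter values are assumptions made for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 60, 5

# A shared support models the inter-sensor correlation in the WSN.
support = rng.choice(n, size=k, replace=False)
signals = []
for _ in range(4):
    x = np.zeros(n)
    x[support] = 1.0 + 0.1 * rng.standard_normal(k)  # correlated amplitudes
    signals.append(x)

# Crude mean estimate: under the Euclidean metric the Frechet mean is the
# plain average, used here as a stand-in for the paper's estimate.
mean_est = np.mean(signals, axis=0)

# One node's compressive measurements.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ signals[0]

# ISTA for the penalized l1 problem, warm-started at the mean estimate.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x_hat = mean_est.copy()
for _ in range(300):
    g = x_hat - step * (A.T @ (A @ x_hat - y))
    x_hat = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold

err = np.linalg.norm(x_hat - signals[0]) / np.linalg.norm(signals[0])
```

The warm start is what the mean estimate buys: starting near the shared support, the iterations need far fewer steps than starting from zero.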

On the Performance Bound of Sparse Estimation with Sensing Matrix Perturbation by Yujie Tang, Laming Chen, Yuantao Gu. The abstract reads:

This paper focuses on sparse estimation in the situation where both the sensing matrix and the measurement vector are corrupted by additive Gaussian noise. The performance bound of sparse estimation is analyzed and discussed in depth. Two types of lower bounds, the constrained Cramér-Rao bound (CCRB) and the Hammersley-Chapman-Robbins bound (HCRB), are discussed. It is shown that the situation with sensing matrix perturbation is more complex than the one with only measurement noise. For the CCRB, its closed-form expression is deduced. It demonstrates a gap between the maximal and nonmaximal support cases. It is also revealed that a gap lies between the CCRB and the MSE of the oracle pseudoinverse estimator, but it approaches zero asymptotically when the problem dimensions tend to infinity. For the tighter HCRB, despite the difficulty of obtaining a simple expression for a general sensing matrix, a closed-form expression in the unit sensing matrix case is derived for a qualitative study of the performance bound. It is shown that the gap between the maximal and nonmaximal cases is eliminated for the HCRB. Numerical simulations are performed to verify the theoretical results in this paper.

Generalized Distributed Compressive Sensing by Jeonghun Park, Seunggye Hwang, Janghoon Yang, Dongku Kim. The abstract reads:

Distributed Compressive Sensing (DCS) improves the signal recovery performance of multi-signal ensembles by exploiting both intra- and inter-signal correlation and sparsity structure. However, the existing DCS was proposed for a very limited ensemble of signals that has a single common information component \cite{Baron:2009vd}. In this paper, we propose a generalized DCS (GDCS) which can improve sparse signal detection performance given arbitrary types of common information, classified into not just full common information but also a variety of partial common information. The theoretical bound on the required number of measurements using the GDCS is obtained. Unfortunately, the GDCS may require considerable a priori knowledge of the various common information shared across the ensemble of signals to improve performance over the existing DCS. To deal with this problem, we propose a novel algorithm that can search for the correlation structure among the signals, with which the proposed GDCS improves detection performance even without a priori knowledge of the correlation structure for the case of arbitrarily correlated multi-signal ensembles.

Cooperative Sparsity Pattern Recovery in Distributed Networks Via Distributed-OMP by Thakshila Wimalajeewa, Pramod K. Varshney. The abstract reads:

In this paper, we consider the problem of collaboratively estimating the sparsity pattern of a sparse signal with multiple measurement data in distributed networks. We assume that each node makes Compressive Sensing (CS) based measurements via random projections regarding the same sparse signal. We propose a distributed greedy algorithm based on Orthogonal Matching Pursuit (OMP), in which the sparse support is estimated iteratively while fusing indices estimated at distributed nodes. In the proposed distributed framework, each node has to perform fewer iterations of OMP than the sparsity index of the sparse signal. Thus, with each node having a very small number of compressive measurements, a significant performance gain in support recovery is achieved via the proposed collaborative scheme compared to the case where each node estimates the sparsity pattern independently and then fusion is performed to get a global estimate. We further extend the algorithm to estimate the sparsity pattern in a binary hypothesis testing framework, where the algorithm first detects the presence of a sparse signal, collaborating among nodes with fewer iterations of OMP, and then increases the number of iterations to estimate the sparsity pattern only if the signal is detected.
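A single-node, textbook OMP is the building block of the scheme; a minimal sketch (with the distributed index-fusion step omitted) might look as follows, where the problem sizes and the support are arbitrary choices for the toy example:

```python
import numpy as np

def omp(A, y, n_iter):
    """Plain Orthogonal Matching Pursuit: greedily add the atom most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x, sorted(support)

rng = np.random.default_rng(1)
n, m, k = 64, 48, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
true_support = [3, 17, 40, 55]
x0 = np.zeros(n)
x0[true_support] = [1.5, -2.0, 1.0, 0.7]
y = A @ x0
x_hat, est_support = omp(A, y, k)
```

In the distributed version described above, each node would run only a few such iterations and exchange its selected indices with the others before continuing.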

Approximate Projected Generalized Gradient Methods with Sparsity-inducing Penalties by Laming Chen, Yuantao Gu. The abstract reads:

Projected gradient (or subgradient) methods are simple and typical approaches to minimize a convex function with constraints. In the area of sparse recovery, however, many research results reveal that non-convex penalties induce better sparsity than convex ones. In this paper, the idea of projected subgradient methods is extended to minimize a class of sparsity-inducing penalties with linear constraints, where these penalties are not necessarily convex. To make the algorithm computationally tractable for large-scale problems, a uniform approximate projection is applied in the projection step. A theoretical convergence analysis of the proposed method, the approximate projected generalized gradient (APGG) method, is provided in the noisy scenario. The result reveals that if the initial solution satisfies certain requirements, the bound on the recovery error is linear in both the noise term and the step size of APGG. In addition, the parameter selection rules and the initial criteria are analyzed. If the approximate least squares solution is adopted as the initial one, the result reveals how non-convex the penalty can be while still guaranteeing convergence to the global optimal solution. Numerical simulations are performed to test the performance of the proposed method and verify the theoretical analysis. Contributions of this paper are compared with some existing results at the end.

Coherence-based Partial Exact Recovery Condition for OMP/OLS by Cedric Herzet, Charles Soussen, Jerome Idier, Remi Gribonval.

We address the exact recovery of the support of a k-sparse vector with Orthogonal Matching Pursuit (OMP) and Orthogonal Least Squares (OLS) in a noiseless setting. We consider the scenario where OMP/OLS have selected good atoms during the first l iterations (l < k) and derive a new sufficient and worst-case necessary condition for their success in k steps. Our result is based on the coherence µ of the dictionary and relaxes Tropp's well-known condition µ < 1/(2k − 1) to the case where OMP/OLS have a partial knowledge of the support.
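The coherence µ referenced here is cheap to compute. The snippet below, a minimal sketch rather than anything from the paper, evaluates it for a random dictionary and derives the largest sparsity k certified by Tropp's condition µ < 1/(2k − 1); the dictionary and its dimensions are illustrative assumptions.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct normalized columns."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)   # ignore self-correlations
    return float(G.max())

rng = np.random.default_rng(2)
D = rng.standard_normal((40, 60))   # a random, unstructured dictionary
mu = mutual_coherence(D)

# Largest k certified by Tropp's condition mu < 1/(2k - 1):
k_max = int(np.ceil((1.0 / mu + 1.0) / 2.0)) - 1
```

For unstructured random dictionaries of this size µ is fairly large, so the coherence-based guarantee only certifies very small sparsity levels, which is exactly why relaxations like the partial-support condition above are of interest.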

In this paper a new result on the recovery of sparse vectors from deterministic and noisy measurements by ℓ1 minimization is given. The sparse vector is randomly chosen and follows a generic p-sparse model introduced by Candès et al. The main theorem ensures consistency of ℓ1 minimization with high probability. This first result is then extended to compressible vectors.

Compressive Schlieren Deflectometry by Prasad Sudhakar, Laurent Jacques, Xavier Dubois, Philippe Antoine, Luc Joannes

Schlieren deflectometry aims at characterizing the deflections undergone by refracted incident light rays at any surface point of a transparent object. For smooth surfaces, each surface location is actually associated with a sparse deflection map (or spectrum). This paper presents a novel method to compressively acquire and reconstruct such spectra. This is achieved by altering the way deflection information is captured in a common Schlieren Deflectometer, i.e., the deflection spectra are indirectly observed by the principle of spread spectrum compressed sensing. These observations are realized optically using a 2-D Spatial Light Modulator (SLM) adjusted to the corresponding sensing basis and whose modulations encode the light deviation subsequently recorded by a CCD camera. The efficiency of this approach is demonstrated experimentally on the observation of a few test objects. Further, using a simple parametrization of the deflection spectra, we show that relevant key parameters can be computed directly from the measurements, avoiding full reconstruction.

Low-rank Matrix Completion using Alternating Minimization by Prateek Jain, Praneeth Netrapalli, Sujay Sanghavi

Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge.

In the alternating minimization approach, the low-rank target matrix is written in a bilinear form, i.e. $X = UV^\dag$; the algorithm then alternates between finding the best $U$ and the best $V$. Typically, each alternating step in isolation is convex and tractable. However, the overall problem becomes non-convex and there has been almost no theoretical understanding of when this approach yields a good result.

In this paper we present the first theoretical analysis of the performance of alternating minimization for matrix completion, and for the related problem of matrix sensing. For both of these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis.
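A bare-bones version of the alternating minimization loop for matrix completion, with V fixed solving a small least-squares problem per row of U over that row's observed entries and then swapping roles, can be sketched as follows; the sampling rate, rank, and sizes are arbitrary choices for a toy example, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, r = 50, 40, 2
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # rank-2 target
mask = rng.random((n1, n2)) < 0.5          # ~50% of the entries are observed

U = rng.standard_normal((n1, r))
V = rng.standard_normal((n2, r))
for _ in range(50):
    # With V fixed, each row of U solves a least-squares problem over
    # that row's observed entries; then the roles are swapped.
    for i in range(n1):
        obs = mask[i]
        U[i], *_ = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)
    for j in range(n2):
        obs = mask[:, j]
        V[j], *_ = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
```

Each inner solve is convex (indeed, ordinary least squares), which is the tractable-per-step / non-convex-overall structure the abstract describes.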

Sparse and Optimal Acquisition Design for Diffusion MRI and Beyond by Cheng Guan Koay, Evren Özarslan, Kevin M Johnson, M. Elizabeth Meyerand

The focus of this paper is on the development of a sparse and optimal acquisition (SOA) design for diffusion MRI multiple-shell acquisition and beyond. A novel optimality criterion is proposed for sparse multiple-shell acquisition and quasi multiple-shell designs in diffusion MRI, together with a novel and effective semi-stochastic, moderately greedy combinatorial search strategy with simulated annealing to locate the optimum design or configuration. Even though the number of distinct configurations for a given set of diffusion gradient directions is very large in general (e.g., on the order of 10^{232} for a set of 144 diffusion gradient directions), the proposed search strategy was found to be effective in finding the optimum configuration. It was found that the square design is the most robust (i.e., with stable condition numbers and A-optimal measures under varying experimental conditions) among many other possible designs of the same sample size. Under the same performance evaluation, the square design was found to be more robust than the widely used sampling schemes similar to those of 3D radial MRI and of diffusion spectrum imaging (DSI).

Few-view single photon emission computed tomography (SPECT) reconstruction based on a blurred piecewise constant object model by Paul A Wolf, Jakob H Jørgensen, Taly G Schmidt, Emil Y Sidky

A sparsity-exploiting algorithm intended for few-view Single Photon Emission Computed Tomography (SPECT) reconstruction is proposed and characterized. The algorithm models the object as piecewise constant subject to a blurring operation. To validate that the algorithm closely approximates the true object in the noiseless case, projection data were generated from an object assuming this model and using the system matrix. Monte Carlo simulations were performed to provide more realistic data of a phantom with varying smoothness across the field of view. Reconstructions were performed across a sweep of two primary design parameters. The results demonstrate that the algorithm recovers the object in a noiseless simulation case. While the algorithm assumes a specific blurring model, the results suggest that the algorithm may provide high reconstruction accuracy even when the object does not match the assumed blurring model. Generally, increased values of the blurring parameter and TV weighting parameters reduced noise and streaking artifacts, while decreasing spatial resolution. As the number of views decreased from 60 to 9 the accuracy of images reconstructed using the proposed algorithm varied by less than 3%. Overall, the results demonstrate preliminary feasibility of a sparsity-exploiting reconstruction algorithm which may be beneficial for few-view SPECT.

Toeplitz Matrix Based Sparse Error Correction in System Identification: Outliers and Random Noises by Weiyu Xu, Er-Wei Bai, Myung Cho

In this paper, we consider robust system identification under sparse outliers and random noises. In our problem, system parameters are observed through a Toeplitz matrix. All observations are subject to random noises, and a few are corrupted with outliers. We reduce this problem of system identification to a sparse error correcting problem using a Toeplitz structured real-numbered coding matrix. We prove the performance guarantee of Toeplitz structured matrices in sparse error correction. Thresholds on the percentage of correctable errors for Toeplitz structured matrices are also established. When both outliers and observation noise are present, we show that the estimation error goes to 0 asymptotically as long as the probability density function of the observation noise is not "vanishing" around 0.

Sparse seismic imaging using variable projection by Aleksandr Y. Aravkin, Tristan van Leeuwen, Ning Tu

We consider an important class of signal processing problems where the signal of interest is known to be sparse, and can be recovered from data given auxiliary information about how the data was generated. For example, a sparse Green's function may be recovered from seismic experimental data using sparsity optimization when the source signature is known. Unfortunately, in practice this information is often missing, and must be recovered from data along with the signal using deconvolution techniques.

In this paper, we present a novel methodology to simultaneously solve for the sparse signal and auxiliary parameters using a recently proposed variable projection technique. Our main contribution is to combine variable projection with sparsity promoting optimization, obtaining an efficient algorithm for large-scale sparse deconvolution problems. We demonstrate the algorithm on a seismic imaging example.

We consider a reconciliation problem, where two hosts wish to synchronize their respective sets. Efficient solutions for minimizing the communication cost between the two hosts have been previously proposed in the literature. However, they rely on prior knowledge about the size of the set differences between the two sets to be reconciled. In this paper, we propose a method which can achieve comparable efficiency without assuming this prior knowledge. Our method uses compressive sensing techniques which can leverage the expected sparsity in set differences. We study the performance of the method via theoretical analysis and numerical simulations.

Compressed Sensing Recoverability In Imaging Modalities by Mahdi S. Hosseini, Konstantinos N. Plataniotis

The paper introduces a framework for the recoverability analysis in compressive sensing for imaging applications such as CI cameras, rapid MRI and coded apertures. This is done using the fact that the Spherical Section Property (SSP) of a sensing matrix provides a lower bound for unique sparse recovery condition. The lower bound is evaluated for different sampling paradigms adopted from the aforementioned imaging modalities. In particular, a platform is provided to analyze the well-posedness of sub-sampling patterns commonly used in practical scenarios. The effectiveness of the various designed patterns for sparse image recovery is studied through numerical experiments.

Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding by Ramji Venkataramanan, Tuhin Sarkar, Sekhar Tatikonda

We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. The algorithm is shown to achieve the optimal distortion-rate function for i.i.d. Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance with encoding complexity. An example of such a trade-off is: computational resource (space or time) per source sample of O((n/log n)^2) and probability of excess distortion decaying exponentially in n/log n, where n is the block length. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d. Gaussian source with the same variance. Simulations show that the encoder has very good empirical performance, especially at low and moderate rates.

Low Rank Mechanism for Optimizing Batch Queries under Differential Privacy by Ganzhao Yuan, Zhenjie Zhang, Marianne Winslett, Xiaokui Xiao, Yin Yang, Zhifeng Hao

Differential privacy is a promising privacy-preserving paradigm for statistical query processing over sensitive data. It works by injecting random noise into each query result, such that it is provably hard for the adversary to infer the presence or absence of any individual record from the published noisy results. The main objective in differentially private query processing is to maximize the accuracy of the query results, while satisfying the privacy guarantees. Previous work, notably \cite{LHR+10}, has suggested that with an appropriate strategy, processing a batch of correlated queries as a whole achieves considerably higher accuracy than answering them individually. However, to our knowledge there is currently no practical solution to find such a strategy for an arbitrary query batch; existing methods either return strategies of poor quality (often worse than naive methods) or require prohibitively expensive computations for even moderately large domains.

Motivated by this, we propose the \emph{Low-Rank Mechanism} (LRM), the first practical differentially private technique for answering batch queries with high accuracy, based on a \emph{low rank approximation} of the workload matrix. We prove that the accuracy provided by LRM is close to the theoretical lower bound for any mechanism to answer a batch of queries under differential privacy. Extensive experiments using real data demonstrate that LRM consistently outperforms state-of-the-art query processing solutions under differential privacy, by large margins.

This paper is concerned with the computation of the principal components for a general tensor, known as the tensor principal component analysis (PCA) problem. We show that the general tensor PCA problem is reducible to its special case where the tensor in question is super-symmetric with an even degree. In that case, the tensor can be embedded into a symmetric matrix. We prove that if the tensor is rank-one, then the embedded matrix must be rank-one too, and vice versa. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint, for which we propose two solution methods: (1) imposing a nuclear norm penalty in the objective to enforce a low-rank solution; (2) relaxing the rank-one constraint by Semidefinite Programming. Interestingly, our experiments show that both methods yield a rank-one solution with high probability, thereby solving the original tensor PCA problem to optimality with high probability. To further cope with the size of the resulting convex optimization models, we propose to use the alternating direction method of multipliers, which significantly reduces the computational effort. Various extensions of the model are considered as well.

Chaotic Analog-to-Information Conversion: Principle and Reconstructability with Parameter Identifiability by Feng Xi, Sheng Yao Chen, Zhong Liu

This paper proposes a chaos-based analog-to-information conversion system for the acquisition and reconstruction of sparse analog signals. The sparse signal acts as an excitation term of a continuous-time chaotic system and the compressive measurements are performed by sampling chaotic system outputs. The reconstruction is realized through the estimation of the sparse coefficients using the principle of chaotic parameter estimation. With the deterministic formulation, the reconstructability analysis is conducted via the sensitivity matrix from the parameter identifiability of chaotic systems. For the sparsity-regularized nonlinear least squares estimation, it is shown that the sparse signal is locally reconstructable if the columns of the sparsity-regularized sensitivity matrix are linearly independent. A Lorenz system excited by the sparse multitone signal is taken as an example to illustrate the principle and the performance.

Smoothed analysis of symmetric random matrices with continuous distributions by Brendan Farrell, Roman Vershynin

We study invertibility of matrices of the form $D+R$ where $D$ is an arbitrary symmetric deterministic matrix, and $R$ is a symmetric random matrix whose independent entries have continuous distributions with bounded densities. We show that $|(D+R)^{-1}| = O(n^2)$ with high probability. The bound is completely independent of $D$. No moment assumptions are placed on $R$; in particular the entries of $R$ can be arbitrarily heavy-tailed.
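The statement is easy to probe numerically. In the sketch below (an illustration, not from the paper), D is a deliberately singular symmetric matrix and R is a symmetrized matrix of i.i.d. uniform entries; the spectral norm of (D+R)^{-1} is then compared against the n^2 scale. Sizes and distributions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
D = np.ones((n, n))                  # symmetric, deterministic, and singular
G = rng.uniform(-1.0, 1.0, size=(n, n))
R = np.triu(G) + np.triu(G, 1).T     # symmetric; entries have bounded densities

# With probability one D + R is invertible; the theorem bounds the norm below.
inv_norm = np.linalg.norm(np.linalg.inv(D + R), 2)
```

Note that D alone is not invertible at all; the random perturbation is what regularizes the problem, and the bound does not depend on D.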

The task of compressed sensing is to recover a sparse vector from a small number of linear and non-adaptive measurements, and the problem of finding a suitable measurement matrix is very important in this field. While most recent works focused on random matrices with entries drawn independently from certain probability distributions, in this paper we show that a partial random symmetric Bernoulli matrix, whose entries are not independent, can be used to successfully recover signals from observations with high probability. The experimental results also show that the proposed matrix is a suitable measurement matrix.

New and Improved Conditions for Uniqueness of Sparsest Solutions of Underdetermined Linear Systems by Yun-Bin Zhao

The uniqueness of sparsest solutions of underdetermined linear systems plays a fundamental role in the newly developed compressed sensing theory. Several new algebraic concepts, including the sub-mutual coherence, scaled mutual coherence, coherence rank, and sub-coherence rank, are introduced in this paper in order to develop new and improved sufficient conditions for the uniqueness of sparsest solutions. The coherence rank of a matrix with normalized columns is the maximum number of absolute entries in a row of its Gram matrix that are equal to the mutual coherence. The main result of this paper claims that when the coherence rank of a matrix is low, the mutual-coherence-based uniqueness conditions for the sparsest solution of a linear system can be improved. Furthermore, we prove that the Babel-function-based uniqueness can also be improved by the so-called sub-Babel function. Moreover, we show that the scaled-coherence-based uniqueness conditions can be developed, and that the right-hand-side vector $b$ of a linear system, the support overlap of solutions, the orthogonal matrix out of the singular value decomposition of a matrix, and the range property of a transposed matrix can also be integrated into the criteria for the uniqueness of the sparsest solution of an underdetermined linear system.
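The coherence rank defined above can be computed directly from the Gram matrix of the normalized columns. The toy frame below is a hypothetical example chosen for illustration, not taken from the paper: its four columns are pairwise at the same angle, so every row of the Gram matrix attains the coherence the maximum number of times.

```python
import numpy as np

def coherence_rank(A, tol=1e-12):
    """Maximum number of off-diagonal entries in a row of the Gram matrix
    of the normalized columns whose magnitude attains the mutual coherence."""
    An = A / np.linalg.norm(A, axis=0)
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)            # ignore self-correlations
    mu = G.max()                        # mutual coherence
    return int((G >= mu - tol).sum(axis=1).max()), float(mu)

# Four columns in R^3, pairwise at the same angle: every row of the Gram
# matrix attains the coherence three times, so the coherence rank is 3.
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, -1.0, 1.0, -1.0],
              [1.0, 1.0, -1.0, -1.0]])
rank_mu, mu = coherence_rank(A)
```

For generic (e.g., Gaussian) matrices the maximum is attained by a single pair, the coherence rank is 1, and the improved conditions of the paper apply most favorably.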

A Multi-View Embedding Space for Modeling Internet Images, Tags, and their Semantics by Yunchao Gong, Qifa Ke, Michael Isard, Svetlana Lazebnik

This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.

In this article, we present an up-to-date overview of the potential biomedical applications of sodium MRI in vivo. Sodium MRI is a subject of increasing interest in translational research as it can give direct and quantitative biochemical information on tissue viability, cell integrity and function, and can therefore help not only the diagnosis but also the prognosis of diseases and treatment outcomes. It has already been applied in vivo in most human tissues, such as the brain for stroke or tumor detection and therapeutic response, in breast cancer, in articular cartilage, in muscle and in kidney, and some studies have shown that it can provide very useful new information not available through standard proton MRI. However, this technique is still very challenging due to the low detectable sodium signal in biological tissue with MRI and hardware/software limitations of the clinical scanners. The article is divided into three parts: (1) the role of sodium in biological tissues, (2) a short review of sodium magnetic resonance, and (3) a review of some studies on sodium MRI of different organs/diseases to date.

Sparse Dynamics for Partial Differential Equations by Hayden Schaeffer, Stanley Osher, Russel Caflisch, Cory Hauck

We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high frequency source terms.
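A minimal version of this scheme on the 1D periodic heat equation evolves the Fourier coefficients exactly and soft-thresholds them after every time step, so only the essential modes survive. This is a sketch in the spirit of the abstract; the equation, thresholding parameter, and all sizes are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

# u_t = u_xx on [0, 2*pi), advanced exactly per Fourier mode, with the
# sparsification enforced by soft thresholding the coefficients each step.
n, dt, lam, steps = 256, 1e-3, 1e-3, 100
rng = np.random.default_rng(5)
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3 * x) + 0.5 * np.cos(7 * x) + 0.01 * rng.standard_normal(n)
k = np.fft.fftfreq(n, d=1.0 / n)        # integer wavenumbers

for _ in range(steps):
    c = np.fft.fft(u) / n               # Fourier coefficients
    c *= np.exp(-(k ** 2) * dt)         # exact diffusion of each mode
    mag = np.abs(c)
    shrink = np.maximum(1.0 - lam / np.maximum(mag, lam), 0.0)
    c *= shrink                         # complex soft thresholding
    u = np.real(np.fft.ifft(c * n))

active_modes = int((np.abs(np.fft.fft(u) / n) > 1e-8).sum())
```

The thresholding kills the small-amplitude noise modes almost immediately while the dominant low-frequency mode persists, which is the "only the essential dynamics are represented" behavior described above.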

Can compression algorithms be employed for recovering signals from their underdetermined set of linear measurements? Addressing this question is the first step towards applying compression algorithms to compressed sensing (CS). In this paper, we consider a family of compression algorithms $\mathcal{C}_R$, parametrized by rate $R$, for a compact class of signals $\mathcal{Q} \subset \mathds{R}^n$. The set of natural images and JPEG2000 at different rates are examples of $\mathcal{Q}$ and $\mathcal{C}_R$, respectively. We establish a connection between the rate-distortion performance of $\mathcal{C}_R$ and the number of linear measurements required for successful recovery in CS. We then propose the compressible signal pursuit (CSP) algorithm and prove that, with high probability, it accurately and robustly recovers signals from an underdetermined set of linear measurements. We also explore the performance of CSP in the recovery of infinite dimensional signals. Exploring approximations or simplifications of CSP, which is computationally demanding, is left for future research.

Applying full polarization A-Projection to very wide field of view instruments: An imager for LOFAR by C. Tasse, B. van der Tol, J. van Zwieten, Ger van Diepen, S. Bhatnagar

The high sensitivities and large fields of view targeted by the new generation of interferometers require reaching dynamic ranges of order $\sim$1:$10^6$ to 1:$10^8$ in the case of the Square Kilometer Array. The main problem is the calibration and correction of the Direction Dependent Effects (DDE) that can affect the electro-magnetic field (antenna beams, ionosphere, Faraday rotation, etc.). As shown earlier, A-Projection is a fast and accurate algorithm that can potentially correct for any given DDE in the imaging step. With its very wide field of view, low operating frequency ($\sim30-250$ MHz), long baselines, and complex station-dependent beam patterns, the Low Frequency Array (LOFAR) is certainly the most complex SKA precursor. In this paper we present a few implementations of A-Projection applied to LOFAR that can deal with non-unitary station beams and non-diagonal Mueller matrices. The algorithm is designed to correct for all the DDE, including individual antennas, projection of the dipoles on the sky, beam forming and ionospheric effects. We describe a few important algorithmic optimizations related to LOFAR's architecture allowing us to build a fast imager. Based on simulated datasets we show that A-Projection can give dramatic dynamic range improvements for both phased array beams and ionospheric effects. We will use this algorithm for the construction of the deepest extragalactic surveys, comprising hundreds of days of integration.

The Scale of the Problem : Recovering Images of Reionization with GMCA by Emma Chapman, Filipe B. Abdalla, J. Bobin, J.-L. Starck, Geraint Harker, Vibor Jelic, Panagiotis Labropoulos, Saleem Zaroubi, Michiel A. Brentjens, A. G. de Bruyn, L. V. E. Koopmans

The accurate and precise removal of 21-cm foregrounds from Epoch of Reionization redshifted 21-cm emission data is essential if we are to gain insight into an unexplored cosmological era. We apply a non-parametric technique, Generalized Morphological Component Analysis or GMCA, to simulated LOFAR-EoR data and show that it has the ability to clean the foregrounds with high accuracy. We recover the 21-cm 1D, 2D and 3D power spectra with high accuracy across an impressive range of frequencies and scales. We show that GMCA preserves the 21-cm phase information, especially when the smallest spatial scale data is discarded. While it has been shown that LOFAR-EoR image recovery is theoretically possible using image smoothing, we add that wavelet decomposition is an efficient way of recovering 21-cm signal maps to the same or greater order of accuracy with more flexibility. By comparing the GMCA output residual maps (equal to the noise, 21-cm signal and any foreground fitting errors) with the 21-cm maps at one frequency and discarding the smaller wavelet scale information, we find a correlation coefficient of 0.689, compared to 0.588 for the equivalently smoothed image. Considering only the central 50% of the maps, these coefficients improve to 0.905 and 0.605 respectively and we conclude that wavelet decomposition is a significantly more powerful method to denoise reconstructed 21-cm maps than smoothing.

We propose an efficient strategy to infer a sparse Hopfield network from the magnetizations and pairwise correlations measured through Glauber samplings. This strategy incorporates $\ell_{1}$ regularization into the Bethe approximation, and reduces the inference error below that of the Bethe approximation without regularization. The optimal regularization parameter is observed to be of the order of $M^{-1/2}$, where $M$ is the number of independent samples.
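To get a feel for the $M^{-1/2}$ scaling, here is a minimal Python sketch. It is a deliberate simplification, not the paper's method: it replaces the Bethe machinery with a naive mean-field coupling estimate and then soft-thresholds at $\lambda = M^{-1/2}$, and the independent $\pm 1$ samples merely stand in for genuine Glauber dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 20, 4000                            # spins and sample count (toy values)
s = rng.choice([-1.0, 1.0], size=(M, N))   # stand-in for Glauber samples
C = np.cov(s, rowvar=False)                # connected correlation matrix
J = -np.linalg.inv(C)                      # naive mean-field coupling estimate
np.fill_diagonal(J, 0.0)
lam = 1.0 / np.sqrt(M)                     # regularization scale ~ M^{-1/2}
J_sparse = np.sign(J) * np.maximum(np.abs(J) - lam, 0.0)   # soft threshold
print(np.mean(J_sparse != 0.0))            # fraction of surviving couplings
```

Since the sampling noise in each estimated coupling is itself of order $M^{-1/2}$, a threshold at that scale suppresses most spurious couplings while keeping any genuinely large ones, which is the intuition behind the observed optimum.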

Distributed Sparse Signal Recovery For Sensor Networks by Stacy Patterson, Yonina C. Eldar, Idit Keidar

We propose a distributed algorithm for sparse signal recovery in sensor networks based on Iterative Hard Thresholding (IHT). Every agent has a set of measurements of a signal x, and the objective is for the agents to recover x from their collective measurements at a minimal communication cost and with low computational complexity. A naive distributed implementation of IHT would require global communication of every agent's full state in each iteration. We find that we can dramatically reduce this communication cost by leveraging solutions to the distributed top-K problem in the database literature. Evaluations show that our algorithm requires up to three orders of magnitude less total bandwidth than the best-known distributed basis pursuit method.

Sonography techniques use multiple transducer elements for tissue visualization. Signals detected at each element are sampled prior to digital beamforming. The required sampling rates are up to 4 times the Nyquist rate of the signal and result in a considerable amount of data that needs to be stored and processed. A previously developed technique, based on the finite rate of innovation model together with compressed sensing (CS) and Xampling ideas, reduces the number of samples needed to reconstruct an image comprised of strong reflectors. A significant drawback of this method is its inability to treat speckle, which is of significant importance in medical imaging. Here we build on previous work and show explicitly how to perform beamforming in the Fourier domain. Beamforming in frequency exploits the low bandwidth of the beamformed signal and bypasses the oversampling dictated by digital implementation of beamforming in time. We show that this yields the same beamformed image as standard beamforming but from far fewer samples. Finally, we present an analysis-based CS technique that allows for further reduction in sampling rate, using only a portion of the beamformed signal's bandwidth, namely, sampling the signal at sub-Nyquist rates. We demonstrate our methods on in vivo cardiac ultrasound data and show that rate reductions of up to 1/25 relative to standard beamforming rates are possible.
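To see why beamforming can be carried out in the Fourier domain at all, here is a toy Python sketch: a delay in time is a linear phase ramp in frequency, so each element's spectrum can be re-aligned before summation. The pulse, delays, and sample rate are made-up values, and this is plain frequency-domain delay compensation, not the authors' Xampling/sub-Nyquist scheme.

```python
import numpy as np

fs = 1e3                              # toy sample rate, 1 kHz
t = np.arange(256) / fs
pulse = np.exp(-((t - 0.1) ** 2) / (2 * 5e-3 ** 2))   # echo centered at 100 ms
delays = [0.0, 2e-3, 5e-3]            # made-up per-element arrival delays

# each element sees the same echo, delayed
received = [np.interp(t - d, t, pulse, left=0.0) for d in delays]

# frequency-domain beamforming: a delay d multiplies the spectrum by
# exp(-2j*pi*f*d), so applying the conjugate ramp re-aligns each element
f = np.fft.rfftfreq(t.size, 1.0 / fs)
beamformed = sum(
    np.fft.irfft(np.fft.rfft(r) * np.exp(2j * np.pi * f * d), n=t.size)
    for r, d in zip(received, delays)
)
print(t[np.argmax(beamformed)])       # the summed echo peaks back at 100 ms
```

In the paper's setting the payoff is that the beamformed signal occupies a narrow band, so only a small set of Fourier coefficients needs to be computed and stored.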

Regularized Bayesian Compressed Sensing in Ultrasound Imaging by Nicolas Dobigeon, Adrian Basarab, Denis Kouamé and Jean-Yves Tourneret.

Compressed sensing has recently attracted much interest for ultrasound imaging. In particular, exploiting the sparsity of ultrasound images in the frequency domain, a specific random sampling of ultrasound images can be used advantageously for designing efficient Bayesian image reconstruction methods. We showed in a previous work that assigning independent Bernoulli Gaussian priors to the ultrasound image in the frequency domain provides Bayesian reconstruction errors similar to a classical ℓ1 minimization technique. However, the advantage of Bayesian methods is their ability to estimate the sparsity level of the image using a hierarchical algorithm. This paper goes a step further by exploiting the spatial correlations between the image pixels in the frequency domain. A new Bayesian model based on a correlated Bernoulli Gaussian model is proposed for that purpose. The parameters of this model can be estimated by sampling the corresponding posterior distribution using an MCMC method. The resulting algorithm provides very low reconstruction errors even when the number of measurements is significantly reduced via random sampling.
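The Bernoulli Gaussian prior is easy to make concrete with a scalar toy example: for $y = b\,x + n$ with $b \sim \mathrm{Bernoulli}(p)$, $x \sim \mathcal{N}(0, \sigma_x^2)$ and $n \sim \mathcal{N}(0, \sigma_n^2)$, the posterior inclusion probability $P(b=1 \mid y)$ has a closed form. The sketch below uses the independent prior of the authors' earlier work with made-up parameter values, not the correlated model or the MCMC sampler of this paper.

```python
import numpy as np

def bg_posterior_prob(y, p=0.1, sig2_x=1.0, sig2_n=0.01):
    """P(b = 1 | y) for the scalar model y = b*x + n with x ~ N(0, sig2_x)."""
    def gauss(y, v):                       # zero-mean Gaussian density
        return np.exp(-y * y / (2.0 * v)) / np.sqrt(2.0 * np.pi * v)
    active = p * gauss(y, sig2_x + sig2_n)      # b = 1: y ~ N(0, sig2_x + sig2_n)
    inactive = (1.0 - p) * gauss(y, sig2_n)     # b = 0: y is pure noise
    return active / (active + inactive)

print(bg_posterior_prob(0.0))   # near-zero observation: almost surely inactive
print(bg_posterior_prob(1.0))   # large observation: almost surely active
```

Hierarchical Bayesian samplers place a further prior on $p$ itself, which is how the sparsity level gets estimated from the data rather than fixed in advance.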

The degrees of freedom of the Group Lasso for a General Design by Samuel Vaiter, Charles Deledalle, Gabriel Peyré, Jalal Fadili, Charles Dossal

In this paper, we are concerned with regression problems where covariates can be grouped in nonoverlapping blocks, and where only a few of them are assumed to be active. In such a situation, the group Lasso is an attractive method for variable selection since it promotes sparsity of the groups. We study the sensitivity of any group Lasso solution to the observations and provide its precise local parameterization. When the noise is Gaussian, this allows us to derive an unbiased estimator of the degrees of freedom of the group Lasso. This result holds true for any fixed design, no matter whether it is under- or overdetermined. With these results at hand, various model selection criteria, such as the Stein Unbiased Risk Estimator (SURE), are readily available which can provide an objectively guided choice of the optimal group Lasso fit.
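The group-sparsity mechanism behind the group Lasso shows up most clearly in its proximal operator, block soft-thresholding, sketched below in Python (a generic illustration of the penalty, not the authors' estimator): each group is either shrunk uniformly or zeroed out as a whole.

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Prox of lam * sum_g ||x_g||_2: shrink or kill each block as a whole."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]   # uniform shrinkage of the block
        # otherwise the whole group stays at zero (an inactive group)
    return out

x = np.array([3.0, 4.0, 0.1, 0.1])
shrunk = group_soft_threshold(x, [[0, 1], [2, 3]], lam=1.0)
print(shrunk)   # the first group survives, shrunk by 1/5; the second is zeroed
```

This active/inactive group structure of the solution is precisely what a degrees-of-freedom estimate for the group Lasso has to account for.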

**Join the CompressiveSensing subreddit or the Google+ Community and post there!**

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
