Monday, December 31, 2012

A gut feeling review of 2012.



In 2012, we saw some trends:

Finally, with Cable we had fun with the CAI series and expect to continue it into 2013. Like the pirates sailing from one island of knowledge to the other, forward we go. Happy New Year 2013 to y'all.




Image Credit: NASA/JPL/Space Science Institute
N00199979.jpg was taken on December 30, 2012 and received on Earth December 31, 2012. The camera was pointing toward SATURN-ERING at approximately 1,048,438 miles (1,687,298 kilometers) away, and the image was taken using the CL1 and CL2 filters.



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Nuit Blanche in Review (December 2012 edition)

Since the last Nuit Blanche in Review, we had a few implementations made available (several in Python, is this a trend ?)

We also had a Q&A: 
several questions:
several themes:

some items from conferences:

Sunday, December 30, 2012

Sunday Morning Insight: The extreme paucity of tools for blind deconvolution of biochemical networks


While watching the presentation on the Genetics of Parkinson's Disease by Ellen Sidransky featured yesterday, I could not help but notice a few other things on top of yesterday's question and meta-question.

For one, it is abundantly clear that the disruption of a metabolic circuit, due to the lack of a certain gene, does not necessarily imply a large physiological change (also see [1]). Let us note that the robustness of random processes observed in Nature parallels some of the information-preserving capacity of compressive sensing with random matrices. When people find that a specific gene is responsible for one specific disease, it really means that the culpable gene is a single point of failure. But what about those, probably more important, cases when no specific gene is a single point of failure? How do we investigate those cases in some automated fashion (instead of just relying on chance as we have done so far) ?




Or let me put it in a different perspective that is familiar to readers of Nuit Blanche: what measurements are required for the blind deconvolution of these metabolic circuits ? I note the paucity of studies in that regard; see for instance Reverse Engineering Biochemical Networks and Compressive Sensing, It's quite simply, the stuff of Life... and Instances of Null Spaces: Can Compressive Sensing Help Study Non Steady State Metabolic Networks ?








In the post-screening analysis, a natural extension of compressive sensing could include a better sensor and better image reconstruction for confocal imaging, but I wonder if group testing and other known devices could help the other processes as they did for High Throughput Testing ?




[1] In Properties of Metabolic Networks: Structure versus Function by R. Mahadevan and B. O. Palsson, one can read:
It is observed that, functionally speaking, the essentiality of reactions in a node is not correlated with node connectivity as structural analyses of other biological networks have suggested.



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Saturday, December 29, 2012

The Genetics of Parkinson's Disease: A question and a meta-question


Caught this very interesting presentation on the Genetics of Parkinson's Disease by Ellen Sidransky and couldn't help but think of several issues, including the most important one in my mind: how come GWAS studies on Parkinson's did not pick up on the GBA gene ? Also, the mosaic at the very end reminded me of another one we mentioned about multiplexing happening with accidental cameras. Could compressive sensing, which is concerned with multiplexing signals and finding the sparsest one, be helpful in the recognition and even classification of diseases ?






Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, December 28, 2012

Compressive Sensing and Matrix Factorization This Month

We have a slew of pre-prints and papers on both compressive sensing and advanced matrix factorization, enjoy the week-end:

Overview of compressive sensing techniques applied in holography by Yair Rivenson, Adrian Stern, and Bahram Javidi. The abstract reads:
In recent years compressive sensing (CS) has been successfully introduced in digital holography (DH). Depending on the ability to sparsely represent an object, the CS paradigm provides an accurate object reconstruction framework from a relatively small number of encoded signal samples. DH has proven to be an efficient and physically realizable sensing modality that can exploit the benefits of CS. In this paper, we provide an overview of the theoretical guidelines for application of CS in DH and demonstrate the benefits of compressive digital holographic sensing.


Compressive sensing leverages the compressibility of natural signals to trade off the convenience of data acquisition against the computational complexity of data reconstruction. Thus, CS appears to be an excellent technique for data acquisition and reconstruction in a wireless sensor network (WSN), which typically employs a smart fusion center (FC) with a high computational capability and several dumb front-end sensors having limited energy storage. This paper presents a novel signal reconstruction method based on CS principles for applications in WSNs. The proposed method exploits both the intra-sensor and inter-sensor correlation to reduce the number of samples required for reconstruction of the original signals. The novelty of the method relates to the use of the Frechet mean of the signals as an estimate of their sparse representations in some basis. This crude estimate of the sparse representation is then utilized in an enhanced data recovering convex algorithm, i.e., the penalized ℓ1 minimization, and an enhanced data recovering greedy algorithm, i.e., the precognition matching pursuit (PMP). The superior reconstruction quality of the proposed method is demonstrated by using data gathered by a WSN located in the Intel Berkeley Research lab.
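Since the Frechet mean under the Euclidean metric is just the ordinary average, the core idea is easy to sketch. Here is a toy illustration in Python (the setup, the names and the final reweighting step are my assumptions, not the authors' PMP algorithm): averaging the sensors' transform coefficients gives a crude estimate of the shared sparse representation, which could then seed a penalized ℓ1 recovery.

import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n, n_sensors, k = 256, 10, 8
support = rng.choice(n, k, replace=False)
common = np.zeros(n)
common[support] = rng.standard_normal(k)
# each sensor observes the common sparse spectrum plus a small deviation
coeffs = common[None, :] + 0.05 * rng.standard_normal((n_sensors, n))
signals = idct(coeffs, norm='ortho')

# Euclidean Frechet mean of the transform coefficients = plain average
frechet_mean = dct(signals, norm='ortho').mean(axis=0)
est_support = np.argsort(np.abs(frechet_mean))[-k:]
print("true support:     ", sorted(support))
print("estimated support:", sorted(est_support))
# one plausible use (my assumption): weights for a reweighted l1 recovery
weights = 1.0 / (np.abs(frechet_mean) + 1e-2)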



This paper focuses on sparse estimation in the situation where both the sensing matrix and the measurement vector are corrupted by additive Gaussian noises. The performance bound of sparse estimation is analyzed and discussed in depth. Two types of lower bounds, the constrained Cramér-Rao bound (CCRB) and the Hammersley-Chapman-Robbins bound (HCRB), are discussed. It is shown that the situation with sensing matrix perturbation is more complex than the one with only measurement noise. For the CCRB, its closed-form expression is deduced. It demonstrates a gap between the maximal and nonmaximal support cases. It is also revealed that a gap lies between the CCRB and the MSE of the oracle pseudoinverse estimator, but it approaches zero asymptotically when the problem dimensions tend to infinity. For a tighter bound, the HCRB, despite the difficulty in obtaining a simple expression for a general sensing matrix, a closed-form expression in the unit sensing matrix case is derived for a qualitative study of the performance bound. It is shown that the gap between the maximal and nonmaximal cases is eliminated for the HCRB. Numerical simulations are performed to verify the theoretical results in this paper.

Distributed Compressive Sensing (DCS) improves the signal recovery performance of multi-signal ensembles by exploiting both intra- and inter-signal correlation and sparsity structure. However, the existing DCS was proposed for a very limited ensemble of signals that has a single common information component (Baron et al., 2009). In this paper, we propose a generalized DCS (GDCS) which can improve sparse signal detection performance given arbitrary types of common information, classified into not just full common information but also a variety of partial common information. The theoretical bound on the required number of measurements using the GDCS is obtained. Unfortunately, the GDCS may require much a priori knowledge of the various types of common information in the ensemble of signals to enhance the performance over the existing DCS. To deal with this problem, we propose a novel algorithm that can search for the correlation structure among the signals, with which the proposed GDCS improves detection performance even without a priori knowledge of the correlation structure for the case of arbitrarily correlated multi-signal ensembles.


In this paper, we consider the problem of collaboratively estimating the sparsity pattern of a sparse signal with multiple measurement data in distributed networks. We assume that each node makes Compressive Sensing (CS) based measurements via random projections regarding the same sparse signal. We propose a distributed greedy algorithm based on Orthogonal Matching Pursuit (OMP), in which the sparse support is estimated iteratively while fusing indices estimated at distributed nodes. In the proposed distributed framework, each node has to perform fewer iterations of OMP than the sparsity index of the sparse signal. Thus, with each node having a very small number of compressive measurements, a significant performance gain in support recovery is achieved via the proposed collaborative scheme compared to the case where each node estimates the sparsity pattern independently and then fusion is performed to get a global estimate. We further extend the algorithm to estimate the sparsity pattern in a binary hypothesis testing framework, where the algorithm first detects the presence of a sparse signal collaborating among nodes with fewer iterations of OMP and then increases the number of iterations to estimate the sparsity pattern only if the signal is detected.
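For readers who want to see the fusion idea concretely, here is a toy sketch (my own simplification, not the paper's exact fusion rule): each node runs only a few OMP iterations on its private measurements and the fusion center takes the union of the reported indices.

import numpy as np

def omp_indices(A, y, n_iter):
    # a few iterations of Orthogonal Matching Pursuit; return chosen indices
    r, idx = y.copy(), []
    for _ in range(n_iter):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        sub = A[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef
    return idx

rng = np.random.default_rng(1)
n, m, k, n_nodes = 128, 20, 6, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# every node takes its own random projections of the same sparse signal
local = []
for _ in range(n_nodes):
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    local.append((A, A @ x))

# fusion center: union of indices, each node doing fewer than k iterations
fused = sorted({i for A, y in local for i in omp_indices(A, y, n_iter=3)})
print("true support: ", sorted(np.flatnonzero(x)))
print("fused support:", fused)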

Projected gradient (or subgradient) methods are simple and typical approaches to minimizing a convex function with constraints. In the area of sparse recovery, however, plenty of research results reveal that non-convex penalties induce better sparsity than convex ones. In this paper, the idea of projected subgradient methods is extended to minimize a class of sparsity-inducing penalties with linear constraints, and these penalties are not necessarily convex. To make the algorithm computationally tractable for large-scale problems, a uniform approximate projection is applied in the projection step. The theoretical convergence analysis of the proposed method, the approximate projected generalized gradient (APGG) method, is provided in the noisy scenario. The result reveals that if the initial solution satisfies certain requirements, the bound on the recovery error is linear in both the noise term and the step size of APGG. In addition, the parameter selection rules and the initial criteria are analyzed. If the approximate least squares solution is adopted as the initial one, the result reveals how non-convex the penalty could be to guarantee convergence to the global optimal solution. Numerical simulations are performed to test the performance of the proposed method and verify the theoretical analysis. Contributions of this paper are compared with some existing results in the end.

We address the exact recovery of the support of a k-sparse vector with Orthogonal Matching Pursuit (OMP) and Orthogonal Least Squares (OLS) in a noiseless setting. We consider the scenario where OMP/OLS have selected good atoms during the first l iterations (l < k) and derive a new sufficient and worst-case necessary condition for their success in k steps. Our result is based on the coherence µ of the dictionary and relaxes Tropp's well-known condition µ < 1/(2k − 1) to the case where OMP/OLS have a partial knowledge of the support.
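Tropp's condition is easy to check numerically for a given dictionary; a minimal sketch (the random dictionary and parameters are my choices):

import numpy as np

rng = np.random.default_rng(2)
m, n, k = 64, 128, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)      # unit-norm atoms
G = np.abs(A.T @ A)
np.fill_diagonal(G, 0.0)
mu = G.max()                        # mutual coherence of the dictionary
bound = 1.0 / (2 * k - 1)
# random dictionaries of this size rarely satisfy the bound, a reminder of
# how conservative coherence-based guarantees are in practice
print(f"mu = {mu:.3f}, 1/(2k-1) = {bound:.3f}, Tropp condition holds: {mu < bound}")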


In this paper, a new result on the recovery of sparse vectors from deterministic and noisy measurements by ℓ1 minimization is given. The sparse vector is randomly chosen and follows a generic p-sparse model introduced by Candès et al. The main theorem ensures consistency of ℓ1 minimization with high probability. This first result is then extended to compressible vectors.

Schlieren deflectometry aims at characterizing the deflections undergone by refracted incident light rays at any surface point of a transparent object. For smooth surfaces, each surface location is actually associated with a sparse deflection map (or spectrum). This paper presents a novel method to compressively acquire and reconstruct such spectra. This is achieved by altering the way deflection information is captured in a common Schlieren Deflectometer, i.e., the deflection spectra are indirectly observed by the principle of spread spectrum compressed sensing. These observations are realized optically using a 2-D Spatial Light Modulator (SLM) adjusted to the corresponding sensing basis and whose modulations encode the light deviation subsequently recorded by a CCD camera. The efficiency of this approach is demonstrated experimentally on the observation of a few test objects. Further, using a simple parametrization of the deflection spectra, we show that relevant key parameters can be directly computed using the measurements, avoiding full reconstruction.

Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge.
In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates between finding the best $U$ and the best $V$. Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and there has been almost no theoretical understanding of when this approach yields a good result.
In this paper we present the first theoretical analysis of the performance of alternating minimization for matrix completion, and for the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis.
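For the curious, the basic iteration is a few lines of alternating least squares on the observed entries. A toy sketch with $X = UV^\dag$ (dimensions and iteration count are my choices; the variant the paper analyzes adds technical modifications on top of this):

import numpy as np

rng = np.random.default_rng(3)
n1, n2, r = 60, 50, 3
M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))   # rank-r target
mask = rng.random((n1, n2)) < 0.4                                  # observed entries

U = rng.standard_normal((n1, r))
V = rng.standard_normal((n2, r))
for _ in range(50):
    # best V for fixed U, row by row on the observed entries, then swap roles
    for j in range(n2):
        obs = mask[:, j]
        V[j], *_ = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)
    for i in range(n1):
        obs = mask[i]
        U[i], *_ = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)

err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(f"relative completion error: {err:.2e}")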

The focus of this paper is on the development of a sparse and optimal acquisition (SOA) design for diffusion MRI multiple-shell acquisition and beyond. A novel optimality criterion is proposed for sparse multiple-shell acquisition and quasi multiple-shell designs in diffusion MRI, together with a novel and effective semi-stochastic and moderately greedy combinatorial search strategy with simulated annealing to locate the optimum design or configuration. Even though the number of distinct configurations for a given set of diffusion gradient directions is very large in general---e.g., on the order of 10^{232} for a set of 144 diffusion gradient directions---the proposed search strategy was found to be effective in finding the optimum configuration. It was found that the square design is the most robust (i.e., with stable condition numbers and A-optimal measures under varying experimental conditions) among many other possible designs of the same sample size. Under the same performance evaluation, the square design was found to be more robust than the widely used sampling schemes similar to that of 3D radial MRI and of diffusion spectrum imaging (DSI).


A sparsity-exploiting algorithm intended for few-view Single Photon Emission Computed Tomography (SPECT) reconstruction is proposed and characterized. The algorithm models the object as piecewise constant subject to a blurring operation. To validate that the algorithm closely approximates the true object in the noiseless case, projection data were generated from an object assuming this model and using the system matrix. Monte Carlo simulations were performed to provide more realistic data of a phantom with varying smoothness across the field of view. Reconstructions were performed across a sweep of two primary design parameters. The results demonstrate that the algorithm recovers the object in a noiseless simulation case. While the algorithm assumes a specific blurring model, the results suggest that the algorithm may provide high reconstruction accuracy even when the object does not match the assumed blurring model. Generally, increased values of the blurring parameter and TV weighting parameters reduced noise and streaking artifacts, while decreasing spatial resolution. As the number of views decreased from 60 to 9 the accuracy of images reconstructed using the proposed algorithm varied by less than 3%. Overall, the results demonstrate preliminary feasibility of a sparsity-exploiting reconstruction algorithm which may be beneficial for few-view SPECT.

In this paper, we consider robust system identification under sparse outliers and random noises. In our problem, system parameters are observed through a Toeplitz matrix. All observations are subject to random noises and a few are corrupted with outliers. We reduce this problem of system identification to a sparse error correcting problem using a Toeplitz structured real-numbered coding matrix. We prove the performance guarantee of the Toeplitz structured matrix in sparse error correction. Thresholds on the percentage of correctable errors for Toeplitz structured matrices are also established. When both outliers and observation noise are present, we show that the estimation error goes to 0 asymptotically as long as the probability density function for observation noise is not "vanishing" around 0.

We consider an important class of signal processing problems where the signal of interest is known to be sparse, and can be recovered from data given auxiliary information about how the data was generated. For example, a sparse Green's function may be recovered from seismic experimental data using sparsity optimization when the source signature is known. Unfortunately, in practice this information is often missing, and must be recovered from data along with the signal using deconvolution techniques.
In this paper, we present a novel methodology to simultaneously solve for the sparse signal and auxiliary parameters using a recently proposed variable projection technique. Our main contribution is to combine variable projection with sparsity promoting optimization, obtaining an efficient algorithm for large-scale sparse deconvolution problems. We demonstrate the algorithm on a seismic imaging example.
We consider a reconciliation problem, where two hosts wish to synchronize their respective sets. Efficient solutions for minimizing the communication cost between the two hosts have been previously proposed in the literature. However, they rely on prior knowledge about the size of the set differences between the two sets to be reconciled. In this paper, we propose a method which can achieve comparable efficiency without assuming this prior knowledge. Our method uses compressive sensing techniques which can leverage the expected sparsity in set differences. We study the performance of the method via theoretical analysis and numerical simulations.
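The mechanics are simple enough to sketch: both hosts project the characteristic vectors of their sets with a shared random matrix, and the difference of the two sketches is a compressive measurement of the sparse set difference. A toy version (the parameters and the greedy decoder are my choices; the paper studies this far more carefully):

import numpy as np

rng = np.random.default_rng(4)
universe, m = 1000, 120                     # element ids, sketch length
A = rng.standard_normal((m, universe))      # shared via a common seed in practice

set_a = set(rng.choice(universe, 300, replace=False).tolist())
set_b = (set_a - set(list(set_a)[:5])) | {7, 11, 13}   # a handful of differences

def cs_sketch(s):
    chi = np.zeros(universe)
    chi[list(s)] = 1.0
    return A @ chi

delta = cs_sketch(set_a) - cs_sketch(set_b)  # = A @ (chi_a - chi_b), sparse +-1 vector

# greedy (OMP-style) recovery of the differing elements
r, idx = delta.copy(), []
for _ in range(12):
    if np.linalg.norm(r) < 1e-8:
        break
    idx.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, idx], delta, rcond=None)
    r = delta - A[:, idx] @ coef
recovered = {i for i, c in zip(idx, coef) if abs(c) > 0.5}
print("true difference:     ", sorted(set_a ^ set_b))
print("recovered difference:", sorted(recovered))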

The paper introduces a framework for the recoverability analysis in compressive sensing for imaging applications such as CI cameras, rapid MRI and coded apertures. This is done using the fact that the Spherical Section Property (SSP) of a sensing matrix provides a lower bound for unique sparse recovery condition. The lower bound is evaluated for different sampling paradigms adopted from the aforementioned imaging modalities. In particular, a platform is provided to analyze the well-posedness of sub-sampling patterns commonly used in practical scenarios. The effectiveness of the various designed patterns for sparse image recovery is studied through numerical experiments.


We propose computationally efficient encoders and decoders for lossy compression using a Sparse Regression Code. The codebook is defined by a design matrix and codewords are structured linear combinations of columns of this matrix. The proposed encoding algorithm sequentially chooses columns of the design matrix to successively approximate the source sequence. The algorithm is shown to achieve the optimal distortion-rate function for i.i.d Gaussian sources under the squared-error distortion criterion. For a given rate, the parameters of the design matrix can be varied to trade off distortion performance with encoding complexity. An example of such a trade-off is: computational resource (space or time) per source sample of O((n/log n)^2) and probability of excess distortion decaying exponentially in n/log n, where n is the block length. The Sparse Regression Code is robust in the following sense: for any ergodic source, the proposed encoder achieves the optimal distortion-rate function of an i.i.d Gaussian source with the same variance. Simulations show that the encoder has very good empirical performance, especially at low and moderate rates.


Differential privacy is a promising privacy-preserving paradigm for statistical query processing over sensitive data. It works by injecting random noise into each query result, such that it is provably hard for the adversary to infer the presence or absence of any individual record from the published noisy results. The main objective in differentially private query processing is to maximize the accuracy of the query results, while satisfying the privacy guarantees. Previous work, notably [LHR+10], has suggested that with an appropriate strategy, processing a batch of correlated queries as a whole achieves considerably higher accuracy than answering them individually. However, to our knowledge there is currently no practical solution to find such a strategy for an arbitrary query batch; existing methods either return strategies of poor quality (often worse than naive methods) or require prohibitively expensive computations for even moderately large domains.
Motivated by this, we propose the Low-Rank Mechanism (LRM), the first practical differentially private technique for answering batch queries with high accuracy, based on a low-rank approximation of the workload matrix. We prove that the accuracy provided by LRM is close to the theoretical lower bound for any mechanism to answer a batch of queries under differential privacy. Extensive experiments using real data demonstrate that LRM consistently outperforms state-of-the-art query processing solutions under differential privacy, by large margins.

This paper is concerned with the computation of the principal components for a general tensor, known as the tensor principal component analysis (PCA) problem. We show that the general tensor PCA problem is reducible to its special case where the tensor in question is super-symmetric with an even degree. In that case, the tensor can be embedded into a symmetric matrix. We prove that if the tensor is rank-one, then the embedded matrix must be rank-one too, and vice versa. The tensor PCA problem can thus be solved by means of matrix optimization under a rank-one constraint, for which we propose two solution methods: (1) imposing a nuclear norm penalty in the objective to enforce a low-rank solution; (2) relaxing the rank-one constraint by Semidefinite Programming. Interestingly, our experiments show that both methods yield a rank-one solution with high probability, thereby solving the original tensor PCA problem to optimality with high probability. To further cope with the size of the resulting convex optimization models, we propose to use the alternating direction method of multipliers, which reduces significantly the computational efforts. Various extensions of the model are considered as well.

This paper proposes a chaos-based analog-to-information conversion system for the acquisition and reconstruction of sparse analog signals. The sparse signal acts as an excitation term of a continuous-time chaotic system and the compressive measurements are performed by sampling chaotic system outputs. The reconstruction is realized through the estimation of the sparse coefficients with the principle of chaotic parameter estimation. With the deterministic formulation, the reconstructability analysis is conducted via the sensitivity matrix from the parameter identifiability of chaotic systems. For the sparsity-regularized nonlinear least squares estimation, it is shown that the sparse signal is locally reconstructable if the columns of the sparsity-regularized sensitivity matrix are linearly independent. A Lorenz system excited by the sparse multitone signal is taken as an example to illustrate the principle and the performance.

We study invertibility of matrices of the form $D+R$ where $D$ is an arbitrary symmetric deterministic matrix, and $R$ is a symmetric random matrix whose independent entries have continuous distributions with bounded densities. We show that $|(D+R)^{-1}| = O(n^2)$ with high probability. The bound is completely independent of $D$. No moment assumptions are placed on $R$; in particular the entries of $R$ can be arbitrarily heavy-tailed.
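A toy numerical check of the statement (not a substitute for the proof; the choice of D is mine):

import numpy as np

rng = np.random.default_rng(5)
n = 300
D = np.diag(np.linspace(-5.0, 5.0, n))   # any symmetric deterministic matrix
R = rng.uniform(-1, 1, (n, n))
R = (R + R.T) / 2                        # symmetric, entries with bounded density
inv_norm = 1.0 / np.linalg.svd(D + R, compute_uv=False).min()
print(f"||(D+R)^-1|| = {inv_norm:.2e}   vs   n^2 = {n**2:.1e}")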

The task of compressed sensing is to recover a sparse vector from a small number of linear and non-adaptive measurements, and the problem of finding a suitable measurement matrix is very important in this field. While most recent works have focused on random matrices with entries drawn independently from certain probability distributions, in this paper we show that a partial random symmetric Bernoulli matrix, whose entries are not independent, can be used to recover signals from observations successfully with high probability. The experimental results also show that the proposed matrix is a suitable measurement matrix.
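Constructing such a matrix takes a couple of lines; a sketch under my reading of the construction (take an n x n symmetric ±1 matrix and keep m random rows, so the only dependence among entries comes from the symmetry):

import numpy as np

rng = np.random.default_rng(6)
n, m = 256, 80
B = rng.choice([-1.0, 1.0], size=(n, n))
B = np.triu(B) + np.triu(B, 1).T         # symmetrize: entries tied across the diagonal
rows = rng.choice(n, m, replace=False)   # "partial": keep m random rows
A = B[rows] / np.sqrt(m)                 # candidate measurement matrix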
The uniqueness of sparsest solutions of underdetermined linear systems plays a fundamental role in the newly developed compressed sensing theory. Several new algebraic concepts, including the sub-mutual coherence, scaled mutual coherence, coherence rank, and sub-coherence rank, are introduced in this paper in order to develop new and improved sufficient conditions for the uniqueness of sparsest solutions. The coherence rank of a matrix with normalized columns is the maximum number of absolute entries in a row of its Gram matrix that are equal to the mutual coherence. The main result of this paper claims that when the coherence rank of a matrix is low, the mutual-coherence-based uniqueness conditions for the sparsest solution of a linear system can be improved. Furthermore, we prove that the Babel-function-based uniqueness can also be improved by the so-called sub-Babel function. Moreover, we show that scaled-coherence-based uniqueness conditions can be developed, and that the right-hand-side vector $b$ of a linear system, the support overlap of solutions, the orthogonal matrix out of the singular value decomposition of a matrix, and the range property of a transposed matrix can also be integrated into the criteria for the uniqueness of the sparsest solution of an underdetermined linear system.
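The coherence rank is straightforward to compute from the Gram matrix; a small sketch following the definition in the abstract (the tolerance handling is my addition):

import numpy as np

def coherence_rank(A, tol=1e-12):
    A = A / np.linalg.norm(A, axis=0)    # normalize columns
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    mu = G.max()                         # mutual coherence
    # per row, count off-diagonal entries equal to mu; take the maximum
    return mu, int((G >= mu - tol).sum(axis=1).max())

# generic matrices attain the coherence at a single pair, so the rank is low
rng = np.random.default_rng(7)
mu, crank = coherence_rank(rng.standard_normal((40, 80)))
print(f"mutual coherence: {mu:.3f}, coherence rank: {crank}")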

This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets.
In this article, we present an up-to-date overview of the potential biomedical applications of sodium MRI in vivo. Sodium MRI is a subject of increasing interest in translational research as it can give some direct and quantitative biochemical information on tissue viability, cell integrity and function, and therefore help not only the diagnosis but also the prognosis of diseases and treatment outcomes. It has already been applied in vivo in most human tissues, such as brain for stroke or tumor detection and therapeutic response, in breast cancer, in articular cartilage, in muscle and in kidney, and it was shown in some studies that it could provide very useful new information not available through standard proton MRI. However, this technique is still very challenging due to the low detectable sodium signal in biological tissue with MRI and hardware/software limitations of the clinical scanners. The article is divided into three parts: (1) the role of sodium in biological tissues, (2) a short review on sodium magnetic resonance, and (3) a review of some studies on sodium MRI on different organs/diseases to date.

We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high frequency source terms.
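The method is appealingly simple; here is a minimal version on the 1-D heat equation (my toy setup, with an arbitrarily chosen thresholding level): evolve the Fourier coefficients exactly over one time step, then soft-threshold them so that only the essential modes survive.

import numpy as np

def soft(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

n, L, dt, lam = 256, 2 * np.pi, 1e-3, 1e-3
x = np.linspace(0.0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi       # wavenumbers
u = np.exp(-10 * (x - np.pi) ** 2) + 0.1 * np.sin(20 * x)

for _ in range(500):
    uh = np.fft.fft(u)
    uh *= np.exp(-k ** 2 * dt)                   # exact diffusion over one step
    uh = soft(uh.real, lam * n) + 1j * soft(uh.imag, lam * n)  # keep essential modes
    u = np.fft.ifft(uh).real

print("active Fourier modes:", int((np.abs(np.fft.fft(u)) > 1e-8).sum()))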
Can compression algorithms be employed for recovering signals from their underdetermined set of linear measurements? Addressing this question is the first step towards applying compression algorithms to compressed sensing (CS). In this paper, we consider a family of compression algorithms $\mathcal{C}_R$, parametrized by rate $R$, for a compact class of signals $\mathcal{Q} \subset \mathds{R}^n$. The set of natural images and JPEG2000 at different rates are examples of $\mathcal{Q}$ and $\mathcal{C}_R$, respectively. We establish a connection between the rate-distortion performance of $\mathcal{C}_R$ and the number of linear measurements required for successful recovery in CS. We then propose the compressible signal pursuit (CSP) algorithm and prove that, with high probability, it accurately and robustly recovers signals from an underdetermined set of linear measurements. We also explore the performance of CSP in the recovery of infinite dimensional signals. Exploring approximations or simplifications of CSP, which is computationally demanding, is left for future research.

The high sensitivities and large fields of view targeted by the new generation of interferometers require reaching high dynamic ranges, of order $\sim$1:$10^6$ to 1:$10^8$ in the case of the Square Kilometer Array. The main problem is the calibration and correction of the Direction Dependent Effects (DDE) that can affect the electro-magnetic field (antenna beams, ionosphere, Faraday rotation, etc.). As shown earlier, A-Projection is a fast and accurate algorithm that can potentially correct for any given DDE in the imaging step. With its very wide field of view, low operating frequency ($\sim30-250$ MHz), long baselines, and complex station-dependent beam patterns, the Low Frequency Array (LOFAR) is certainly the most complex SKA precursor. In this paper we present a few implementations of A-Projection applied to LOFAR that can deal with non-unitary station beams and non-diagonal Mueller matrices. The algorithm is designed to correct for all the DDE, including individual antenna, projection of the dipoles on the sky, beam forming and ionospheric effects. We describe a few important algorithmic optimizations related to LOFAR's architecture allowing us to build a fast imager. Based on simulated datasets we show that A-Projection can give dramatic dynamic range improvement for both phased array beams and ionospheric effects. We will use this algorithm for the construction of the deepest extragalactic surveys, comprising hundreds of days of integration.



The accurate and precise removal of 21-cm foregrounds from Epoch of Reionization redshifted 21-cm emission data is essential if we are to gain insight into an unexplored cosmological era. We apply a non-parametric technique, Generalized Morphological Component Analysis or GMCA, to simulated LOFAR-EoR data and show that it has the ability to clean the foregrounds with high accuracy. We recover the 21-cm 1D, 2D and 3D power spectra with high accuracy across an impressive range of frequencies and scales. We show that GMCA preserves the 21-cm phase information, especially when the smallest spatial scale data is discarded. While it has been shown that LOFAR-EoR image recovery is theoretically possible using image smoothing, we add that wavelet decomposition is an efficient way of recovering 21-cm signal maps to the same or greater order of accuracy with more flexibility. By comparing the GMCA output residual maps (equal to the noise, 21-cm signal and any foreground fitting errors) with the 21-cm maps at one frequency and discarding the smaller wavelet scale information, we find a correlation coefficient of 0.689, compared to 0.588 for the equivalently smoothed image. Considering only the central 50% of the maps, these coefficients improve to 0.905 and 0.605 respectively and we conclude that wavelet decomposition is a significantly more powerful method to denoise reconstructed 21-cm maps than smoothing.

We propose an efficient strategy to infer sparse Hopfield network based on the magnetizations and pairwise correlations measured through Glauber samplings. This strategy incorporates the $\ell_{1}$ regularization into the Bethe approximation, and is able to further reduce the inference error of the Bethe approximation without the regularization. The optimal regularization parameter is observed to be of the order of $M^{-1/2}$ where $M$ is the number of independent samples.
We propose a distributed algorithm for sparse signal recovery in sensor networks based on Iterative Hard Thresholding (IHT). Every agent has a set of measurements of a signal x, and the objective is for the agents to recover x from their collective measurements at a minimal communication cost and with low computational complexity. A naive distributed implementation of IHT would require global communication of every agent's full state in each iteration. We find that we can dramatically reduce this communication cost by leveraging solutions to the distributed top-K problem in the database literature. Evaluations show that our algorithm requires up to three orders of magnitude less total bandwidth than the best-known distributed basis pursuit method.
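As a reference point, here is plain centralized IHT in a few lines (my sketch; the paper's contribution is precisely in performing the top-K selection across agents without shipping full states):

import numpy as np

def iht(A, y, k, n_iter=300):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + step * (A.T @ (y - A @ x))   # gradient step on the data fit
        keep = np.argsort(np.abs(g))[-k:]    # top-K: the costly step to distribute
        x = np.zeros_like(g)
        x[keep] = g[keep]
    return x

rng = np.random.default_rng(8)
m, n, k = 80, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
xhat = iht(A, A @ x0, k)
print(f"recovery error: {np.linalg.norm(xhat - x0):.2e}")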
Sonography techniques use multiple transducer elements for tissue visualization. Signals detected at each element are sampled prior to digital beamforming. The required sampling rates are up to 4 times the Nyquist rate of the signal and result in a considerable amount of data that needs to be stored and processed. A previously developed technique, based on the finite rate of innovation model and on compressed sensing (CS) and Xampling ideas, allows one to reduce the number of samples needed to reconstruct an image comprised of strong reflectors. A significant drawback of this method is its inability to treat speckle, which is of significant importance in medical imaging. Here we build on previous work and show explicitly how to perform beamforming in the Fourier domain. Beamforming in frequency exploits the low bandwidth of the beamformed signal and allows one to bypass the oversampling dictated by the digital implementation of beamforming in time. We show that this allows one to obtain the same beamformed image as in standard beamforming but from far fewer samples. Finally, we present an analysis-based CS technique that allows for further reduction in sampling rate, using only a portion of the beamformed signal's bandwidth, namely, sampling the signal at sub-Nyquist rates. We demonstrate our methods on in vivo cardiac ultrasound data and show that reductions up to 1/25 over standard beamforming rates are possible.

REGULARIZED BAYESIAN COMPRESSED SENSING IN ULTRASOUND IMAGING by Nicolas Dobigeon, Adrian Basarab, Denis Kouamé and Jean-Yves Tourneret.
Compressed sensing has recently attracted much interest for ultrasound imaging. In particular, exploiting the sparsity of ultrasound images in the frequency domain, a specific random sampling of ultrasound images can be used advantageously for designing efficient Bayesian image reconstruction methods. We showed in a previous work that assigning independent Bernoulli Gaussian priors to the ultrasound image in the frequency domain provides Bayesian reconstruction errors similar to a classical ℓ1 minimization technique. However, the advantage of Bayesian methods is to estimate the sparsity level of the image by using a hierarchical algorithm. This paper goes a step further by exploiting the spatial correlations between the image pixels in the frequency domain. A new Bayesian model based on a correlated Bernoulli Gaussian model is proposed for that purpose. The parameters of this model can be estimated by sampling the corresponding posterior distribution using an MCMC method. The resulting algorithm provides very low reconstruction errors even when reducing significantly the number of measurements via random sampling.




In this paper, we are concerned with regression problems where covariates can be grouped in nonoverlapping blocks, and where only a few of them are assumed to be active. In such a situation, the group Lasso is an attractive method for variable selection since it promotes sparsity of the groups. We study the sensitivity of any group Lasso solution to the observations and provide its precise local parameterization. When the noise is Gaussian, this allows us to derive an unbiased estimator of the degrees of freedom of the group Lasso. This result holds true for any fixed design, no matter whether it is under- or overdetermined. With these results at hand, various model selection criteria, such as the Stein Unbiased Risk Estimator (SURE), are readily available which can provide an objectively guided choice of the optimal group Lasso fit.
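The computational building block behind group Lasso solutions is the block soft-thresholding proximal map; a minimal sketch (the toy data are mine, and the paper's degrees-of-freedom estimator is subtler than the active-group count hinted at here):

import numpy as np

def prox_group_lasso(x, groups, lam):
    # block soft-thresholding: shrink each group's norm by lam, or kill the group
    out = np.zeros_like(x)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > lam:
            out[g] = (1.0 - lam / nrm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.2, 1.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4])]
print(prox_group_lasso(x, groups, lam=1.0))   # second and third groups vanish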


Credit: NASA/JPL-Caltech/MSSS, Layered Martian Outcrop 'Shaler' in 'Glenelg' Area


Multiple regularizers, Robust NMF and a simple question


About a year ago we had a few discussions on multiple regularizers ( Multiple Regularizers For the Reconstruction of Natural Objects ? and Optimization with multiple non-standard regularizers ) and it looks like the subject is taking off, see the next few papers:

Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow to naturally extend parsimonious models to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speedup compared to the exact optimization algorithms.
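The flavor of such learned pursuit processes can be conveyed with the classic unrolling of ISTA into a fixed-depth network (a LISTA-style sketch; here the layer weights are merely initialized from the dictionary rather than trained, which is where the paper's learning would come in):

import numpy as np

def soft(z, theta):
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def unrolled_ista(y, D, lam=0.1, T=10):
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    W = D.T / L                              # "input" weights of the network
    S = np.eye(D.shape[1]) - (D.T @ D) / L   # "recurrent" weights between layers
    z = soft(W @ y, lam / L)
    for _ in range(T - 1):                   # each pass = one fixed network layer
        z = soft(W @ y + S @ z, lam / L)
    return z

rng = np.random.default_rng(9)
D = rng.standard_normal((30, 100))
D /= np.linalg.norm(D, axis=0)
z0 = np.zeros(100)
z0[rng.choice(100, 4, replace=False)] = 1.0
zhat = unrolled_ista(D @ z0, D)
print("nonzeros after 10 layers:", int((zhat != 0).sum()))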



Simultaneously Structured Models with Application to Sparse and Low-rank Matrices by Samet Oymak, Amin Jalali, Maryam Fazel, Yonina C. Eldar and Babak Hassibi. The abstract reads:
The topic of recovery of a structured model given a small number of linear observations has been well-studied in recent years. Examples include recovering sparse or group-sparse vectors, low-rank matrices, and the sum of sparse and low-rank matrices, among others. In various applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. An important application is the sparse phase retrieval problem, where the goal is to recover a sparse signal from phaseless measurements. In machine learning, the problem comes up when combining several regularizers that each promote a certain desired structure. Often penalties (norms) that promote each individual structure are known and yield an order-wise optimal number of measurements (e.g., $\ell_1$ norm for sparsity, nuclear norm for matrix rank), so it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multi-objective optimization with the individual norms, then we can do no better, order-wise, than an algorithm that exploits only one of the several structures. This result suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation, i.e., not one that is a function of the convex relaxations used for each structure. We then specialize our results to the case of sparse and low-rank matrices. We show that a nonconvex formulation of the problem can recover the model from very few measurements, on the order of the degrees of freedom of the matrix, whereas the convex problem obtained from a combination of the $\ell_1$ and nuclear norms requires many more measurements. This proves an order-wise gap between the performance of the convex and nonconvex recovery problems in this case.

After the use of an iPhone for signal reconstruction (not just display!), here is an iPad involved in Robust PCA decomposition. One item of further interest is the robust-NMF approach. Alas, like the other preprints, no codes are available.


Learning Robust Low-Rank Representations by Pablo Sprechmann, Alex M. Bronstein, Guillermo Sapiro. The abstract reads:
In this paper we present a comprehensive framework for learning robust low-rank representations by combining and extending recent ideas for learning fast sparse coding regressors with structured non-convex optimization techniques. This approach connects robust principal component analysis (RPCA) with dictionary learning techniques and allows its approximation via trainable encoders. We propose an efficient feed-forward architecture derived from an optimization algorithm designed to exactly solve robust low dimensional projections. This architecture, in combination with different training objective functions, allows the regressors to be used as online approximants of the exact offline RPCA problem or as RPCA-based neural networks. Simple modifications of these encoders can handle challenging extensions, such as the inclusion of geometric data transformations. We present several examples with real data from image, audio, and video processing. When used to approximate RPCA, our basic implementation shows several orders of magnitude speedup compared to the exact solvers with almost no performance degradation. We show the strength of the inclusion of learning to the RPCA approach on a music source separation application, where the encoders outperform the exact RPCA algorithms, which are already reported to produce state-of-the-art results on a benchmark database. Our preliminary implementation on an iPad shows faster-than-real-time performance with minimal latency.
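For contrast, the exact optimization such encoders approximate can itself be written compactly; a bare-bones robust PCA solver via augmented Lagrangian shrinkage steps (the parameter defaults follow common practice and are my choices):

import numpy as np

def svt(X, tau):
    # singular value soft-thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, n_iter=200):
    # minimize ||L||_* + lam*||S||_1  subject to  M = L + S
    lam = 1.0 / np.sqrt(max(M.shape))
    mu = 0.25 * M.size / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)      # low-rank update
        S = soft(M - L + Y / mu, lam / mu)     # sparse update
        Y += mu * (M - L - S)                  # dual update
    return L, S

rng = np.random.default_rng(10)
low = rng.standard_normal((80, 5)) @ rng.standard_normal((5, 80))
spikes = np.zeros((80, 80))
spikes[rng.random((80, 80)) < 0.05] = 10.0
L, S = rpca(low + spikes)
print(f"low-rank error: {np.linalg.norm(L - low) / np.linalg.norm(low):.2e}")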


RobustNMF showed up earlier here.

When one simply consults PubMed with Matrix Factorization as a keyword, one realizes that NMF is being equated to Matrix Factorization (as if there were no other matrix factorizations!) and that different fields are using it extensively. Why there is no code release for Robust-NMF is beyond me.
If there is no code release, the algorithm cannot be listed on the Advanced Matrix Factorization Jungle page. If you implement some of these algorithms, please do let me know and you will be featured on the blog and on the Jungle page.

Image Credit:NASA/JPL-Caltech/Space Science Institute, PIA14934: A Splendor Seldom Seen



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, December 27, 2012

24AM: 24 efficient parallel SPCA implementations

Wow, here is a goodie that is now on the Advanced Matrix Factorization Jungle page: 24 parallel Sparse PCA implementations:


Given a multivariate data set, sparse principal component analysis (SPCA) aims to extract several linear combinations of the variables that together explain the variance in the data as much as possible, while controlling the number of nonzero loadings in these combinations. In this paper we consider 8 different optimization formulations for computing a single sparse loading vector; these are obtained by combining the following factors: we employ two norms for measuring variance (L2, L1) and two sparsity-inducing norms (L0, L1), which are used in two different ways (constraint, penalty). Three of our formulations, notably the one with L0 constraint and L1 variance, have not been considered in the literature. We give a unifying reformulation which we propose to solve via a natural alternating maximization (AM) method. We show that the AM method is nontrivially equivalent to GPower (Journée et al; JMLR 11:517--553, 2010) for all our formulations. Besides this, we provide 24 efficient parallel SPCA implementations: 3 codes (multi-core, GPU and cluster) for each of the 8 problems. Parallelism in the methods is aimed at i) speeding up computations (our GPU code can be 100 times faster than an efficient serial code written in C++), ii) obtaining solutions explaining more variance and iii) dealing with big data problems (our cluster code is able to solve a 357 GB problem in about a minute).
The 24AM Parallel Sparse PCA Codes are here.
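To give a feel for what is being parallelized: one of the eight formulations (L2 variance with an L0 constraint) admits a particularly simple alternating maximization that amounts to a truncated power method. A serial toy sketch (my simplification of the single-vector case):

import numpy as np

def sparse_loading(Sigma, k, n_iter=100, seed=11):
    # truncated power method: power step, keep the k largest entries, renormalize
    v = np.random.default_rng(seed).standard_normal(Sigma.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = Sigma @ v
        keep = np.argsort(np.abs(w))[-k:]    # enforce the L0 constraint
        v = np.zeros_like(w)
        v[keep] = w[keep]
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(12)
X = rng.standard_normal((500, 50))
X[:, :5] += 3.0 * rng.standard_normal((500, 1))   # plant 5 correlated variables
v = sparse_loading(np.cov(X, rowvar=False), k=5)
print("support of the sparse loading:", sorted(np.flatnonzero(v)))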



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
