In the previous entry (This (Past) Month in Compressive Sensing and Matrix Factorization) I did not include other papers/preprints that showed up on my other radar screen this January. Here they are:

Think commenting on a blog is useless? Let me present to you the trillion-dollar blog comment.

Hein Hundal and Carl Cotner discovered the Predicting the Future blog entry. Galaxy phones have barometers, so the possibilities are nearly endless, and the Raspberry Pi can enable a camera with pan and tilt capabilities. In other news, the Keck Foundation awards $1 million to a team of UCLA scientists. Anaïs Tamalet has updated the mindmaps on compressive sensing at SLIM; they are here. And Dan Reetz pointed me to this patent on Image Acquisition Using Oversampled One-Bit Poisson Statistics. The abstract reads:

In an image sensor, pixels in an array of binary pixels are sampled with a first oversampling factor during a first frame interval, and then sampled with a second oversampling factor during a second frame interval, the second oversampling factor exceeding the first oversampling factor.
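The principle behind such oversampled one-bit pixels is easy to simulate (a sketch of the general idea, not of the patent's method; the light level `lam` and oversampling factor `K` are made-up numbers): each sub-exposure sees a Poisson photon count and the binary pixel outputs 1 if at least one photon arrived, so the intensity can be recovered from the fraction of "lit" readouts.

```python
import numpy as np

# Toy model of an oversampled binary pixel: per frame the pixel is read K
# times; each sub-exposure sees Poisson(lam/K) photons and reads 1 if at
# least one photon arrived, so P(bit = 1) = 1 - exp(-lam/K).
rng = np.random.default_rng(1)
lam, K, frames = 3.0, 64, 1000            # assumed values, for illustration

photons = rng.poisson(lam / K, size=K * frames)   # photons per sub-exposure
bits = (photons >= 1).astype(float)               # one-bit readouts
frac = bits.mean()
lam_hat = -K * np.log(1.0 - frac)                 # invert P(bit = 1)

print(round(lam_hat, 2))   # close to the true lam = 3.0
```

Inverting the bit-probability curve this way is the maximum-likelihood estimate of the light level given the binary readouts.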

Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [15] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation, and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
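The self-expressiveness idea at the heart of SSC can be sketched in a few lines: write each point as a sparse combination of the other points, so that the nonzero coefficients point to same-subspace neighbors. This is a toy illustration only, not the paper's algorithm; the lasso weight `lam`, the plain ISTA solver, and the connected-components clustering are all simplifying choices.

```python
import numpy as np

def ssc_affinity(X, lam=0.1, n_iter=300):
    """Sparse self-expression: write each column of X as a sparse
    combination of the *other* columns, via plain ISTA on the lasso."""
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        D = np.delete(X, i, axis=1)            # dictionary = all other points
        x = X[:, i]
        L = np.linalg.norm(D, 2) ** 2          # step size 1/L (Lipschitz const.)
        c = np.zeros(n - 1)
        for _ in range(n_iter):
            c = c - D.T @ (D @ c - x) / L      # gradient step on 0.5||Dc - x||^2
            c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)  # soft-threshold
        C[np.arange(n) != i, i] = c
    return np.abs(C) + np.abs(C).T             # symmetric affinity matrix

def components(W, tol=1e-6):
    """Cluster = connected component of the affinity graph (enough for a toy)."""
    n = W.shape[0]
    labels, k = -np.ones(n, dtype=int), 0
    for s in range(n):
        if labels[s] >= 0:
            continue
        stack, labels[s] = [s], k
        while stack:
            i = stack.pop()
            for j in np.nonzero(W[i] > tol)[0]:
                if labels[j] < 0:
                    labels[j] = k
                    stack.append(j)
        k += 1
    return labels

# Toy data: 20 points on each of two orthogonal lines through the origin in R^3
rng = np.random.default_rng(0)
u = rng.standard_normal(3); u /= np.linalg.norm(u)
v = rng.standard_normal(3); v -= (v @ u) * u; v /= np.linalg.norm(v)
X = np.hstack([np.outer(u, rng.standard_normal(20)),
               np.outer(v, rng.standard_normal(20))])
X /= np.linalg.norm(X, axis=0)                 # normalize each point

labels = components(ssc_affinity(X))
print(labels)    # first 20 points land in one cluster, last 20 in the other
```

In the paper's noisy regime the affinity graph is not exactly block diagonal, which is why spectral clustering and the accompanying theory are needed; the toy above only shows why sparse self-expression selects same-subspace neighbors.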

The detection and parameter estimation of moving targets is one of the most important tasks in radar. Arrays of randomly distributed antennas have been popular for this purpose for about half a century. Yet, surprisingly little rigorous mathematical theory exists for random arrays that addresses fundamental questions such as how many targets can be recovered, at what resolution, at which noise level, and with which algorithm. In a different line of research in radar, mathematicians and engineers have invested significant effort into the design of radar transmission waveforms which satisfy various desirable properties. In this paper we bring these two seemingly unrelated areas together. Using tools from compressive sensing we derive a theoretical framework for the recovery of targets in the azimuth-range-Doppler domain via random antenna arrays. In one manifestation of our theory we use Kerdock codes as transmission waveforms and exploit some of their peculiar properties in our analysis. Our paper provides two main contributions: (i) We derive the first rigorous mathematical theory for the detection of moving targets using random sensor arrays. (ii) The transmitted waveforms satisfy a variety of properties that are very desirable and important from a practical viewpoint. Thus our approach does not just lead to useful theoretical insights, but is also of practical importance. Various extensions of our results are derived and numerical simulations confirming our theory are presented.

Imagerie acoustique par approximations parcimonieuses des sources (Acoustic imaging via sparse approximation of sources) by Antoine Peillot

Acoustic sources imaging in the sparsity framework (English abstract)

In this work, the principle of sparsity-based techniques is applied to acoustic imaging problems, including nearfield acoustic holography (NAH), localization of simple and complex sources, and identification of source directivity patterns. These techniques require the inversion of ill-posed problems, which calls for regularization schemes. Moreover, standard regularization methods often require a huge number of microphones in order to oversample the acoustic field and thereby avoid aliasing effects. To overcome these problems, we investigate sparse regularization principles and compressive sampling (CS) for the analysis of acoustic fields. CS states that, under a sparsity assumption on the source to recover, the number of measurements (i.e., microphones) can be reduced significantly, even well below spatial Nyquist rates. It is shown that sparsity-based NAH techniques applied to homogeneous and isotropic plates lead to significant improvements over standard NAH techniques: sub-Nyquist random sampling of the hologram combined with sparse regularization allows precise source reconstruction while strongly reducing the number of microphones. The source localization problem can likewise be recast in a sparse framework, yielding a high-resolution localization method for correlated and uncorrelated sources in both the near field and the far field. Sparsity-promoting algorithms also allow the localization of complex sources by enriching the sparse model with a spherical harmonic dictionary; this method is applied to the identification of source directivity patterns. Finally, fast microphone position and gain self-calibration methods are investigated to experimentally manage large microphone arrays.

Robust Late Fusion With Rank Minimization by Guangnan Ye, Dong Liu, I-Hong Jhuo, Shih-Fu Chang

In this paper, we propose a rank minimization method to fuse the predicted confidence scores of multiple models, each of which is obtained based on a certain kind of feature. Specifically, we convert each confidence score vector obtained from one model into a pairwise relationship matrix, in which each entry characterizes the comparative relationship of the scores of two test samples. Our hypothesis is that the relative score relations are consistent among component models up to certain sparse deviations, despite the large variations that may exist in the absolute values of the raw scores. We then formulate the score fusion problem as seeking a shared rank-2 pairwise relationship matrix, based on which each original score matrix from an individual model can be decomposed into the common rank-2 matrix and sparse deviation errors. A robust score vector is then extracted to fit the recovered low-rank score relation matrix. We formulate the problem as a nuclear norm and ℓ1 norm optimization objective and employ the Augmented Lagrange Multiplier (ALM) method for the optimization. Our method is isotonic (i.e., scale invariant) with respect to the numeric scales of the scores originating from different models. We experimentally show that the proposed method achieves significant performance gains on various tasks including object categorization and video event detection.
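The rank-2 structure the paper exploits is easy to see. If the comparative relationship between two test samples is taken to be their score difference (one natural choice for illustration; the paper's exact construction may differ), the resulting matrix is s·1ᵀ − 1·sᵀ, which has rank at most 2 regardless of the scores:

```python
import numpy as np

def pairwise_relation(scores):
    """Entry (i, j) compares samples i and j; here simply s_i - s_j
    (a hypothetical construction, used only to expose the rank-2 structure)."""
    s = np.asarray(scores, dtype=float)
    return s[:, None] - s[None, :]

s = np.array([0.9, 0.1, 0.4, 0.7])          # confidence scores from one model
T = pairwise_relation(s)

print(np.linalg.matrix_rank(T))             # 2, since T = s.1^T - 1.s^T
print(bool(np.allclose(T, -T.T)))           # True: T is antisymmetric
```

Fusing several models then amounts to recovering one shared rank-2 matrix plus sparse deviations from each model's relation matrix, an RPCA-style decomposition solved in the paper with ALM.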

$S_{0.5}$ Regularization and Fixed Point Algorithm for Low-Rank Matrix Recovery by Peng Dingtao, Xiu Naihua, Yu Jian

Abstract: Low-rank matrix recovery is mathematically a matrix rank minimization problem under linear constraints. It has many applications in areas such as statistics, control, system identification and machine learning. Unlike the literature that uses the nuclear norm to approximate the rank of a matrix, in this paper we use the Schatten 1/2 norm, which leads to a better approximation but a nonconvex, nonsmooth, and non-Lipschitz optimization problem. By developing a fixed point theory associated with the singular value half thresholding operator for $S_{1/2}$ regularization solutions, we propose a fixed point algorithm for solving the $S_{1/2}$ regularization problem and prove the convergence of the iterative sequence. By discussing the location of the optimal regularization parameter and its setting, as well as using an approximate singular value decomposition procedure, we obtain an efficient algorithm for $S_{1/2}$ regularization, which we call the HalfnormFPA algorithm (half norm fixed point algorithm with an approximate SVD). Extensive numerical experiments on randomly generated and real matrix completion problems compare HalfnormFPA with several state-of-the-art solvers such as SVT, FPC and FPCA. The numerical results clearly show that HalfnormFPA is fast, robust and powerful, and that it provides much better recoverability than the above state-of-the-art solvers.

Energy-guided learning approach to compressive FD-OCT

by Shimon Schwartz, Chenyi Liu, Alexander Wong, David A. Clausi, Paul Fieguth, and Kostadinka Bizheva (pdf is here)

High quality, large size volumetric imaging of biological tissue with optical coherence tomography (OCT) requires a large number and high density of scans, which results in a large data acquisition volume. This may lead to corruption of the data with motion artifacts related to the natural motion of biological tissue, and could potentially cause conflicts with the maximum permissible exposure of biological tissue to optical radiation. Therefore, OCT can benefit greatly from approaches to sparse or compressive sampling of the data, where the signal is recovered from its sub-Nyquist measurements. In this paper, a new energy-guided compressive sensing approach is proposed for compressive FD-OCT.

Compressed sensing subtracted rotational angiography with multiple sparse penalty by Hélène Langet, C. Riddell, Y. Trousset, Arthur Tenenhaus, Elisabeth Lahalle, Gilles Fleury, Nikos Paragios

Digital Subtraction Rotational Angiography (DSRA) is a clinical protocol that allows three-dimensional (3D) visualization of vasculature during minimally invasive procedures. C-arm systems that are used to generate 3D reconstructions in interventional radiology have limited sampling rate and thus limited contrast resolution. To address this particular subsampling problem, we propose a novel iterative reconstruction algorithm based on compressed sensing. To this end, we exploit both the spatial and temporal sparsity of DSRA. For computational efficiency, we use a proximal implementation that accommodates multiple ℓ1-penalties. Experiments on both simulated and clinical data confirm the relevance of our strategy for reducing subsampling streak artifacts.
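The workhorse of such proximal implementations is the soft-thresholding operator, the proximal map of an ℓ1 penalty; with several ℓ1 penalties, a splitting scheme applies one proximal map per penalty. A minimal sketch (generic, nothing specific to the tomography setup of the paper) that checks the proximal property numerically:

```python
import numpy as np

def soft(y, t):
    """Proximal map of t*|.|: argmin_x 0.5*(x - y)**2 + t*|x|,
    i.e. elementwise soft-thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# Sanity check: soft(y, t) minimizes the proximal objective over a fine grid
y, t = 1.3, 0.5
f = lambda x: 0.5 * (x - y) ** 2 + t * np.abs(x)
grid = np.linspace(-3.0, 3.0, 10001)

print(soft(y, t), grid[np.argmin(f(grid))])   # both ~0.8 = y - t
```

Inside an iterative reconstruction loop, each gradient step on the data-fidelity term is followed by one such thresholding step per sparsity penalty (e.g., one spatial, one temporal).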

Novel signal separation algorithm based on compressed sensing for wideband spectrum sensing in cognitive radio networks by Shoutao Lv and Jian Liu

Optimal Detection For Sparse Mixtures by Tony Cai and Yihong Wu

Abstract: Detection of sparse signals arises in a wide range of modern scientific studies. The focus so far has been mainly on Gaussian mixture models. In this paper, we consider the detection problem under a general sparse mixture model and obtain an explicit expression for the detection boundary. It is shown that the fundamental limits of detection are governed by the behavior of the log-likelihood ratio evaluated at an appropriate quantile of the null distribution. We also establish the adaptive optimality of the higher criticism procedure across all sparse mixtures satisfying certain mild regularity conditions. In particular, the general results obtained in this paper recover and extend, in a unified manner, the previously known results on sparse detection, reaching far beyond the conventional Gaussian model and other exponential families.
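The higher criticism statistic itself is short to write down: it is the largest standardized gap between the empirical CDF of the p-values and the uniform CDF, evaluated at each sorted p-value. A minimal sketch (the statistic is often restricted to a sub-range of the p-values, e.g. p ≥ 1/n; that refinement is omitted here):

```python
import numpy as np

def higher_criticism(pvals):
    """Donoho-Jin higher criticism: max over i of
    sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i)))."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = p.size
    i = np.arange(1, n + 1)
    return np.max(np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p)))

rng = np.random.default_rng(7)
n = 10000
p_null = rng.uniform(size=n)          # global null: uniform p-values
p_alt = p_null.copy()
p_alt[:50] = 1e-6                     # a sparse handful of strong signals

print(higher_criticism(p_null), higher_criticism(p_alt))
```

Under the null the statistic stays moderate, while a small fraction of very small p-values inflates it dramatically, which is exactly the sparse-mixture regime the paper analyzes.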

An Improved Compressive Sensing Reconstruction Algorithm Using Linear/Non-Linear Mapping by Xinyu Zhang, Jiangtao Wen, Yuxing Han and John Villasenor

Abstract— We describe an improved algorithm for signal reconstruction based on the Orthogonal Matching Pursuit (OMP) algorithm. In contrast with the traditional implementation of OMP in compressive sensing (CS), we introduce a preprocessing step that converts the signal into a distribution that can be more easily reconstructed. This preprocessing introduces negligible additional complexity, but enables a significant improvement in reconstruction accuracy.
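For reference, the baseline OMP that the paper builds on is only a few lines (textbook OMP; the paper's linear/non-linear mapping preprocessing is not reproduced here):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the column most correlated
    with the current residual, then refit by least squares on the support."""
    support, x = [], np.zeros(A.shape[1])
    r = y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef          # orthogonalized residual
    x[support] = coef
    return x

# Recover a 3-sparse signal of length 50 from 30 random measurements
rng = np.random.default_rng(5)
A = rng.standard_normal((30, 50))
A /= np.linalg.norm(A, axis=0)                # unit-norm columns
x0 = np.zeros(50); x0[[7, 21, 48]] = [1.5, -2.0, 1.0]
y = A @ x0

x_hat = omp(A, y, k=3)
print(np.flatnonzero(np.abs(x_hat) > 1e-6))   # recovered support
```

With enough measurements relative to the sparsity, OMP recovers the support exactly in the noiseless case; the paper's contribution is a preprocessing of the signal distribution that improves accuracy when these favorable conditions degrade.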

Vectorial Phase Retrieval of 1-D signals by Oren Raz, Nirit Dudovich and Boaz Nadler

Abstract—Reconstruction of signals from measurements of their spectral intensities, also known as the phase retrieval problem, is of fundamental importance in many scientific fields. In this paper we present a novel framework, denoted vectorial phase retrieval, for the reconstruction of pairs of signals from spectral intensity measurements of the two signals and of their interference. We show that this new framework can alleviate some of the theoretical and computational challenges associated with classical phase retrieval from a single signal. First, we prove that for compactly supported signals, in the absence of measurement noise, this new setup admits a unique solution. Next, we present a statistical analysis of vectorial phase retrieval and derive a computationally efficient algorithm to solve it. Finally, we illustrate via simulations that our algorithm can accurately reconstruct signals even at considerable noise levels.

Phase retrieval combined with digital holography by Eliyahu Osherovich, Michael Zibulevsky, and Irad Yavneh

We present a new method for real- and complex-valued image reconstruction from two intensity measurements made in the Fourier plane: the Fourier magnitude of the unknown image, and the intensity of the interference pattern arising from superimposition of the original signal with a reference beam. This approach can provide significant advantages in digital holography since it poses less stringent requirements on the reference beam. In particular, it does not require spatial separation between the sought signal and the reference beam. Moreover, the reference beam need not be known precisely, and in fact, may contain severe errors, without leading to a deterioration in the reconstruction quality. Numerical simulations are presented to demonstrate the speed and quality of reconstruction.

Combining Factorization Model and Additive Forest for Collaborative Followee Recommendation by Tianqi Chen, Linpeng Tang, Qin Liu, Diyi Yang, Saining Xie, Xuezhi Cao, Chunyang Wu, Enpeng Yao, Zhengyang Liu, Zhansheng Jiang, Cheng Chen, Weihao Kong, Yong Yu

Social networks have become more and more popular in recent years. This popularity creates a need for personalization services to recommend tweets and posts (information) and celebrities/organizations (information sources) to users according to their potential interest. Tencent Weibo (microblog) data in KDD Cup 2012 brings one such challenge to the researchers in the knowledge discovery and data mining community. Compared to traditional scenarios in recommender systems, the KDD Cup 2012 Track 1 recommendation task raises several challenges: (1) Existence of multiple, heterogeneous data sources; (2) Fast growth of the social network with a large number of new users, which causes a severe user cold-start problem; (3) Rapid evolution of items' popularity and users' interest. To solve these problems, we combine feature-based factorization models with additive forest models. Specifically, we first build factorization models that incorporate users' social network, action, tag/keyword, profile and items' taxonomy information. Then we develop additive forest models to capture users' activity and sequential patterns. Because of the additive nature of such models, they allow easy combination of the results from previous factorization models. Our modeling approach is able to utilize various side information provided by the challenge dataset, and thus alleviates the cold-start problem. The new temporal dynamics model we have proposed using an additive forest can automatically adjust the splitting time points to model popularity evolution more accurately. Our final solution obtained an MAP@3 of 0.4265 on the private leaderboard, giving us the first place in Track 1 of KDD Cup 2012.

Join the CompressiveSensing subreddit or the Google+ Community and post there!

Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
