Compressive Sensing with Local Geometric Features by Rishi Gupta, Piotr Indyk, Eric Price, Yaron Rachlin. The abstract reads:
We propose a framework for compressive sensing of images with local geometric features. Specifically, let x ∈ R^N be an N-pixel image, where each pixel p has value x_p. The image is acquired by computing the measurement vector Ax, where A is an m × N measurement matrix for some m ≪ N. The goal is then to design the matrix A and recovery algorithm which, given Ax, returns an approximation to x. In this paper we investigate this problem for the case where x consists of a small number (k) of "local geometric objects" (e.g., stars in an image of a sky), plus noise. We construct a matrix A and recovery algorithm with the following features: (i) the number of measurements m is O(k log_k N), which undercuts currently known schemes that achieve m = O(k log(N/k)); (ii) the matrix A is ultra-sparse, which is important for hardware considerations; (iii) the recovery algorithm is fast and runs in time sub-linear in N. We also present a comprehensive study of an application of our algorithm to a problem in satellite navigation.
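For readers who want to get a feel for ultra-sparse measurement matrices, here is a tiny toy sketch (not the authors' construction): a k-sparse "image" is measured by a 0/1 matrix built from random hash functions, count-min style, so each column of the implied A has only a handful of ones, and a point query recovers pixel values in time sub-linear in N. All parameters are made up for illustration.

```python
import random

# Toy count-min-style sensing of a k-sparse non-negative "image".
# Illustrative only; parameters below are arbitrary.
random.seed(0)
N, k, reps, width = 10_000, 5, 6, 40      # pixels, sparsity, hash repetitions, buckets

# k bright pixels on a dark background
positions = random.sample(range(N), k)
x = {p: random.randint(10, 99) for p in positions}

# Each repetition hashes every pixel into one of `width` buckets, so the
# implied measurement matrix A has exactly `reps` ones per column.
hashes = [[random.randrange(width) for _ in range(N)] for _ in range(reps)]
y = [[0] * width for _ in range(reps)]    # the m = reps * width measurements Ax
for p, v in x.items():
    for r in range(reps):
        y[r][hashes[r][p]] += v

def point_query(p):
    """Estimate x_p; for non-negative signals this never underestimates."""
    return min(y[r][hashes[r][p]] for r in range(reps))

recovered = {p: point_query(p) for p in x}
```

Here m = reps × width = 240 ≪ N = 10,000, and with enough repetitions hash collisions between the k bright pixels are resolved with high probability, loosely echoing the flavor of the O(k log_k N) bound (the paper's actual scheme and analysis are more involved).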
Of note is the excerpt on adapting this scheme to star trackers:
...Computation power on a satellite is substantially lower than that of a low-end laptop, and given that the entire acquisition has to happen in 0.1 to 2 seconds, it seems unlikely that any algorithm linear or near-linear in N is going to be practical. Finally, we note that all plot lines in the results could be improved by a more thorough implementation of a star identification algorithm....
The Master of Engineering thesis on which this paper is based is A Compressive Sensing Algorithm for Attitude Determination by Rishi Gupta. The abstract reads:
We propose a framework for compressive sensing of images with local distinguishable objects, such as stars, and apply it to solve a problem in celestial navigation. Specifically, let x ∈ R^N be an N-pixel image, consisting of a small number of local distinguishable objects plus noise. Our goal is to design an m × N measurement matrix A with m ≪ N, such that we can recover an approximation to x from the measurements Ax. We construct a matrix A and recovery algorithm with the following properties: (i) if there are k objects, the number of measurements m is O((k log N)/(log k)), undercutting the best known bound of O(k log(N/k)); (ii) the matrix A is ultra-sparse, which is important when the signal is weak relative to the noise; and (iii) the recovery algorithm is empirically fast and runs in time sub-linear in N. We also present a comprehensive study of the application of our algorithm to attitude determination, or finding one's orientation in space. Spacecraft typically use cameras to acquire an image of the sky, and then identify stars in the image to compute their orientation. Taking pictures is very expensive for small spacecraft, since camera sensors use a lot of power. Our algorithm optically compresses the image before it reaches the camera's array of pixels, reducing the number of sensors that are required.
The algorithm is here or will be (Rishi tells me it will be there in about ten days). Thanks Rishi.
Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images by Mingyuan Zhou, Haojun Chen, John Paisley, Lu Ren, Lingbo Li, Zhengming Xing, David Dunson, Guillermo Sapiro and Lawrence Carin. The abstract reads:
Nonparametric Bayesian methods are considered for recovery of imagery based upon compressive, incomplete and/or noisy measurements. A truncated beta-Bernoulli process is employed to infer an appropriate dictionary for the data under test, and also for image recovery. In the context of compressive sensing, significant improvements in image recovery are manifested using learned dictionaries, relative to using standard orthonormal image expansions. The compressive-measurement projections are also optimized for the learned dictionary. Additionally, we consider simpler (incomplete) measurements, defined by measuring a subset of image pixels, selected uniformly at random. Spatial inter-relationships within imagery are exploited through use of the Dirichlet and probit stick-breaking processes. Several example results are presented, with comparisons to other methods in the literature.
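As a quick intuition for the truncated beta-Bernoulli prior at the heart of this approach (this is just a toy draw from the prior, not the paper's inference scheme): with a truncation level of K dictionary atoms, atom k gets a usage probability from a Beta distribution whose parameters shrink with K, so most atoms end up rarely used and each patch activates only a few. Hyperparameters below are arbitrary.

```python
import random

# Toy draw from a truncated beta-Bernoulli process prior.
# z[i][k] = 1 means "patch i uses dictionary atom k".
random.seed(0)
K, n_patches, a, b = 32, 200, 2.0, 2.0    # truncation level, toy hyperparameters

pi = [random.betavariate(a / K, b * (K - 1) / K) for _ in range(K)]
z = [[1 if random.random() < pi[k] else 0 for k in range(K)]
     for _ in range(n_patches)]

# Expected number of active atoms per patch is roughly K*a/(a + b*(K-1)),
# i.e. about 1 here, so the usage matrix is very sparse.
avg_atoms_per_patch = sum(map(sum, z)) / n_patches
```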
I mentioned recently Rank Minimization over Finite Fields: Fundamental Limits and Coding-Theoretic Interpretations by Vincent Y. F. Tan, Laura Balzano, Stark C. Draper. It looks like there is an attendant slide presentation and a more complete version here: Rank Minimization over Finite Fields: Fundamental Limits and Coding-Theoretic Interpretations by Vincent Y. F. Tan, Laura Balzano, Stark C. Draper. The abstract reads:
This paper establishes information-theoretic limits in estimating a finite field low-rank matrix given random linear measurements of it. These linear measurements are obtained by taking inner products of the unknown matrix with random sensing matrices in the finite field. Necessary and sufficient conditions on the number of measurements required are provided. It is shown that these conditions are sharp. The reliability function of the minimum-rank decoder is also derived by appealing to de Caen's lower bound on the probability of a union. The sufficient conditions also hold in the case where the sensing matrices are sparse – a scenario that may be amenable to faster decoding. More precisely, it is shown that if the n × n sensing matrices contain, on average, Ω(n log n) entries, the number of measurements required is the same as that when the sensing matrices are dense and contain entries drawn uniformly at random from the field. Analogies are drawn between the above results and rank-metric codes in the coding theory literature. In particular, we derive minimum (rank) distance properties of equiprobable and sparse rank-metric codes. These distance properties provide a precise geometric interpretation of the fact that the sparse ensemble requires as few measurements as the dense one. Finally, we provide a non-exhaustive procedure to search for the unknown low-rank matrix.
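To make the setting concrete, here is a tiny GF(2) instance run at toy scale (illustrative only; sizes and seeds are arbitrary): recover a rank-1 2×2 binary matrix X from random linear measurements y_i = ⟨H_i, X⟩ mod 2 with a brute-force decoder in the spirit of the minimum-rank decoder, i.e. pick a lowest-rank matrix consistent with the measurements.

```python
import random

# Toy finite-field rank minimization over GF(2), n = 2.
random.seed(1)
n, m = 2, 8

def rank_gf2(M):
    """Rank over GF(2) by Gaussian elimination on row bitmasks."""
    rows = [sum(bit << j for j, bit in enumerate(r)) for r in M]
    rank = 0
    for col in range(n):
        piv = next((i for i in range(rank, n) if rows[i] >> col & 1), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(n):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

def inner(H, X):
    """Entrywise inner product mod 2."""
    return sum(H[i][j] & X[i][j] for i in range(n) for j in range(n)) % 2

X_true = ((1, 0), (1, 0))                                  # rank 1
H = [tuple(tuple(random.randint(0, 1) for _ in range(n)) for _ in range(n))
     for _ in range(m)]
y = [inner(Hi, X_true) for Hi in H]

# Enumerate all 2x2 binary matrices, keep those consistent with y,
# and output one of minimum rank.
candidates = [((a, b), (c, d)) for a in (0, 1) for b in (0, 1)
              for c in (0, 1) for d in (0, 1)]
consistent = [M for M in candidates
              if all(inner(Hi, M) == yi for Hi, yi in zip(H, y))]
X_hat = min(consistent, key=rank_gf2)
```

Exhaustive enumeration obviously does not scale; the paper's final section is precisely about a non-exhaustive search procedure.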
Simple Bounds for Low-complexity Model Reconstruction by Emmanuel Candes and Benjamin Recht. The abstract reads:
This note presents a unified analysis of the recovery of simple objects from random linear measurements. When the linear functionals are Gaussian, we show that an s-sparse vector in R^n can be efficiently recovered from 2s log n measurements with high probability, and a rank-r, n × n matrix can be efficiently recovered from r(6n − 5r) measurements with high probability. For sparse vectors, this is within an additive factor of the best known nonasymptotic bounds. For low-rank matrices, this matches the best known bounds. We present a parallel analysis for block-sparse vectors, obtaining similarly tight bounds. In the case of sparse and block-sparse signals, we additionally demonstrate that our bounds are only slightly weakened when the measurement map is a random sign matrix. Our results are based on analyzing a particular dual point which certifies optimality conditions of the respective convex programming problem. Our calculations rely only on standard large deviation inequalities and our analysis is self-contained.
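The 2s log n sparse-vector regime is easy to poke at numerically. The sketch below is purely illustrative: it draws a Gaussian map with m = ⌈2s log n⌉ rows and runs orthogonal matching pursuit as a stand-in solver (the note itself analyzes convex programming, not OMP); all sizes are made up.

```python
import numpy as np

# Recover an s-sparse vector from m = ceil(2 s log n) Gaussian measurements.
rng = np.random.default_rng(0)
n, s = 256, 4
m = int(np.ceil(2 * s * np.log(n)))        # 45 measurements here

x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.normal(size=s)

A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# OMP: greedily pick the column most correlated with the residual,
# then refit by least squares on the selected support.
S, r = [], y.copy()
for _ in range(s):
    S.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(n)
x_hat[S] = coef
```

At these sizes OMP typically finds the true support exactly, which is consistent with (though not a proof of) the flavor of the bound.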
The application of compressive sampling to radio astronomy I: Deconvolution by Feng Li, Tim J. Cornwell, Frank de Hoog. The abstract reads:
Compressive sampling is a new paradigm for sampling, based on sparseness of signals or signal representations. It is much less restrictive than Nyquist-Shannon sampling theory and thus explains and systematises the widespread experience that methods such as the Högbom CLEAN can violate the Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution method for extended sources is introduced. This method can reconstruct both point sources and extended sources (using the isotropic undecimated wavelet transform as a basis function for the reconstruction step). We compare this CS-based deconvolution method with two CLEAN-based deconvolution methods: the Högbom CLEAN and the multiscale CLEAN. This new method shows the best performance in deconvolving extended sources for both uniform and natural weighting of the sampled visibilities. Both visual and numerical results of the comparison are provided.
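For those unfamiliar with the baseline being compared against: Högbom CLEAN is a simple greedy loop: find the brightest pixel of the dirty image, subtract a scaled copy of the dirty beam there, and record the subtracted flux as a point-source component. Here is a 1-D toy version (illustrative; a made-up beam and sky, with the CS method of the paper replacing exactly this greedy subtraction):

```python
# Toy 1-D Hogbom CLEAN: dirty image = point sources convolved with a beam.
gain = 1.0
psf = {0: 1.0, -1: 0.3, 1: 0.3}           # beam: central peak plus sidelobes
true_sky = {10: 5.0, 40: 2.0}             # point sources: position -> flux
npix = 64

# Build the dirty image.
dirty = [0.0] * npix
for p, f in true_sky.items():
    for off, w in psf.items():
        dirty[p + off] += f * w

# CLEAN loop: peak-find, subtract scaled beam, accumulate components.
components = {}
for _ in range(2):                        # one iteration per source here
    peak = max(range(npix), key=lambda i: dirty[i])
    flux = gain * dirty[peak]
    components[peak] = components.get(peak, 0.0) + flux
    for off, w in psf.items():
        dirty[peak + off] -= flux * w
```

With isolated point sources and a full gain this recovers the sky exactly; the papers' point is that CLEAN struggles on extended sources, which is where the wavelet-based CS reconstruction wins.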
The application of compressive sampling to radio astronomy II: Faraday rotation measure synthesis by Feng Li, Tim J. Cornwell, Frank de Hoog. The abstract reads:
Faraday rotation measure (RM) synthesis is an important tool to study and analyze galactic and extra-galactic magnetic fields. Since there is a Fourier relation between the Faraday dispersion function and the polarized radio emission, full reconstruction of the dispersion function requires knowledge of the polarized radio emission at both positive and negative square wavelengths $\lambda^2$. However, one can only make observations for $\lambda^2 > 0$. Furthermore, observations are possible only for a limited range of wavelengths. Thus reconstructing the Faraday dispersion function from these limited measurements is ill-conditioned. In this paper, we propose three new reconstruction algorithms for RM synthesis based upon compressive sensing/sampling (CS). These algorithms are designed to be appropriate for Faraday thin sources only, thick sources only, and mixed sources respectively. Both visual and numerical results show that the new RM synthesis methods provide reconstructions of both magnitude and phase information that are superior to those of RM-CLEAN.
Large-Scale Convex Minimization with a Low-Rank Constraint by Shai Shalev-Shwartz, Alon Gonen, Ohad Shamir. The abstract reads:
We address the problem of minimizing a convex function over the space of large matrices with low rank. While this optimization problem is hard in general, we propose an efficient greedy algorithm and derive its formal approximation guarantees. Each iteration of the algorithm involves (approximately) finding the left and right singular vectors corresponding to the largest singular value of a certain matrix, which can be calculated in linear time. This leads to an algorithm which can scale to large matrices arising in several applications such as matrix completion for collaborative filtering and robust low rank matrix approximation.
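The per-iteration structure described above is easy to sketch. Below is a hedged toy version on a least-squares matrix-completion objective: each step takes the top singular pair of the negative gradient as a rank-1 direction and does an exact line search (for simplicity this uses a full SVD; the paper's point is that only the leading pair is needed, computable in linear time). All sizes are arbitrary.

```python
import numpy as np

# Greedy rank-1 updates for masked least-squares matrix completion.
rng = np.random.default_rng(0)
n, r = 20, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # low-rank target
mask = rng.random((n, n)) < 0.5                          # observed entries

def obj(X):
    return 0.5 * np.sum(((X - M) * mask) ** 2)

X = np.zeros((n, n))
for _ in range(10):
    G = (X - M) * mask                     # gradient of obj at X
    U, s, Vt = np.linalg.svd(-G)
    D = s[0] * np.outer(U[:, 0], Vt[0])    # rank-1 direction from top pair
    Dm = D * mask
    denom = np.sum(Dm ** 2)
    # Exact line search along D; the optimal step is non-negative here,
    # so the objective never increases.
    alpha = -np.sum(G * Dm) / denom if denom > 0 else 0.0
    X = X + max(alpha, 0.0) * D
```

After 10 iterations X has rank at most 10 by construction, illustrating how the rank of the iterate grows with the iteration count.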
Reconstruction from anisotropic random measurements by Mark Rudelson, Shuheng Zhou. The abstract reads:
Random matrices are widely used in sparse recovery problems, and the relevant properties of matrices with i.i.d. entries are well understood. The current paper discusses the recently introduced Restricted Eigenvalue (RE) condition, which is among the most general assumptions on the matrix guaranteeing recovery. We prove a reduction principle showing that the RE condition can be guaranteed by checking the restricted isometry on a certain family of low-dimensional subspaces. This principle allows us to establish the RE condition for several broad classes of random matrices with dependent entries, including random matrices with subgaussian rows and non-trivial covariance structure, as well as matrices with independent rows and uniformly bounded entries.
Link Delay Estimation via Expander Graphs by Mohammad H. Firooz, Sumit Roy. The abstract reads:
In network tomography, we seek to infer the status of parameters (such as delay) for links inside a network through end-to-end probing between (external) boundary nodes along chosen routes. In this work, we apply concepts from compressed sensing to establish conditions on the routing matrix under which it is possible to estimate link delay from end-to-end measurements and also provide an upper-bound on the estimation error. Further, we investigate choice of the appropriate set of paths within a given network with defined boundary nodes, so as to minimize the number of probes needed for estimation. Simulation results show that the proposed algorithm will, with high probability, achieve forty percent reduction in overhead required compared to probing between every available pair of boundary nodes.
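To see why sparsity helps here, consider a group-testing-flavored toy (not the paper's algorithm, and everything below is made up): each probe measures the sum of delays along one path, and when a single link is congested, intersecting the paths with excess delay pins it down with far fewer probes than one per link pair.

```python
# Toy network tomography: locate one congested link from path-sum probes.
base_delay = 1.0
paths = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}]   # paths as sets of link ids
congested, extra = 1, 5.0
link_delay = {l: base_delay + (extra if l == congested else 0.0)
              for l in range(4)}

# Each probe returns the end-to-end delay of one path (a row of the
# routing matrix applied to the link-delay vector).
probe = [sum(link_delay[l] for l in p) for p in paths]

# Paths whose delay exceeds the baseline must contain the congested link.
hot = [p for p, d in zip(paths, probe) if d > base_delay * len(p)]
suspects = set.intersection(*hot) if hot else set()
# Clean paths exonerate every link they traverse.
for p, d in zip(paths, probe):
    if d <= base_delay * len(p):
        suspects -= p
```

The paper's contribution is the general sparse-recovery version of this: conditions on the routing matrix (via expander graphs) under which a sparse vector of link delays is identifiable, plus a principled choice of probing paths.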
Causal Network Inference via Group Sparse Regularization by Andrew Bolstad, Barry Van Veen, Robert Nowak. The abstract reads:
This paper addresses the problem of inferring sparse causal networks modeled by multivariate auto-regressive (MAR) processes. Conditions are derived under which the Group Lasso (gLasso) procedure consistently estimates sparse network structure. The key condition involves a "false connection score." In particular, we show that consistent recovery is possible even when the number of observations of the network is far less than the number of parameters describing the network, provided that the false connection score is less than one. The false connection score is also demonstrated to be a useful metric of recovery in non-asymptotic regimes. The conditions suggest a modified gLasso procedure which tends to improve the false connection score and reduce the chances of reversing the direction of causal influence. Computational experiments and a real-network-based electrocorticogram (ECoG) simulation study demonstrate the effectiveness of the approach.
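The mechanism that makes the Group Lasso select whole connections rather than individual coefficients is its proximal operator, group soft-thresholding: each group is either shrunk as a block or zeroed out entirely. In the MAR setting one group would collect all lag coefficients of one candidate edge, so a zeroed group means "no causal connection." A minimal stdlib sketch of that operator:

```python
import math

def group_soft_threshold(w, lam):
    """Prox of lam * ||w||_2: shrink the whole group or kill it."""
    norm = math.sqrt(sum(v * v for v in w))
    if norm <= lam:
        return [0.0] * len(w)             # group dropped: no edge
    scale = 1.0 - lam / norm
    return [scale * v for v in w]         # group kept, uniformly shrunk

kept = group_soft_threshold([3.0, 4.0], lam=1.0)     # norm 5 -> scaled by 0.8
dropped = group_soft_threshold([0.3, 0.4], lam=1.0)  # norm 0.5 -> zeroed
```

A full gLasso fit just alternates gradient steps on the MAR least-squares loss with this operator applied group by group.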