With the semester ending at several universities, some theses are showing up on the radar screen. We also have a new crop of blog entries, papers, and presentations. Some of the papers toward the end are only marginally connected to compressive sensing, but they are interesting nonetheless. Finally, there is an announcement for a workshop. First things first, Danny Bickson has a blog, it is:

http://bickson.blogspot.com/

Following up on last week, here are two other summaries of the sublinear algorithm conference in Italy:

Now onto the papers/preprints:

Parallel Coordinate Descent for L1-Regularized Loss Minimization by Joseph K. Bradley, Aapo Kyrola, Danny Bickson, Carlos Guestrin. The abstract reads:

We propose Shotgun, a parallel coordinate descent algorithm for minimizing L1-regularized losses. Though coordinate descent seems inherently sequential, we prove convergence bounds for Shotgun which predict linear speedups, up to a problem-dependent limit. We present a comprehensive empirical study of Shotgun for Lasso and sparse logistic regression. Our theoretical predictions on the potential for parallelism closely match behavior on real data. Shotgun outperforms other published solvers on a range of large problems, proving to be one of the most scalable algorithms for L1.

Compressive Identification of Linear Operators by Reinhard Heckel, Helmut Bölcskei. The abstract reads:

We consider the problem of identifying a linear deterministic operator from an input-output measurement. For the large class of continuous (and hence bounded) operators, under additional mild restrictions, we show that stable identifiability is possible if the total support area of the operator's spreading function satisfies D <= 1/2. This result holds for arbitrary (possibly fragmented) support regions of the spreading function, does not impose limitations on the total extent of the support region, and, most importantly, does not require the support region of the spreading function to be known prior to identification. Furthermore, we prove that if identifiability is required only for almost all operators, stable identifiability is possible if D <= 1. This result is surprising as it says that there is no penalty for not knowing the support region of the spreading function prior to identification.

Signal Recovery Method for Compressive Sensing Using Relaxation and Second-Order Cone Programming by Flavio C. A. Teixeira, Stuart W. A. Bergen, and Andreas Antoniou. The abstract reads:

A signal recovery method for compressive sensing under noisy measurements is proposed. The problem is formulated as a nonconvex nonsmooth constrained optimization problem that uses the smoothly clipped absolute deviation (SCAD) function to promote sparsity. Relaxation is employed by means of a series of local linear approximations (LLAs) of the SCAD in a constrained formulation. The relaxation is shown to converge to a minimum of the original nonconvex constrained optimization problem. In order to solve each nonsmooth convex relaxation problem, a second-order cone programming (SOCP) formulation is used, which can be applied by using standard state-of-the-art SOCP solvers such as SeDuMi. Experimental results demonstrate that signals recovered using the proposed method exhibit reduced ℓ∞ reconstruction error when compared with competing methods such as ℓ1-Magic. Simulations demonstrate that significant reduction in the reconstruction error can be achieved with computational cost that is comparable to that required by the ℓ1-Magic algorithm.
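
The LLA step in the abstract is worth unpacking: each local linear approximation of the SCAD penalty turns the nonconvex problem into a weighted ℓ1 problem. The authors solve each relaxation by SOCP with solvers such as SeDuMi; purely as an illustration of the LLA idea, here is a minimal numpy sketch that instead solves each weighted ℓ1 subproblem by iterative soft-thresholding (the parameter choices, e.g. a = 3.7 and the step size, are standard defaults, not taken from the paper):

```python
import numpy as np

def scad_weight(t, lam, a=3.7):
    """Derivative of the SCAD penalty at |t|: the weight LLA assigns to |x_i|."""
    t = abs(t)
    if t <= lam:
        return lam
    if t <= a * lam:
        return (a * lam - t) / (a - 1)
    return 0.0  # already-large coefficients are left unpenalized

def lla_scad(A, y, lam=0.05, outer=5, inner=200):
    """LLA for SCAD-penalized least squares via iteratively reweighted ISTA."""
    n = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(outer):
        # linearize SCAD around the current iterate -> a weighted l1 problem
        w = np.array([scad_weight(xi, lam) for xi in x])
        for _ in range(inner):
            z = x - step * (A.T @ (A @ x - y))   # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)  # shrinkage
    return x

# small demo: recover a 5-sparse signal from 80 noisy random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x0 = np.zeros(200)
x0[rng.choice(200, 5, replace=False)] = 3 * rng.standard_normal(5)
y = A @ x0 + 0.01 * rng.standard_normal(80)
xhat = lla_scad(A, y)
print(np.linalg.norm(xhat - x0) / np.linalg.norm(x0))
```

The first outer pass (all weights equal to lam) is an ordinary Lasso; subsequent passes zero out the penalty on large coefficients, which is where SCAD's reduced bias comes from.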

Low-rank matrix recovery via iteratively reweighted least squares minimization by Massimo Fornasier, Holger Rauhut, and Rachel Ward. The abstract reads:

We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to iteratively recover any matrix with an error of the order of the best rank-k approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which expedites the solution of the least squares problems required at each iteration. We present numerical experiments which confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature.
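
To make the iteration concrete: with W = (X Xᵀ + ε²I)^(-1/2), each reweighted step minimizes tr(Xᵀ W X) subject to the measurements, and for matrix completion this constrained quadratic decouples into one small linear solve per column. The numpy sketch below implements that basic scheme; it omits the Woodbury-based speedups the paper describes, and the ε update rule is a simplified stand-in for the authors' schedule:

```python
import numpy as np

def irls_complete(M, mask, r, iters=60, eps=1.0):
    """Basic IRLS for matrix completion (target rank r).

    Each iteration minimizes ||W^(1/2) X||_F^2 subject to X agreeing with
    the observed entries, where W = (X X^T + eps^2 I)^(-1/2)."""
    n1, n2 = M.shape
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, _ = np.linalg.svd(X, full_matrices=False)
        eps = max(min(eps, s[r]), 1e-8)   # shrink eps as the rank reveals itself
        d = 1.0 / np.sqrt(s**2 + eps**2)
        W = (U * d) @ U.T + (np.eye(n1) - U @ U.T) / eps
        # the constrained quadratic decouples column by column
        for j in range(n2):
            k = mask[:, j]
            f = ~k
            if f.any():   # solve for the free entries given the observed ones
                X[f, j] = -np.linalg.solve(W[np.ix_(f, f)], W[np.ix_(f, k)] @ M[k, j])
    return X

# demo: complete a 20 x 30 rank-2 matrix from 70% of its entries
rng = np.random.default_rng(5)
M0 = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30))
mask = rng.random((20, 30)) < 0.7
Xhat = irls_complete(np.where(mask, M0, 0.0), mask, r=2)
print(np.linalg.norm(Xhat - M0) / np.linalg.norm(M0))
```

Note the weight matrix penalizes directions outside the current dominant column space by roughly 1/ε, which is what drives the iterate toward a low-rank solution consistent with the data.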

Sparse recovery for spherical harmonic expansions by Holger Rauhut, Rachel Ward. The abstract reads:

We show that sparse spherical harmonic expansions can be efficiently recovered from a small number of randomly chosen samples on the sphere. To establish the main result, we verify the restricted isometry property of an associated preconditioned random measurement matrix using recent estimates on the uniform growth of Jacobi polynomials.
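
The preconditioning step has a convenient one-dimensional analogue for Legendre expansions (studied by the same authors): sample from the Chebyshev measure and weight each row by (π/2)^(1/2)(1-x²)^(1/4), and the sampled polynomial matrix has near-orthonormal columns, which is the first step toward the restricted isometry property. A small numpy sanity check of that fact (the sizes are arbitrary choices, not from the paper):

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)
N, m = 30, 4000                       # max polynomial degree and number of samples
x = np.cos(np.pi * rng.random(m))     # draws from the Chebyshev measure on [-1, 1]
# Legendre polynomials, normalized to be orthonormal w.r.t. the measure dx/2
P = np.stack([np.sqrt(2 * k + 1) * legendre.legval(x, np.eye(N)[k])
              for k in range(N)], axis=1)
w = np.sqrt(np.pi / 2) * (1 - x**2) ** 0.25   # the preconditioning weights
A = (w[:, None] * P) / np.sqrt(m)
gram_err = np.abs(A.T @ A - np.eye(N)).max()
print(gram_err)   # small: the preconditioned columns are near-orthonormal
```

Without the weights, the Legendre polynomials grow like k^(1/2) near the endpoints and the Gram matrix would concentrate much more slowly; the weights cancel that growth uniformly, which is exactly the boundedness the RIP argument needs.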

On Low Rank Matrix Approximations with Applications to Synthesis Problem in Compressed Sensing by Anatoli Juditsky, Fatma Kilinc Karzan, Arkadi Nemirovski. The abstract reads:

We consider the synthesis problem of Compressed Sensing: given s and an M × n matrix A, extract from A an m × n submatrix A_m, with m as small as possible, which is s-good, that is, every signal x with at most s nonzero entries can be recovered from the observation A_m x by ℓ1 minimization: x = argmin_u { ||u||_1 : A_m u = A_m x }. We show that under reasonable assumptions the synthesis problem can be reformulated as the problem of entry-wise approximation, within a given accuracy, of the n × n matrix W = Y^T A, with Y ∈ R^{M×n} given, by a matrix of the form Y_m^T A_m, with A_m comprised of m rows of A. We propose randomized algorithms for efficiently solving the latter problem with accuracy guarantees E{ ||W - Y_m^T A_m||_∞ } <= O(1) L(Y,A) sqrt(ln(n)/m). Here L(Y,A) is an easy-to-specify quantity which in good cases is a moderate absolute constant (e.g., L(A,A) = 1 when A is the Hadamard matrix, and similarly for the matrix of the Fourier transform on any finite Abelian group). We also supply derandomized versions of the approximation algorithms which do not require random sampling of matrices and attain the same accuracy bounds. We further demonstrate that in terms of approximation accuracy our algorithms are optimal up to logarithmic in n factors. Finally, we provide preliminary numerical results on the performance of our algorithms for the synthesis problem.

GREEDY PURSUITS FOR COMPRESSED SENSING OF JOINTLY SPARSE SIGNALS by Dennis Sundman, Saikat Chatterjee and Mikael Skoglund. The abstract reads:

For compressed sensing with jointly sparse signals, we present a new signal model and two new joint iterative greedy-pursuit recovery algorithms. The signal model is based on the assumption of a jointly shared support-set, and the joint recovery algorithms have knowledge of the size of the shared support-set. Through experimental evaluation, we show that the new joint algorithms provide significant performance improvements compared to regular algorithms which do not exploit joint sparsity.
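
The shared support-set assumption is what joint algorithms exploit. The paper's algorithms are new, but the baseline idea is well illustrated by simultaneous OMP (SOMP), which at each step picks the atom most correlated with all residuals at once; a minimal numpy sketch, not the authors' code:

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: Y's columns measure signals sharing one k-sparse support."""
    n = A.shape[1]
    residual = Y.copy()
    support = []
    for _ in range(k):
        scores = np.linalg.norm(A.T @ residual, axis=1)  # joint correlation score
        scores[support] = -np.inf                        # never re-pick an atom
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ coef
    X = np.zeros((n, Y.shape[1]))
    X[support] = coef
    return X, sorted(support)

# four measurement vectors of signals sharing the same 5-element support
rng = np.random.default_rng(6)
m, n, k, L = 40, 100, 5, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
supp = rng.choice(n, k, replace=False)
X0 = np.zeros((n, L))
X0[supp] = rng.standard_normal((k, L))
Y = A @ X0
Xhat, found = somp(A, Y, k)
print(found == sorted(supp))
```

Pooling the correlations across the L residuals is what buys the performance gain over running OMP on each signal separately: an atom that is weak in one signal but present in all of them still scores highly.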

FAST LEVEL SET ESTIMATION FROM PROJECTION MEASUREMENTS by Kalyani Krishnamurthy, Waheed U. Bajwa, Rebecca Willett, and Robert Calderbank. The abstract reads:

Estimation of the level set of a function (i.e., regions where the function exceeds some value) is an important problem with applications in digital elevation maps, medical imaging, and astronomy. In many applications, however, the function of interest is acquired through indirect measurements, such as tomographic projections, coded-aperture measurements, or pseudo-random projections associated with compressed sensing. This paper describes a new methodology and associated theoretical analysis for rapid and accurate estimation of the level set from such projection measurements. The proposed method estimates the level set from projection measurements without an intermediate function reconstruction step, thereby leading to significantly faster computation. In addition, the coherence of the projection operator and McDiarmid's inequality are used to characterize the estimator's performance.

Errata: Sampling Trajectories for Sparse Image Recovery by Rebecca Willett. The abstract reads:

A key theorem regarding the Restricted Isometry Property for short sampling trajectories contained a normalization error which renders the statement of the theorem incorrect. This note describes the original claim and proof, the normalization issue and how it impacts the proof, and several avenues for future investigation.

There is also a new version of New and improved Johnson-Lindenstrauss embeddings via the Restricted Isometry Property by Felix Krahmer, Rachel Ward. The abstract reads:

Consider an m by N matrix Phi with the Restricted Isometry Property of order k and level delta, that is, the norm of any k-sparse vector in R^N is preserved to within a multiplicative factor of 1 +- delta under application of Phi. We show that by randomizing the column signs of such a matrix Phi, the resulting map with high probability embeds any fixed set of p = O(e^k) points in R^N into R^m without distorting the norm of any point in the set by more than a factor of 1 +- delta. Consequently, matrices with the Restricted Isometry Property and with randomized column signs provide optimal Johnson-Lindenstrauss embeddings up to logarithmic factors in N. In particular, our results improve the best known bounds on the necessary embedding dimension m for a wide class of structured random matrices; for partial Fourier and partial Hadamard matrices, we improve the recent bound m = O(delta^(-4) log(p) log^4(N)) appearing in Ailon and Liberty to m = O(delta^(-2) log(p) log^4(N)), which is optimal up to the logarithmic factors in N. Our results also have a direct application in the area of compressed sensing for redundant dictionaries.
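
A quick numerical illustration of the theorem's mechanism, using a partial Hadamard matrix (one of the structured families the paper covers) with randomized column signs; the sizes here are arbitrary, and this is a sanity check rather than a proof:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(7)
N, m, p = 512, 128, 20
rows = rng.choice(N, size=m, replace=False)
Phi = hadamard(N)[rows] / np.sqrt(m)        # partial Hadamard: an RIP-type matrix
signs = rng.choice([-1.0, 1.0], size=N)     # the Krahmer-Ward sign randomization
X = rng.standard_normal((N, p))             # a fixed set of p points in R^N
ratios = np.linalg.norm((Phi * signs) @ X, axis=0) / np.linalg.norm(X, axis=0)
print(ratios.min(), ratios.max())           # both close to 1: norms are preserved
```

The sign randomization is the whole point for structured matrices: a fixed partial Hadamard matrix has adversarial inputs whose norms it distorts badly, whereas flipping the column signs at random makes every fixed point set behave, with high probability, like an input to a Johnson-Lindenstrauss map.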

and a new version of Performance bounds for expander-based compressed sensing in Poisson noise by Maxim Raginsky, Sina Jafarpour, Zachary Harmany, Roummel Marcia, Rebecca Willett, Robert Calderbank.

and also a published version of Simultaneous support recovery in high dimensions: Benefits and perils of block $\ell_1/\ell_\infty$-regularization by Sahand Negahban and Martin J. Wainwright.

Presentations:

- Near Theoretical Lower Bound Sparse Representation and Compressive Sampling by Vahid Tarokh
- Signal Recovery Method for Compressive Sensing Using Relaxation and Second-Order Cone Programming by Flavio C. A. Teixeira, Stuart W. A. Bergen, and Andreas Antoniou. (cached version is here)

The three theses are:

SONAR Imaging using Compressive Sensing by Taylor Williams. The abstract reads:

Sonar imaging is a technique that can locate objects in a scene from their acoustic reflections. It utilizes multiple measurements from different angles to create a 2D image. Conventional imaging algorithms such as filtered backprojection (FBP) require a large number of measurements to produce accurate images. Theory indicates that an alternative approach called compressive sensing (CS) can create accurate images using just a fraction of the measurements. CS requires that the scene is compressible and that each individual measurement provides information about the scene, not just a particular location. The sonar imaging experiment must be designed to meet these requirements. The purpose of this study is to show that the compressive sensing framework can be used in sonar imaging with results comparable to conventional methods. A sonar test stand was built that can measure acoustic reflections from a scene. A pseudo-random noise (PN) code was used for transmission. Four speakers and 16 microphones were mounted at positions randomly perturbed from a circle surrounding the scene. Software was developed to create an image from this data through FBP. Also, a CS algorithm was developed to reconstruct an image from limited measurements. This algorithm uses a random subset of samples from each measurement. Initial results show that FBP can effectively be used to image a scene using acoustic waves. The CS algorithm yields a similar quality image using less than 10% of the measurements. These results show that CS can be used in sonar imaging to greatly reduce the number of measurements that need to be collected. The findings are directly portable to radar imaging, a field with a high level of research for both military and civilian uses.

Constrained imaging: denoising and sparse sampling by Justin Haldar. The abstract reads:

Magnetic resonance imaging (MRI) is a powerful tool for studying the anatomy, physiology, and metabolism of biological systems. Despite the fact that MRI was introduced decades ago and has already revolutionized medical imaging, current applications are still far from utilizing the full potential of the MR signal. Traditional MRI data acquisition and image reconstruction methods are based on simple Fourier inversion, leading to undesirable trade-offs between image resolution, signal-to-noise ratio (SNR), and data acquisition time. Classical approaches to addressing these trade-offs have relied on improved imaging hardware and more efficient pulse sequences. In contrast, our work addresses the limitations of MR using relatively less-explored signal processing approaches, which have recently become practical because of increasing computational capabilities. This dissertation concerns the use of constrained imaging models to guide the design of both data acquisition and image reconstruction, leading to improved imaging performance in the context of both noise-limited and resolution-limited scenarios. To address noise limitations for high-resolution imaging, we introduce a quasi-Bayesian edge-preserving smoothness prior for modeling correlated image sequences. The prior models the correlated edge structures that are observed in the image sequence, and is used within a penalized maximum likelihood framework to reduce image noise while preserving high-resolution anatomical structure. In contrast to many constrained imaging methods, we demonstrate that the proposed method is relatively simple to analyze and is robust to model inaccuracy when reconstruction parameters are chosen appropriately. Resolution and SNR analysis shows that the proposed formulations lead to substantial improvements in SNR with only a moderate decrease in spatial resolution. 
An examination of resolution and SNR trade-offs is presented, which serves as a guide for the optimal design of data acquisition and image reconstruction procedures in this context. To address limited spatial resolution in high-SNR scenarios, we design specialized data acquisition and image reconstruction procedures to enable image reconstruction from sparsely-sampled data. Specifically, we leverage prior information that the image has sparse or low-rank structure to significantly reduce sampling requirements in two different contexts. In the first context, we assume that the image is sparse in a known transform domain, and develop a novel non-Fourier data acquisition scheme to enable high-quality reconstruction from undersampled data. The second context is specific to spatiotemporal imaging, and it is assumed that the temporal evolution of the spatiotemporal image is highly correlated at different spatial positions. This correlation leads to the formulation of a novel low-rank matrix recovery problem, which we demonstrate can be solved efficiently and effectively using special algorithms. Applications of the proposed techniques are illustrated with simulated and experimental data from a variety of different MR imaging scenarios.

And an older Master's thesis: Image compression and recovery using compressive sampling and particle swarm optimization by Benjamin Van Ruitenbeek. The abstract reads:

We present a novel method for sparse signal recovery using Particle Swarm Optimization and demonstrate an application in image compression. Images are compressed with compressive sampling, and then reconstructed with particle swarm techniques. Several enhancements to the basic particle swarm algorithm are shown to improve signal recovery accuracy. We also present techniques specifically for reconstructing sparse image data and evaluate their performance.

Not strictly related to CS:

Multi-Scale Dictionary Learning using Wavelets by Boaz Ophir, Michael Lustig and Michael Elad. The abstract reads:

In this paper we present a multi-scale dictionary learning paradigm for sparse and redundant signal representations. The appeal of such a dictionary is obvious - in many cases data naturally comes at different scales. A multi-scale dictionary should be able to combine the advantages of generic multiscale representations (such as Wavelets) with the power of learnt dictionaries in capturing the intrinsic characteristics of a family of signals. Using such a dictionary would allow representing the data in a more efficient, i.e. sparse, manner, allowing applications to take a more global look at the signal. In this work we aim to achieve this goal without incurring the costs of an explicit dictionary with large atoms. The K-SVD using Wavelets approach presented here applies dictionary learning in the analysis domain of a fixed multi-scale operator. This way, sub-dictionaries at different data scales, consisting of small atoms, are trained. These dictionaries can then be efficiently used in sparse coding for various image processing applications, potentially outperforming both single-scale trained dictionaries and multi-scale analytic ones. In this paper we demonstrate this construction and discuss its potential through several experiments performed on fingerprint and coastal scenery images.

Audio Inpainting by Amir Adler, Valentin Emiya, Maria G. Jafari, Michael Elad, Remi Gribonval and Mark D. Plumbley. The abstract reads:

We propose the Audio Inpainting framework that recovers audio intervals distorted due to impairments such as impulsive noise, clipping, and packet loss. In this framework, the distorted samples are treated as missing, and the signal is decomposed into overlapping time-domain frames. The restoration problem is then formulated as an inverse problem per audio frame. Sparse representation modeling is employed per frame, and each inverse problem is solved using the Orthogonal Matching Pursuit algorithm together with a discrete cosine or a Gabor dictionary. The performance of this algorithm is shown to be comparable to or better than state-of-the-art methods when blocks of samples of variable durations are missing. We also demonstrate that the size of the block of missing samples, rather than the overall number of missing samples, is a crucial parameter for high quality signal restoration. We further introduce a constrained Matching Pursuit approach for the special case of audio declipping that exploits the sign pattern of clipped audio samples and their maximal absolute value, as well as allowing the user to specify the maximum amplitude of the signal. This approach is shown to outperform state-of-the-art and commercially available methods for audio declipping.
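
As a concrete miniature of the per-frame restoration step: treat the distorted samples as missing, restrict a DCT dictionary to the observed rows, run OMP there, and synthesize the full frame from the recovered coefficients. The sketch below is not the authors' code, and the frame length, missing-sample mask, and sparsity level are made up for the demo; it shows the idea on an exactly sparse synthetic frame:

```python
import numpy as np
from scipy.fft import idct

def omp(D, y, n_atoms):
    """Orthogonal Matching Pursuit on dictionary D (columns are atoms)."""
    residual, support = y.copy(), []
    for _ in range(n_atoms):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

# one frame of a synthetic "audio" signal, exactly sparse in the DCT domain
n = 256
c = np.zeros(n)
c[[3, 17, 40]] = [1.0, 0.6, 0.4]
frame = idct(c, norm='ortho')

mask = np.ones(n, dtype=bool)
mask[100:120] = False                          # a burst of missing samples
D = idct(np.eye(n), axis=0, norm='ortho')      # DCT synthesis dictionary

# fit the observed samples only, then synthesize the full frame
support, coef = omp(D[mask], frame[mask], n_atoms=3)
restored = D[:, support] @ coef
print(np.max(np.abs(restored - frame)))
```

Because the sparse model is fit on the observed rows but synthesized over all rows, the missing burst is filled in for free once the right atoms are found, which is the essence of inpainting with a sparse prior.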

Finally, here is an announcement for a meeting at the intractability center:

Workshop: Counting, Inference and Optimization on Graphs

November 2 – 5, 2011

The aim of this multidisciplinary workshop is to bring together various communities who work on counting, inference, and optimization problems related to graphs. These communities include

- Theoretical computer science

- Information/coding theory

- Statistical physics

- Statistical inference

During this 3.5-day-long workshop at the Princeton Center for Computational Intractability, topics will include holographic algorithms, complexity dichotomy theorems, capacity of constrained coding schemes, graphical models and belief propagation algorithms, and the computation of partition functions in classical and quantum physics.

Credit: NASA. This is one of the most amazing shots I have seen of the International Space Station: you can see the Space Shuttle, the European ATV, and the Russian Soyuz all docked to it at the same time (you will probably not see this after June). Other photos taken by Ron, an astronaut, during one of his extra-vehicular activities can be found here. (via NASA Watch)
