Some of you are going to take a little break on Thursday, but in case you get bored waiting for the next plane, or because it is going to be a slow news day (unless North Korea continues doing what it just started), you may want to read the following papers. Enjoy.
First, here are some lecture notes entitled Compressed Sensing and Sparse Signal Processing by Wu-Sheng Lu. I have put a link to them on the CS course page.
Just found on arXiv:
A constrained, total-variation minimization algorithm for low-intensity X-ray CT by Emil Y. Sidky, Yuval Duchin, Christer Ullberg, Xiaochuan Pan. The abstract reads:
Purpose: We develop an iterative image-reconstruction algorithm for application to low-intensity computed tomography (CT) projection data, which is based on constrained, total-variation (TV) minimization. The algorithm design focuses on recovering structure on length scales comparable to a detector-bin width.
Method: Recovering resolution on the scale of a detector bin requires that the pixel size be much smaller than the bin width. The resulting image array contains many more pixels than data, and this undersampling is overcome with a combination of Fourier upsampling of each projection and the use of constrained TV-minimization, as suggested by compressive sensing. The presented pseudo-code for solving constrained TV-minimization is designed to yield an accurate solution to this optimization problem within 100 iterations.
Results: The proposed image-reconstruction algorithm is applied to a low-intensity scan of a rabbit with a thin wire, to test resolution. The proposed algorithm is compared with filtered back-projection (FBP).
Conclusion: The algorithm may have some advantage over FBP in that the resulting noise level is lowered at equivalent contrast levels of the wire.
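For intuition, here is a heavily simplified sketch in the spirit of such constrained TV-minimization loops: an ASD-POCS-flavored alternation between a data-consistency gradient step and steepest descent on the TV term. This is not the paper's pseudo-code; A is a generic system matrix, and the step sizes and iteration counts are illustrative assumptions.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term for a 2-D image."""
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    gx, gy = dx / mag, dy / mag
    # the TV gradient is minus the divergence of the normalized gradient field
    div = (gx - np.roll(gx, 1, axis=0)) + (gy - np.roll(gy, 1, axis=1))
    return -div

def tv_reconstruct(A, b, shape, n_iter=100, tv_step=0.2, tv_sub=10):
    """Alternate a data-consistency gradient step on ||Ax - b||_2 with a few
    steepest-descent steps on the TV term (a toy loop, not the paper's code)."""
    x = np.zeros(shape)
    data_step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative Landweber step
    for _ in range(n_iter):
        r = A @ x.ravel() - b
        x = x - data_step * (A.T @ r).reshape(shape)
        x = np.clip(x, 0.0, None)                 # nonnegativity, usual for CT
        for _ in range(tv_sub):
            g = tv_grad(x)
            gn = np.linalg.norm(g)
            if gn > 0:
                x = x - tv_step * (g / gn)        # normalized TV descent
    return x
```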
On HAL: Double Sparsity: Towards Blind Estimation of Multiple Channels by Prasad Sudhakar, Simon Arberet and Remi Gribonval. The abstract reads:
We propose a framework for blind multiple filter estimation from convolutive mixtures, exploiting the time-domain sparsity of the mixing filters and the disjointness of the sources in the time-frequency domain. The proposed framework includes two steps: (a) a clustering step, to determine the frequencies where each source is active alone; (b) a filter estimation step, to recover the filter associated to each source from the corresponding incomplete frequency information. We show how to solve the filter estimation step (b) using convex programming, and we explore numerically the factors that drive its performance. Step (a) remains challenging, and we discuss possible strategies that will be studied in future work.
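Step (b), recovering a time-sparse filter from incomplete frequency information, is a classic l1/partial-Fourier recovery problem. Here is a minimal sketch of that step, assuming a single real filter, noiseless frequency samples, and CVXPY as the convex solver; the sizes n, s, m and the random row selection are illustrative stand-ins for the clustering output of step (a).

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, s, m = 256, 8, 60                     # filter length, sparsity, known frequencies

# ground-truth sparse filter and a random subset of DFT rows (the frequencies
# where, per step (a), one source is assumed active alone)
h = np.zeros(n)
h[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
rows = rng.choice(n, m, replace=False)
y = F[rows] @ h                          # incomplete frequency information

# step (b): convex (l1) recovery of the time-sparse filter
x = cp.Variable(n, complex=True)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [F[rows] @ x == y])
prob.solve()
print("recovery error:", np.linalg.norm(np.real(x.value) - h))
```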
An ALPS View of Sparse Recovery by Volkan Cevher. The abstract reads:
We provide two compressive sensing (CS) recovery algorithms based on iterative hard-thresholding. The algorithms, collectively dubbed as algebraic pursuits (ALPS), exploit the restricted isometry properties of the CS measurement matrix within the algebra of Nesterov’s optimal gradient methods. We theoretically characterize the estimation and convergence guarantees of ALPS for signals that are sparse on ortho-bases as well as on tight-frames. Simulation results demonstrate a great potential for ALPS in terms of phase-transition, noise robustness, and CS reconstruction.
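As a rough illustration of the ALPS idea, the sketch below combines a gradient step on the data error with hard thresholding onto the k-sparse set, driven by a Nesterov momentum schedule. This is generic accelerated iterative hard thresholding, not the authors' exact pseudo-code, and the step-size rule is a conservative assumption.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def accelerated_iht(Phi, u, k, n_iter=200):
    """Iterative hard thresholding with a Nesterov-style momentum term."""
    n = Phi.shape[1]
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2        # conservative step size
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ y - u)                # gradient of 0.5*||Phi y - u||^2
        x_new = hard_threshold(y - step * grad, k)  # project onto k-sparse set
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```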
Fast Hard Thresholding with Nesterov’s Gradient Method by Volkan Cevher, Sina Jafarpour. The abstract reads:
We provide an algorithmic framework for structured sparse recovery which unifies combinatorial optimization with the non-smooth convex optimization framework by Nesterov [1, 2]. Our algorithm, dubbed Nesterov iterative hard-thresholding (NIHT), is similar to the algebraic pursuits (ALPS) in [3] in spirit: we use the gradient information in the convex data error objective to navigate over the nonconvex set of structured sparse signals. While ALPS feature a priori estimation and convergence guarantees, we were only able to provide an online estimation guarantee for NIHT (e.g., the guarantees require the algorithm execution). Experiments show however that NIHT can empirically outperform ALPS and other state-of-the-art convex optimization-based algorithms in sparse recovery.

Of note: besides the "traditional" comparison with the Donoho-Tanner phase transition, the paper also shows the same DT phase transition with an additional positivity constraint.
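Regarding that positivity-constrained phase transition: in a hard-thresholding scheme like the one sketched above, positivity can be enforced by swapping the projection step, for instance as below. This is my reading of how one might do it, not the paper's code; the Euclidean projection onto nonnegative k-sparse vectors clips negative entries first, then keeps the k largest survivors.

```python
import numpy as np

def hard_threshold_nonneg(x, k):
    """Projection onto the set of nonnegative k-sparse vectors: clip the
    negative entries, then keep the k largest of what remains."""
    xp = np.clip(x, 0.0, None)
    out = np.zeros_like(xp)
    idx = np.argpartition(xp, -k)[-k:]
    out[idx] = xp[idx]
    return out
```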
A Tutorial on Sparse Signal Acquisition and Recovery with Graphical Models by Volkan Cevher, Piotr Indyk, Lawrence Carin, Richard G. Baraniuk.
Compressive Sensing Meets Game Theory by Sina Jafarpour, Robert E. Schapire, Volkan Cevher. The abstract reads:
We introduce the Multiplicative Update Selector and Estimator (MUSE) algorithm for sparse approximation in underdetermined linear regression problems. Given f = Φα + μ, the MUSE provably and efficiently finds a k-sparse vector α̂ such that ‖Φα̂ − f‖₁ ≤ ‖μ‖₁ + O(1/√k), for any k-sparse vector α, any measurement matrix Φ, and any noise vector μ. We cast the sparse approximation problem as a zero-sum game over a properly chosen new space; this reformulation provides salient computational advantages in recovery. When the measurement matrix Φ provides stable embedding to sparse vectors (the so-called restricted isometry property in compressive sensing), the MUSE also features guarantees on ‖α − α̂‖₂. Simulation results demonstrate the scalability and performance of the MUSE in solving sparse approximation problems based on the Dantzig Selector.
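The zero-sum-game reformulation is the paper's key move, and MUSE builds on the multiplicative-weights primitive. For readers unfamiliar with it, here is a generic multiplicative-weights loop for a zero-sum matrix game; it is not MUSE itself, and loss_matrix, eta, and the best-response adversary are illustrative assumptions.

```python
import numpy as np

def multiplicative_weights(loss_matrix, n_rounds=500, eta=0.1):
    """Generic multiplicative-weights play of a zero-sum game: the row player
    keeps a distribution over pure strategies and reweights each strategy by
    exp(-eta * loss) against the column player's best response."""
    n_rows, _ = loss_matrix.shape
    w = np.ones(n_rows)
    avg_play = np.zeros(n_rows)
    for _ in range(n_rounds):
        p = w / w.sum()                        # current mixed strategy
        j = np.argmax(p @ loss_matrix)         # adversary's best-response column
        w *= np.exp(-eta * loss_matrix[:, j])  # multiplicative update
        avg_play += p
    return avg_play / n_rounds                 # approaches a minimax strategy
```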
Online Performance Guarantees for Sparse Recovery by Raja Giryes and Volkan Cevher. The abstract reads:
A K-sparse vector x ∈ R^N produces measurements via linear dimensionality reduction as u = Φx + n, where Φ ∈ R^{M×N} (M < N), and n ∈ R^M consists of independent and identically distributed, zero-mean Gaussian entries with variance σ². An algorithm, after its execution, determines a vector x̂ that has K nonzero entries and satisfies ‖u − Φx̂‖₂ ≤ ε. How far can x̂ be from x? When the measurement matrix Φ provides stable embedding to 2K-sparse signals (the so-called restricted isometry property), they must be very close. This paper therefore establishes an analytic bound to characterize the distance ‖x̂ − x‖₂ based on the online meta-information. Our bound improves the pre-run algorithmic recovery guarantees, and is quite useful in exploring various data error and solution sparsity trade-offs. We also evaluate the performance of several sparse recovery algorithms in the context of our bound.
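To see where an online bound of this kind can come from, here is a standard RIP argument (a sketch, not necessarily the paper's exact bound): x̂ − x is at most 2K-sparse, so the restricted isometry lower bound applies to it, and substituting u = Φx + n turns the measurement residual into the numerator.

```latex
% x and \hat{x} are both K-sparse, so \hat{x} - x is (at most) 2K-sparse and
% the RIP lower bound \|\Phi v\|_2 \ge \sqrt{1-\delta_{2K}}\,\|v\|_2 applies:
\|\hat{x} - x\|_2
  \le \frac{\|\Phi(\hat{x} - x)\|_2}{\sqrt{1 - \delta_{2K}}}
% substituting u = \Phi x + n and using the triangle inequality:
  \le \frac{\|u - \Phi\hat{x}\|_2 + \|n\|_2}{\sqrt{1 - \delta_{2K}}}
  \le \frac{\varepsilon + \|n\|_2}{\sqrt{1 - \delta_{2K}}}.
```

The first term in the numerator, ‖u − Φx̂‖₂, is exactly the residual reported after execution, which is what makes such a bound computable online.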
Submodular Dictionary Selection for Sparse Representation by Andreas Krause, Volkan Cevher. The abstract reads:
We develop an efficient learning framework to construct signal dictionaries for sparse representation by selecting the dictionary columns from multiple candidate bases. By sparse, we mean that only a few dictionary elements, compared to the ambient signal dimension, can exactly represent or well-approximate the signals of interest. We formulate both the selection of the dictionary columns and the sparse representation of signals as a joint combinatorial optimization problem. The proposed combinatorial objective maximizes variance reduction over the set of training signals by constraining the size of the dictionary as well as the number of dictionary columns that can be used to represent each signal. We show that if the available dictionary column vectors are incoherent, our objective function satisfies approximate submodularity. We exploit this property to develop SDSOMP and SDSMA, two greedy algorithms with approximation guarantees. We also describe how our learning framework enables dictionary selection for structured sparse representations, e.g., where the sparse coefficients occur in restricted patterns. We evaluate our approach on synthetic signals and natural images for representation and inpainting problems.
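The greedy structure underlying such guarantees can be sketched generically: for a monotone (approximately) submodular objective, repeatedly adding the column with the largest marginal gain enjoys the classical (1 − 1/e)-type approximation guarantee under a cardinality constraint. Below, objective is an abstract callable standing in for the paper's variance-reduction criterion; it is an illustrative placeholder, not the SDSOMP/SDSMA implementation.

```python
def greedy_select(n_candidates, objective, budget):
    """Greedy maximization of a monotone (approximately) submodular set
    function over candidate column indices {0, ..., n_candidates - 1}."""
    selected, remaining = [], set(range(n_candidates))
    current = objective(selected)
    while len(selected) < budget and remaining:
        # pick the candidate column with the largest marginal gain
        best_j = max(remaining, key=lambda j: objective(selected + [j]))
        gain = objective(selected + [best_j]) - current
        if gain <= 0:
            break                      # no candidate improves the objective
        selected.append(best_j)
        remaining.discard(best_j)
        current += gain
    return selected
```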
Distributed Bearing Estimation via Matrix Completion by Andrew Waters and Volkan Cevher. The abstract reads:
We consider bearing estimation of multiple narrow-band plane waves impinging on an array of sensors. For this problem, bearing estimation algorithms such as minimum variance distortionless response (MVDR), multiple signal classification, and maximum likelihood generally require the array covariance matrix as sufficient statistics. Interestingly, the rank of the array covariance matrix is approximately equal to the number of the sources, which is typically much smaller than the number of sensors in many practical scenarios. In these scenarios, the covariance matrix is low-rank and can be estimated via matrix completion from only a small subset of its entries. We propose a distributed matrix completion framework to drastically reduce the inter-sensor communication in a network while still achieving near-optimal bearing estimation accuracy. Using recent results in noisy matrix completion, we provide sampling bounds and show how the additive noise at the sensor observations affects the reconstruction performance. We demonstrate via simulations that our approach sports desirable tradeoffs between communication costs and bearing estimation accuracy.
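As a sketch of the completion step, here is the standard singular value thresholding (SVT) iteration of Cai, Candès and Shen applied to a partially observed matrix. The paper's distributed, noisy setting is more involved, and a real array covariance would be complex Hermitian rather than the real matrix assumed here; tau and step follow the usual SVT heuristics.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=None, step=1.2, n_iter=200):
    """Singular value thresholding for low-rank matrix completion.
    M_obs holds the observed entries (zeros elsewhere); mask is 0/1."""
    if tau is None:
        tau = 5 * np.sqrt(M_obs.size)             # common SVT heuristic
    Y = np.zeros_like(M_obs)
    X = Y
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y = Y + step * mask * (M_obs - X)         # step on observed residual
    return X
```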