
Tuesday, September 16, 2008

CS: A video, Two talks, A Randomized Algorithm for PCA


I mentioned it before, but the impressive results of Compressive Structured Light for Recovering Inhomogeneous Participating Media by Jinwei Gu, Shree Nayar, Eitan Grinspun, Peter Belhumeur, and Ravi Ramamoorthi are presented very nicely in a video located here. The video has been added to the video section of the CS pages.



Thomas Blumensath and Mike Davies have a revised version of "Sampling Theorems for Signals from the Union of Linear Subspaces".

Vladimir Rokhlin, Arthur Szlam, and Mark Tygert just released A Randomized Algorithm for Principal Component Analysis. The abstract reads:
Principal component analysis (PCA) requires the computation of a low-rank approximation to a matrix containing the data being analyzed. In many applications of PCA, the best possible accuracy of any rank-deficient approximation is at most a few digits (measured in the spectral norm, relative to the spectral norm of the matrix being approximated). In such circumstances, existing efficient algorithms do not guarantee good accuracy for the approximations they produce, unless one or both dimensions of the matrix being approximated are small. We describe an efficient algorithm for the low-rank approximation of matrices that produces accuracy very close to the best possible, for matrices of arbitrary sizes. We illustrate our theoretical results via several numerical examples.
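For readers unfamiliar with this family of methods, the general idea is to capture the range of the data matrix with a random projection and then factor the resulting small matrix exactly. The sketch below is not the authors' algorithm, just a minimal NumPy illustration of that idea; the function name, oversampling amount, and number of power iterations are my own choices.

```python
# Minimal sketch of a randomized rank-k approximation (illustrative only,
# not the Rokhlin/Szlam/Tygert algorithm).
import numpy as np

def randomized_low_rank(A, k, oversample=10, n_iter=2):
    """Approximate the top-k factors of A (m x n): A ~ U diag(s) Vt."""
    m, n = A.shape
    Omega = np.random.randn(n, k + oversample)   # random test matrix
    Y = A @ Omega                                # sample the range of A
    for _ in range(n_iter):                      # power iterations sharpen
        Y = A @ (A.T @ Y)                        # the captured subspace
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis
    B = Q.T @ A                                  # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A 1000 x 300 matrix that is nearly rank 5.
    A = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 300))
    A += 1e-3 * rng.standard_normal(A.shape)
    U, s, Vt = randomized_low_rank(A, k=5)
    err = np.linalg.norm(A - U @ np.diag(s) @ Vt, 2) / np.linalg.norm(A, 2)
    print("relative spectral-norm error:", err)
```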

There are two upcoming talks listed in the CS Calendar:

Applied Math Seminar at the Computer Science Department at Yale.
Speaker: Shai Dekel, Ph.D., Chief Scientist, Imaging Solutions, GE Healthcare IT
Titles: "Adaptive compressed image sensing based on wavelet-trees" and "On the equivalence of the modulus of smoothness and the K-functional over convex domains".

When/where: Thursday, September 18th, 2008, 3:30PM, Room 500 AKW

Abstract:
In this talk I will give two 30-minute talks presenting the above recent results. The first is a compressed sensing algorithm that actually works on large images (to the best of my knowledge, all known CS algorithms 'choke' on images larger than 512x512). The second topic is a classic problem in approximation theory. The abstract is here.



September 15, 2008: a talk by Rafael Carrillo entitled Robust Sampling and Reconstruction Methods for Sparse Signals in the Presence of Impulsive Noise, at the University of Delaware, 11:15 AM, Evans Hall, Room 204.

Abstract of the talk:

Recent results in compressed sensing have shown that a sparse or compressible signal can be reconstructed from a few incoherent measurements, with the reconstruction formulated as an optimization problem or implemented through iterative algorithms. Compressive sensing systems are not immune to noise, which is always present in practical acquisition systems. When the underlying signal is corrupted by heavy-tailed or impulsive noise, commonly employed linear measurements are severely affected, as sampling operators coupled with current geometric and greedy reconstruction algorithms fail to recover a fair approximation of the signal. Similarly, when the sensed measurements are corrupted with such noise, current approaches also fail to yield faithful estimates of the original signal. In this work, we propose robust methods for sampling and reconstructing signals in the presence of impulsive noise. To solve the problem of noise embedded in the underlying signal prior to the measurement process, we propose a robust nonlinear measurement operator based on the weighted myriad filter family. Myriad-based measurements offer robustness in impulsive environments while, at the same time, enabling use of standard reconstruction algorithms derived for linear measurements. To recover sparse signals from noisy measurements, we propose a geometric reconstruction algorithm based on L1 minimization employing a nonlinear constraint. Specifically, a Lorentzian norm constraint on the error measure defines the reconstruction feasible set. Analysis of the proposed methods shows that even in harsh environments, when the noise possesses infinite variance, we have a finite reconstruction error and, furthermore, these methods yield an approximate reconstruction. Simulations demonstrate that the proposed methods significantly outperform commonly employed compressed sensing and reconstruction techniques in heavy-tailed environments, while providing comparable performance in less demanding, light-tailed environments.
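The Lorentzian norm is what makes the proposed constraint robust to impulses. Here is a small hedged sketch, assuming the usual definition sum(log(1 + u_i^2 / gamma^2)) with a scale parameter gamma that I pick arbitrarily, comparing it to the ordinary L2 norm on a residual hit by a few large impulses.

```python
# Hedged illustration of why a Lorentzian-norm data-fidelity constraint is
# attractive under impulsive noise (definition and gamma are my assumptions).
import numpy as np

def lorentzian_norm(u, gamma=1.0):
    # sum(log(1 + (u_i / gamma)^2)): grows only logarithmically with outliers
    return np.sum(np.log1p((u / gamma) ** 2))

rng = np.random.default_rng(1)
residual = 0.01 * rng.standard_normal(1000)    # small Gaussian measurement noise
residual_impulsive = residual.copy()
residual_impulsive[::100] += 50.0               # add a few large impulses

# The L2 norm blows up under the impulses; the Lorentzian norm grows mildly,
# so a Lorentzian constraint keeps the feasible set meaningful in that regime.
print("L2:        ", np.linalg.norm(residual), np.linalg.norm(residual_impulsive))
print("Lorentzian:", lorentzian_norm(residual), lorentzian_norm(residual_impulsive))
```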




View of Gilchrist, Texas before Hurricane Ike.

Photo credit: NOAA. Photos taken by a NOAA aircraft after Hurricane Ike. This is a view of Gilchrist, Texas.

2 comments:

  1. Igor, I think that the abstract you put for "A Randomized Algorithm for Principal Component Analysis" is the wrong one.

    It is actually the abstract of "Cryo-EM Structure Determination through Eigenvectors of Sparse Matrices" by R. Coifman, Y. Shkolnisky, F. J. Sigworth and A. Singer: www.cs.yale.edu/homes/ys264/preprints/CryoEM_Technical_ReportV6.pdf

    Laurent

  2. Laurent,

    Thanks. I wanted to feature them both. The randomized PCA algorithm is of interest for its randomized aspect, while the Cryo-EM one is of interest because of its use of random projections.
    Let me correct that, thanks for the eagle eye.

    Igor.
