
Friday, May 02, 2008

CS: Inverse scattering by CS, Equivalence between l1 and l0, James Lee's not-a-blog blog, L1 at TAMU, FPGA for CS


I had not seen this even though I am very interested in the subject, but CS seems to be taking off in the area of inverse problems in the electromagnetic radiation area:

Edwin Marengo, Ronald Hernandez, Y.R. Citron, Fred Gruber, M. Zambrano, and Hanoch Lev-Ari introduce us to Compressive sensing for inverse scattering. From the introduction we can read:

Compressive sensing is a new field in signal processing and applied mathematics. It allows one to simultaneously sample and compress signals which are known to have a sparse representation in a known basis or dictionary along with the subsequent recovery by linear programming (requiring polynomial (P) time) of the original signals with low or no error [1–3]. Compressive measurements or samples are non-adaptive, possibly random linear projections of the given signal. Most importantly, sparsity arises in many physical signals, hence this approach is of significant importance. The results in this area apply to biomedical imaging, astronomy, single-pixel photography, and many other disciplines. The present work outlines certain aspects of our ongoing research in this area. Of much interest is the development of new approaches to linear and nonlinear inverse scattering problems [4] that are based on compressive sensing ideas. Past work in compressive sensing has been restricted to linear inverse problems of the form y = Ax where A is a matrix mapping input (object) x to output (data) y. In this linear context, the focus has been to show that despite significant undersampling of the data signal y as projections of the form y_0 = P^T y, where T denotes the complex conjugate transpose and P is a measurement matrix obeying a mild incoherency property, one can for a broad class of sparse object signals x (where most of the entries of x are zero or negligible) still carry out perfect inversions or reconstructions with low error to the compressed inverse problem of inverting x from y_0 where y_0 = P^T y = P^TAx. The focus of the present research is to study how these results apply to wave inverse scattering, in both the linear regime of the so-called Born approximation for weakly scattering objects as well as in the more general context of strongly scattering objects exhibiting non-negligible multiple scattering interactions.
The derived developments are motivated mostly by detection and imaging applications (e.g., biomedical sensing, nondestructive testing, radar) requiring computationally non-intensive (P time) signal processing with limited wave data about a given event or target of interest. Particular emphasis is given in the following to the framework termed Bayesian compressive sensing [1]. Inspired by this approach, next we derive new inverse scattering approaches based on maximum a posteriori probability (MAP) estimators for the unknown object function (the scattering potential representing the material constitutive properties) which is assumed to arise as a realization of a sparsity-inducing Laplace prior. In the resulting inversion methods, an l1-norm regularizing constraint substitutes the l2-norm constraint that is more typically adopted in inverse theory. This l1-norm constraint is typical of compressive sensing theory [1–3]. Two different frameworks are proposed: The first strategy corresponds to compressive sensing counterparts of conventional iterative Born methods. The second approach focuses on the more restricted goals of scatterer localization (for point targets) and shape reconstruction (for extended scatterers). This focus provides non-iterative approaches that are compressive sensing counterparts of the more familiar maximum likelihood, MUSIC, and other signal-subspace-based methods which have been an active research area of our group in recent years [5–8].
Edwin Marengo has also released the following preprints/talks on the subject:
  1. Compressive sensing and signal subspace methods for inverse scattering including multiple scattering
  2. Subspace and Bayesian compressive sensing methods in imaging
  3. Inverse scattering by compressive sensing and signal subspace methods
In a different area, Yoav Sharon, John Wright and Yi Ma just released Computation and Relaxation of Conditions for Equivalence between l1 and l0 Minimization. The abstract reads:
In this paper, we investigate the exact conditions under which the l1 and l0 minimizations arising in the context of sparse error correction or sparse signal reconstruction are equivalent. We present a much simplified condition for verifying equivalence, which leads to a provably correct algorithm that computes the exact sparsity of the error or the signal needed to ensure equivalence. Our algorithm is combinatorial, and for large matrices it can only be used to estimate, for signals with a certain sparsity, the probability that the l1 and l0 minimizations are equivalent. For small matrices, however, it produces the exact result in a reasonably short time, up to 10^6 times faster than the only other algorithm known for this problem. We present an example application where such small matrices are needed, and for which our algorithm can greatly assist with system design. We also show how, in the case when the encoding matrix is imbalanced, an optimal diagonal rescaling matrix can be computed via linear programming, so that the rescaled system enjoys the widest possible equivalence.
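To give a feel for why such a check is combinatorial (and why it only scales to small matrices), here is a brute-force sketch, not the authors' algorithm: since the success of l1 recovery depends only on the support and sign pattern of the signal, one can enumerate every s-sparse support/sign pattern and solve the corresponding l1 linear program. The 6x10 matrix and helper names below are illustrative assumptions:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def l1_recovers(A, x):
    """Solve min ||z||_1 s.t. A z = A x and check whether z == x."""
    m, n = A.shape
    # Split z = u - v with u, v >= 0; minimize sum(u) + sum(v).
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=A @ x,
                  bounds=[(0, None)] * (2 * n), method="highs")
    z = res.x[:n] - res.x[n:]
    return np.allclose(z, x, atol=1e-6)

def equivalence_holds(A, s):
    """True iff l1 recovery succeeds for every s-sparse support/sign pattern."""
    n = A.shape[1]
    for support in itertools.combinations(range(n), s):
        for signs in itertools.product([-1.0, 1.0], repeat=s):
            x = np.zeros(n)
            x[list(support)] = signs
            if not l1_recovers(A, x):
                return False
    return True

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 10))
print(equivalence_holds(A, 1))
```

The inner loop runs over C(n, s) * 2^s patterns, which is why the paper's much faster exact algorithm matters even at modest sizes.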

On the theoretical side of things:

James Lee (who I mentioned before) has a blog, but he is not calling it a blog. Great. In his first post, he gives the reasons for his not-a-blog-dedicated-to-problems-of-importance-to-mathematicians-from-the-standpoint-of-theoretical-computer-science-internet-enabled-journal... errr... I'll call it a blog:

At the moment, this blog is merely an experiment in mathematical exposition. The focus is on mathematics that arises in theoretical computer science. The idea is to tell mathematicians about what goes on in TCS, as well as to introduce relevant mathematical techniques to theoretical computer scientists at large. Feedback is welcome.

The first three posts of this trial will concern:

  1. Planar multi-flows, embeddings, and differentiation
  2. The explicit subspace problem, compressed sensing, and error-correction over the reals
  3. Geometry of the Laplacian on graphs and spectral data analysis

The first two posts are:
  1. Planar multi-flows, L_1 embeddings, and differentiation
  2. The pseudorandom subspace problem

Finally on the theoretical side (but more applied), let's not forget the upcoming conference on Nonlinear Approximation Techniques Using L1 at Texas A&M University (in College Station, Texas) a week and a half from now. So if you are around central Texas, mark your calendar... It'll take place on May 16-18, 2008 in the Blocker Building, Room 120. It is organized by Jean-Luc Guermond, Bojan Popov and Ron DeVore. The current schedule of talks includes the following:

Friday, May 16

9:00 - 9:50 Tony Chan: TVL1 models for imaging: global optimization and geometric properties: Part I
10:30 - 11:10 Selim Esedoglu: TVL1 models for imaging: global optimization and geometric properties: Part II
11:20 - 12:00 Justin Romberg: Architectures for Compressive Sampling
2:00 - 2:50 Richard Baraniuk: Compressive Learning and Inference
3:30 - 4:10 Yin Zhang: Enhanced Compressive Sensing and More
4:20 - 5:00 Triet Le: Modeling Oscillations

Saturday May 17

9:00 - 9:50 Stanley Osher: Fast Bregman Iteration for Compressive Sensing and Sparse Denoising
10:30 - 11:10 Anna Gilbert: Combining geometry and combinatorics: a unified approach to sparse signal recovery
11:20 - 12:00 Ronald DeVore: Decoders for Compressed Sensing
2:00 - 2:50 Eitan Tadmor: L1 Techniques for Three Problems in PDEs, Numerics and Image Processing
3:30 - 4:10 Gitta Kutyniok: l1-Minimization and the Geometric Separation Problem
4:20 - 5:00 Jared Tanner: The surprising structure of Gaussian point clouds and its implications for signal processing

Sunday, May 18

9:00 - 9:50 Panagiotis Souganidis: Rates of convergence for monotone approximations of viscosity solutions
10:30 - 11:10 Alexander Kurganov: Numerical Methods for Modern Traffic Flow Models
11:20 - 12:00 Jean-Luc Guermond/Bojan Popov: Approximating PDEs in L1
12:10 - 12:50 Alexander Petoukhov: l^1 greedy algorithm for finding solutions of underdetermined linear systems

and also presenting:
Yen-Hsi Richard Tsai (UT Austin), Tom Goldstein (UCLA)

I guess eating popcorn at the presentations would not be OK. Sigh.

On the technology implementation side, Arjun Hari, a student in Ali Akoglu's Reconfigurable Computing Laboratory, is working on an FPGA system implementing CS reconstruction. The project is called FPGA Based Image Reconstruction.


Compressed Sensing (CS) theory promises a quantum leap forward in many areas of signal processing. One of its important application areas is medical imaging. However, extremely long reconstruction times make it both impractical for clinical use and problematic for further research. Configurable computers based on Field Programmable Gate Array (FPGA) hardware are capable of accelerating suitable applications by several orders of magnitude. Our objective is to develop a CPU-FPGA integrated engine tailored to the computational characteristics of CS reconstruction theory for medical image reconstruction. We expect that proposed tools will also be portable to other application areas of CS theory.

I would have thought that the main issue would be more on the encoding side of things, but this is interesting.

Credit: NASA/JPL/Space Science Institute, Electrical storm on Saturn as seen by Cassini on March 4, 2008.
