Wednesday, June 08, 2011

Compressive Sensing Literature this week Part 2

We have a strong set of entries today in compressive sensing and related fields. I'll come back to some of these elements later. Here they are in the meantime:

Jim Fowler just sent me the following conference paper: Multiscale Block Compressed Sensing with Smoothed Projected Landweber Reconstruction by James E. Fowler, S. Mun, and Eric Tramel. The abstract reads:
A multiscale variant of the block compressed sensing with smoothed projected Landweber reconstruction algorithm is proposed for the compressed sensing of images. In essence, block-based compressed-sensing sampling is deployed independently within each subband of each decomposition level of a wavelet transform of an image. The corresponding multiscale reconstruction interleaves Landweber steps on the individual blocks with a smoothing filter in the spatial domain of the image as well as thresholding within a sparsity transform. Experimental results reveal that the proposed multiscale reconstruction preserves the fast computation associated with block-based compressed sensing while rivaling the reconstruction quality of a popular total-variation algorithm known for both its high-quality reconstruction as well as its exceedingly large computational cost.

The source code is here. Thanks Jim.
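Since the abstract packs the whole algorithm into two sentences, here is a minimal single-channel sketch of one reconstruction pass, with a 2-D DCT standing in for the paper's wavelet subbands and a mean filter standing in for its Wiener-type smoother. All names and parameters are illustrative, not the code linked above:

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.fftpack import dct, idct

def spl_iteration(x, y_blocks, Phi, B, lam):
    """One smoothed-projected-Landweber pass (illustrative sketch only).

    x        : current image estimate (H x W), H and W multiples of B
    y_blocks : dict mapping block index (i, j) -> measurements y = Phi @ block
    Phi      : M x B^2 block sensing matrix (same matrix for every block here)
    lam      : threshold in the sparsity (here 2-D DCT) domain
    """
    # 1) Smoothing filter in the spatial domain (the paper uses a Wiener filter).
    x = uniform_filter(x, size=3)
    # 2) Thresholding within a sparsity transform (2-D DCT stands in for wavelets).
    c = dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)        # soft threshold
    x = idct(idct(c, axis=0, norm='ortho'), axis=1, norm='ortho')
    # 3) Landweber step on each block independently: x_b += Phi^T (y_b - Phi x_b).
    for (i, j), y in y_blocks.items():
        blk = x[i*B:(i+1)*B, j*B:(j+1)*B].ravel()
        blk = blk + Phi.T @ (y - Phi @ blk)
        x[i*B:(i+1)*B, j*B:(j+1)*B] = blk.reshape(B, B)
    return x
```

The interleaving is the point: the per-block Landweber steps keep the computation cheap, while the smoothing and thresholding steps couple the blocks together and suppress blocking artifacts.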

Since 2004, the field of compressed sensing has grown quickly and seen tremendous interest because it provides a theoretically sound and computationally tractable method to stably recover signals by sampling at the information rate. This thesis presents in detail the design of one of the world's first compressed sensing hardware devices, the random modulation pre-integrator (RMPI). The RMPI is an analog-to-digital converter (ADC) that bypasses a current limitation in ADC technology and achieves an unprecedented effective number of bits of 8 over a bandwidth of 2.5 GHz. Subtle but important design considerations are discussed, and state-of-the-art reconstruction techniques are presented. Inspired by the need for a fast method to solve reconstruction problems for the RMPI, we develop two efficient large-scale optimization methods, NESTA and TFOCS, that are applicable to a wide range of other problems, such as image denoising and deblurring, MRI reconstruction, and matrix completion (including the famous Netflix problem). While many algorithms solve unconstrained $\ell_1$ problems, NESTA and TFOCS can solve the constrained form of $\ell_1$ minimization, and allow weighted norms. In addition to $\ell_1$ minimization problems such as the LASSO, both NESTA and TFOCS solve total-variation minimization problems. TFOCS also solves the Dantzig selector and most variants of the nuclear norm minimization problem. A common theme in both NESTA and TFOCS is the use of smoothing techniques, which make the problem tractable, and the use of optimal first-order methods that have an accelerated convergence rate yet have the same cost per iteration as gradient descent. The conic dual methodology is introduced in TFOCS and proves to be extremely flexible, covering such generic problems as linear programming, quadratic programming, and semi-definite programming. A novel continuation scheme is presented, and it is shown that the Dantzig selector benefits from an exact-penalty property. Both NESTA and TFOCS are released as software packages available freely for academic use.
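For intuition on the smoothing-plus-acceleration theme at the heart of both solvers, here is a toy Nesterov-accelerated gradient loop on a smoothed $\ell_1$ objective. This is an unconstrained stand-in, not NESTA's actual constrained formulation or the TFOCS conic machinery, and every parameter is illustrative:

```python
import numpy as np

def smoothed_l1_grad(x, mu):
    """Gradient of the Huber/Nesterov smoothing of ||x||_1:
    entries behave like x/mu near zero and like sign(x) outside."""
    return np.clip(x / mu, -1.0, 1.0)

def accel_solve(A, b, lam=0.1, mu=1e-3, iters=500):
    """Accelerated gradient descent on  lam*||x||_1(smoothed) + 0.5*||Ax - b||^2,
    a simplified unconstrained surrogate for NESTA's constrained problem."""
    n = A.shape[1]
    x, z, t = np.zeros(n), np.zeros(n), 1.0
    # Lipschitz constant of the gradient: lam/mu from the smoothed l1 term,
    # plus the squared spectral norm of A from the data-fit term.
    L = lam / mu + np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        g = lam * smoothed_l1_grad(z, mu) + A.T @ (A @ z - b)
        x_new = z - g / L                                  # gradient step
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))         # Nesterov momentum
        z = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

The momentum step is what buys the accelerated $O(1/k^2)$ rate while keeping the per-iteration cost identical to plain gradient descent, exactly the trade the abstract describes.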



I think this is the first time I have seen this lemma, and I wonder how I can apply it directly to a specific problem where I know the eigenfunctions of the null space of an operator.

We consider a class of sparse learning problems in high dimensional feature space regularized by a structured sparsity-inducing norm which incorporates prior knowledge of the group structure of the features. Such problems often pose a considerable challenge to optimization algorithms due to the non-smoothness and non-separability of the regularization term. In this paper, we focus on two commonly adopted sparsity-inducing regularization terms, the overlapping group Lasso penalty ($\ell_1/\ell_2$-norm) and the $\ell_1/\ell_\infty$-norm. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As the core building-block of this framework, we develop new algorithms using an alternating partial-linearization/splitting technique, and we prove that the accelerated versions of these algorithms require $O(\frac{1}{\sqrt{\epsilon}})$ iterations to obtain an $\epsilon$-optimal solution. To demonstrate the efficiency and relevance of our algorithms, we test them on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms.
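The elementary operation hiding inside splitting frameworks like this one is the proximal operator of the group penalty. Here is a sketch of the non-overlapping $\ell_1/\ell_2$ case (block soft-thresholding); handling overlapping groups is precisely what the paper's alternating partial-linearization/splitting scheme is for:

```python
import numpy as np

def group_soft_threshold(x, groups, tau):
    """Proximal operator of  tau * sum_g ||x_g||_2  for non-overlapping groups.

    groups : list of index arrays partitioning range(len(x)).
    Each group is either zeroed out entirely or shrunk toward zero,
    which is how the l1/l2 norm induces group-level sparsity.
    """
    out = x.copy()
    for g in groups:
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= tau else (1.0 - tau / nrm) * x[g]
    return out

# Example: the first group survives (shrunk), the second is zeroed.
x = np.array([3.0, -4.0, 0.0, 0.1, 0.1, -0.1])
print(group_soft_threshold(x, [np.arange(3), np.arange(3, 6)], tau=1.0))
```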

Fast Multidimensional NMR Spectroscopy Using Compressed Sensing by Dr. Daniel J. Holland, Mark J. Bostock, Prof. Dr. Lynn F. Gladden, Dr. Daniel Nietlispach. The abstract reads:
Make it snappy! The use of compressed sensing to reconstruct multidimensional NMR spectra enables significant reductions in recording time. Thus, 3D HNCA (blue) and HN(CO)CA spectra (green) of sufficient quality for rapid protein-backbone assignment were reconstructed from only 16 % of the fully sampled data. The generality of the method and its robustness to noise should make it more broadly applicable, for example, to solid-state NMR spectroscopy.
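For readers new to the idea, here is a toy 1-D analogue of this kind of reconstruction: recover a sparse spectrum from roughly 16% of its time-domain samples with plain ISTA. Real multidimensional NMR processing deals with hypercomplex data and carefully chosen sampling schedules; everything below is a deliberately simplified illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
spec = np.zeros(N)
spec[[40, 90, 170]] = [1.0, 0.6, 0.8]             # toy sparse "spectrum"
fid = np.fft.ifft(spec)                           # time-domain signal (the FID)
keep = rng.choice(N, size=N // 6, replace=False)  # keep ~16% of the samples
y = fid[keep]

# ISTA on  0.5*||S ifft(x) - y||^2 + lam*||x||_1, where x is the spectrum.
lam, x = 1e-3, np.zeros(N, dtype=complex)
for _ in range(300):
    r = np.zeros(N, dtype=complex)
    r[keep] = np.fft.ifft(x)[keep] - y            # residual on sampled entries only
    x = x - np.fft.fft(r)                         # step along the (scaled) adjoint
    x = np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam, 0)  # soft-threshold
print(np.flatnonzero(np.abs(x) > 0.1))            # peaks recovered near 40, 90, 170
```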
Re-designing the camera for computational photography by Roarke Horstmeyer. The introduction reads:
Modified optical setups enable new optical imaging functionalities, including the ability to capture depth, varied angular perspectives, and multispectral content.


Iterative aperture mask design in phase space using a rank constraint by Roarke Horstmeyer, Se Baek Oh, and Ramesh Raskar. The abstract reads:
We present an iterative camera aperture design procedure, which determines an optimal mask pattern based on a sparse set of desired intensity distributions at different focal depths. This iterative method uses the ambiguity function as a tool to shape the camera’s response to defocus, and shares conceptual similarities with phase retrieval procedures. An analysis of algorithm convergence is presented, and experimental examples are shown to demonstrate the flexibility of the design process. This algorithm potentially ties together previous disjointed PSF design approaches under a common framework, and offers new insights for the creation of future application-specific imaging systems.
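To see the flavor of the method, here is a toy alternating-projection loop that bounces between a magnitude constraint on selected entries and a rank-1 constraint enforced by an SVD (a rank-1 mutual intensity corresponds to coherent light). The actual algorithm iterates in the ambiguity-function domain with propagation between focal planes; this sketch keeps only the two projections, and all names are illustrative:

```python
import numpy as np

def rank1_project(M):
    """Nearest rank-1 matrix in Frobenius norm, via the leading singular pair."""
    U, s, Vh = np.linalg.svd(M)
    return s[0] * np.outer(U[:, 0], Vh[0])

def alternating_design(targets, n, iters=200, seed=0):
    """Toy loop in the spirit of the paper's phase-retrieval-like iteration.

    targets : dict mapping (i, j) -> desired |M[i, j]|  (all values illustrative)
    """
    rng = np.random.default_rng(seed)
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    for _ in range(iters):
        for (i, j), mag in targets.items():   # enforce desired magnitudes,
            phase = np.angle(M[i, j])         # keeping the current phase
            M[i, j] = mag * np.exp(1j * phase)
        M = rank1_project(M)                  # restore the rank constraint
    return M
```

As in classical phase retrieval, neither constraint set is convex, so convergence is the delicate part; that is exactly the analysis the abstract mentions.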


The ambiguity function (AF) provides a convenient way to model how a camera with a modified aperture responds to defocus. We use the AF to design an optimal aperture distribution, which creates a depth-variant point spread function (PSF) from a sparse set of desired intensity patterns at different focal depths. Prior knowledge of the coherence state of the light is used to constrain the optimization in the mutual intensity domain. We use an assumption of spatially coherent light to design a fixed-pattern aperture mask. The concept of a dynamic aperture mask that displays several aperture patterns during one image exposure is also suggested, which is modeled under an assumption of partially coherent light. Parallels are drawn between the optimal aperture functions for this dynamic mask and the eigenmodes of a coherent mode decomposition. We demonstrate how the space of design for a 3D intensity distribution of light using partially coherent assumptions is less constrained than under coherent light assumptions.
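The coherent mode decomposition mentioned at the end is just the eigendecomposition of the (Hermitian, positive semidefinite) mutual intensity: each eigenvector is one fully coherent mode, and its eigenvalue weights how much power that mode carries. A minimal numerical illustration on synthetic data:

```python
import numpy as np

# Build a toy partially coherent source as an average over 5 random fields,
# giving a Hermitian PSD mutual intensity matrix J of rank at most 5.
rng = np.random.default_rng(1)
fields = rng.standard_normal((5, 64)) + 1j * rng.standard_normal((5, 64))
J = fields.conj().T @ fields / 5

lam, modes = np.linalg.eigh(J)             # eigenvalues in ascending order
lam, modes = lam[::-1], modes[:, ::-1]     # strongest coherent modes first
print("power in leading 5 modes:", lam[:5].sum() / lam.sum())  # ~1.0 here
```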


Highlighted Depth-of-Field Photography: Shining Light on Focus by Jaewon Kim, Roarke Horstmeyer, Ig-Jae Kim and Ramesh Raskar. The abstract reads:
We present a photographic method to enhance intensity differences between objects at varying distances from the focal plane. By combining a unique capture procedure with simple image processing techniques, the detected brightness of an object is decreased proportional to its degree of defocus. A camera-projector system casts distinct grid patterns onto a scene to generate a spatial distribution of point reflections. These point reflections relay a relative measure of defocus that is utilized in postprocessing to generate a highlighted DOF photograph. Trade-offs between three different projector-processing pairs are analyzed, and a model is developed to help describe a new intensity-dependent depth of field that is controlled by the pattern of illumination. Results are presented for a primary single snapshot design as well as a scanning method and a comparison method. As an application, automatic matting results are presented.
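As a loose analogy to the paper's effect, the sketch below dims pixels according to a defocus proxy. Local Laplacian energy stands in for the measure the authors extract from projected point reflections, so this is illustrative only, not their capture pipeline:

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def highlight_dof(image, sigma=5.0):
    """Attenuate pixel brightness by a local defocus proxy (illustrative).

    In-focus regions have strong high-frequency content, so their Laplacian
    energy is high and they keep their brightness; defocused regions are
    smooth, score low, and get dimmed.
    """
    sharpness = gaussian_filter(np.abs(laplace(image)), sigma)  # local focus measure
    weight = sharpness / (sharpness.max() + 1e-12)              # 1.0 = most in focus
    return image * weight
```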
