
Wednesday, January 28, 2009

CS: Compressive dual photography, Image reconstruction by deterministic CS, Nesterov algorithm

I found the following paper at the Rice repository:


Pradeep Sen and Soheil Darabi, Compressive dual photography. (Computer Graphics Forum, March 2009)

The accurate measurement of the light transport characteristics of a complex scene is an important goal in computer graphics and has applications in relighting and dual photography. However, since the light transport data sets are typically very large, much of the previous research has focused on adaptive algorithms that capture them efficiently. In this work, we propose a novel, non-adaptive algorithm that takes advantage of the compressibility of the light transport signal in a transform domain to capture it with fewer acquisitions than with standard approaches. To do this, we leverage recent work in the area of compressed sensing, where a signal is reconstructed from a few samples assuming that it is sparse in a transform domain. We demonstrate our approach by performing dual photography and relighting by using a much smaller number of acquisitions than would normally be needed. Because our algorithm is not adaptive, it is also simpler to implement than many of the current approaches.
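For newcomers, the recipe alluded to in this abstract (non-adaptive linear measurements of a signal that is sparse in some transform domain, followed by a sparsity-promoting reconstruction) can be sketched in a few lines. The Python/NumPy snippet below is only a toy illustration on a synthetic sparse vector, with a Gaussian measurement matrix and plain iterative soft-thresholding (ISTA); it is not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy k-sparse signal of length n (sparse directly in the canonical basis for simplicity).
n, m, k = 1024, 200, 20
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Non-adaptive acquisition: a fixed measurement matrix chosen before looking at x.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x                                   # m measurements instead of n samples

# Sparse recovery by iterative soft-thresholding (ISTA) for lam*||z||_1 + 0.5*||Phi z - y||^2.
L = np.linalg.norm(Phi, 2) ** 2               # Lipschitz constant of the quadratic term's gradient
lam = 0.01
z = np.zeros(n)
for _ in range(500):
    z = z - Phi.T @ (Phi @ z - y) / L                          # gradient step
    z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)      # soft-thresholding (prox of l1)

print("relative reconstruction error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```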

This paper looks like an extension of the illumination work performed earlier at Rice and the University of Arizona. I will add this project to the Compressive Sensing Hardware page. The following file is in PostScript and was found thanks to the Google :-)



Image reconstruction by deterministic compressive sensing by Kangyu Ni, Somantika Datta, Svetlana Roudenko, Douglas Cochran. The abstract reads:
A recently proposed approach for compressive sensing with deterministic measurement matrices is applied to images that possess varying degrees of sparsity in their wavelet representations. The use of these deterministic measurement matrices is found to be approximately as effective as the use of Gaussian random matrices in terms of image reconstruction fidelity. The "fast reconstruction" algorithm enabled by this deterministic sampling scheme produces accurate results, but its speed is hampered when the degree of sparsity is not sufficiently high.
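For readers who want to play with the idea of deterministic measurement matrices, one well-known deterministic construction is the chirp sensing matrix; whether it matches the exact construction and the dedicated fast reconstruction algorithm of this paper is not checked here. The sketch below simply compares such a deterministic matrix with a Gaussian one on a toy sparse signal, using plain orthogonal matching pursuit for both.

```python
import numpy as np

def chirp_matrix(K):
    # K x K^2 deterministic matrix whose columns are the discrete chirps
    # exp(2*pi*i*(r*t^2 + b*t)/K) over all chirp rates r and base frequencies b (K prime).
    t = np.arange(K)
    cols = [np.exp(2j * np.pi * (r * t**2 + b * t) / K) / np.sqrt(K)
            for r in range(K) for b in range(K)]
    return np.stack(cols, axis=1)

def omp(Phi, y, k):
    # Plain orthogonal matching pursuit: greedily select k columns of Phi.
    residual, support = y.astype(complex), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.conj().T @ residual))))
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(Phi.shape[1], dtype=complex)
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(1)
K, k = 43, 3                                  # K prime; k-sparse signal of length K^2
x = np.zeros(K * K, dtype=complex)
x[rng.choice(K * K, k, replace=False)] = rng.standard_normal(k)

for name, Phi in [("deterministic (chirp)", chirp_matrix(K)),
                  ("random (Gaussian)", rng.standard_normal((K, K * K)) / np.sqrt(K))]:
    x_hat = omp(Phi, Phi @ x, k)
    print(name, "relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```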

Finally, as Gabriel Peyre mentioned to me, it looks like the algorithm developed by Pierre Weiss using the Nesterov scheme is fast. So here is Pierre Weiss' thesis, entitled Fast algorithms for convex optimization. Applications to image reconstruction and change detection. The abstract reads:

This PhD thesis contains contributions in numerical analysis and in computer vision. In the first part, we focus on the fast solution of convex optimization problems using first-order methods. Such problems appear naturally in many image processing tasks like image reconstruction, compressed sensing, or texture+cartoon decomposition. They are generally non-differentiable or ill-conditioned. We show that they can be solved very efficiently by exploiting fine properties of the functions to be minimized. We systematically analyze their convergence rates using recent results due to Y. Nesterov. To our knowledge, the proposed methods represent the state of the art among first-order methods. In the second part, we focus on the problem of change detection between two remotely sensed images of the same location taken at two different times. One of the main difficulties in solving this problem is the difference in illumination conditions between the two shots. This leads us to study the illumination invariance of level lines. We completely characterize the 3D scenes which produce invariant level lines and show that they correspond quite well to urban scenes. We then propose a variational framework and a simple change detection algorithm which gives satisfactory results on both synthetic OpenGL scenes and real Quickbird images.
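For context, the Nesterov acceleration mentioned above can be illustrated on the same kind of l1-regularized least-squares problem as in the first sketch. The snippet below is a generic accelerated proximal gradient iteration (FISTA-style), one incarnation of Nesterov's acceleration, and is not Pierre Weiss' algorithm, which handles more general non-differentiable and ill-conditioned problems; the momentum step is what improves the O(1/t) objective rate of plain ISTA to O(1/t^2).

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 1024, 200, 20                       # same toy problem as in the first sketch
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

L = np.linalg.norm(Phi, 2) ** 2               # Lipschitz constant of the smooth part's gradient
lam = 0.01

def soft(v, t):                               # prox of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Nesterov-accelerated proximal gradient (FISTA-style) for lam*||z||_1 + 0.5*||Phi z - y||^2.
z = w = np.zeros(n)
theta = 1.0
for _ in range(200):
    z_new = soft(w - Phi.T @ (Phi @ w - y) / L, lam / L)       # gradient + prox step at w
    theta_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * theta**2))
    w = z_new + ((theta - 1.0) / theta_new) * (z_new - z)      # momentum extrapolation
    z, theta = z_new, theta_new

print("relative reconstruction error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```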


Credit: NASA, Opportunity, navigation camera, sol 1776.
