Following up on yesterday's 5D camera entry, here are the paper and video introducing the subject: Looking Around the Corner using Transient Imaging by Ahmed Kirmani, Tyler Hutchison, James Davis, and Ramesh Raskar. The abstract reads:
We show that multi-path analysis using images from a time-of-flight (ToF) camera provides a tantalizing opportunity to infer about 3D geometry of not only visible but hidden parts of a scene. We provide a novel framework for reconstructing scene geometry from a single viewpoint using a camera that captures a 3D time-image I(x, y, t) for each pixel. We propose a framework that uses the time-image and transient reasoning to expose scene properties that may be beyond the reach of traditional computer vision. We corroborate our theory with free-space hardware experiments using a femtosecond laser and an ultrafast photodetector array. The ability to compute the geometry of hidden elements, unobservable by both the camera and illumination source, will create a range of new computer vision opportunities.
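To make the transient-imaging idea concrete (my gloss, with notation introduced here rather than taken from the paper): a third-bounce return recorded at time t, for a known laser spot w on a visible wall and an observed wall patch v, pins any hidden point h to an ellipsoid with foci w and v,

```latex
c\,t \;=\; \lVert x_\ell - w \rVert + \lVert w - h \rVert + \lVert h - v \rVert + \lVert v - x_c \rVert
\;\;\Longrightarrow\;\;
\lVert w - h \rVert + \lVert h - v \rVert \;=\; c\,t - (\text{known laser and camera legs}),
```

so intersecting ellipsoids from many (w, v, t) combinations constrains the hidden geometry.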
The following ghost-imaging abstract is also relevant here: We demonstrate depth-resolved computational ghost imaging using a single, single-pixel detector and a spatially modulated beam. We further develop an advanced reconstruction algorithm based on compressive sensing, demonstrating a 10-fold reduction in image acquisition time.
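To see why a single detector can image at all, here is a minimal computational ghost-imaging toy in Python; the sizes, the random patterns, and the plain correlation decoder below are my illustrative assumptions, not the paper's compressive-sensing algorithm (which is precisely what buys the 10-fold reduction over this kind of correlation scheme):

```python
import numpy as np

# Computational ghost imaging, toy version: each shot projects a known
# random pattern onto the object T and a single-pixel "bucket" detector
# records the total transmitted light y_i = <pattern_i, T>.
rng = np.random.default_rng(4)
npix, nshots = 32 * 32, 4096
T = np.zeros(npix)
T[100:140] = 1.0                                # simple 1D stand-in object
patterns = rng.standard_normal((nshots, npix))  # known illumination patterns
y = patterns @ T                                # bucket detector readings

# Classical reconstruction: correlate the bucket signal with the known
# patterns. No quantum ingredient is needed, which is the point made below.
G = (patterns * (y - y.mean())[:, None]).mean(axis=0)
print("correlation with ground truth:", round(float(np.corrcoef(G, T)[0, 1]), 3))
```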
What I gathered from Ori's presentation was that compressed sensing could have entirely settled a sterile debate among specialists as to whether ghost imaging is a purely quantum effect (it is not, since CS provides a classical explanation of the experiment). A good half of the remaining papers presented below are behind a paywall; enjoy:
Lagrange Programming Neural Networks for Compressive Sampling by Ping-Man Lam, Chi Sing Leung, John Sum and A. G. Constantinides. The abstract reads:
Compressive sampling is a sampling technique for sparse signals. The advantage of compressive sampling is that signals are compactly represented by a small number of measured values. This paper adopts an analog neural network technique, Lagrange programming neural networks (LPNNs), to recover data in compressive sampling. We propose LPNN dynamics to handle three scenarios in compressive sampling: the standard recovery of a sparse signal, the recovery of a non-sparse signal, and noisy measurement values. Simulation examples demonstrate that our approach effectively recovers the signals from the measured values in both noise-free and noisy environments.
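As a rough illustration of what Lagrange-programming dynamics look like for the standard sparse-recovery scenario, here is a toy Euler-discretized saddle-point flow for basis pursuit in Python; the smoothed l1 term, the augmented penalty, and all parameter values are my assumptions, not the dynamics proposed in the paper:

```python
import numpy as np

# Basis pursuit, min ||x||_1 s.t. Ax = b, attacked with LPNN-style
# gradient descent-ascent on an augmented Lagrangian. The l1 norm is
# smoothed as sum(sqrt(x^2 + eps)) so the flow has a gradient everywhere.
rng = np.random.default_rng(0)
n, m, k = 50, 200, 5                      # measurements, signal length, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

x, lam = np.zeros(m), np.zeros(n)
eps, c, step = 1e-4, 5.0, 1e-2            # illustrative constants only
for _ in range(20000):
    residual = A @ x - b
    grad_x = x / np.sqrt(x**2 + eps) + A.T @ (lam + c * residual)
    x = x - step * grad_x                 # descend in the primal variable
    lam = lam + step * (A @ x - b)        # ascend in the multipliers

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```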
The Sample Complexity of Dictionary Learning by Daniel Vainsencher, Shie Mannor, Alfred M. Bruckstein. The abstract reads:
A large set of signals can sometimes be described sparsely using a dictionary, that is, every element can be represented as a linear combination of a few elements from the dictionary. Algorithms for various signal processing applications, including classification, denoising and signal separation, learn a dictionary from a set of signals to be represented. Can we expect that the representation found by such a dictionary for a previously unseen example from the same source will have L_2 error of the same magnitude as those for the given examples? We assume signals are generated from a fixed distribution, and study this question from a statistical learning theory perspective. We develop generalization bounds on the quality of the learned dictionary for two types of constraints on the coefficient selection, as measured by the expected L_2 error in representation when the dictionary is used. For the case of l_1 regularized coefficient selection we provide a generalization bound of the order of O(sqrt(np log(m lambda)/m)), where n is the dimension, p is the number of elements in the dictionary, lambda is a bound on the l_1 norm of the coefficient vector and m is the number of samples, which complements existing results. For the case of representing a new signal as a combination of at most k dictionary elements, we provide a bound of the order O(sqrt(np log(m k)/m)) under an assumption on the level of orthogonality of the dictionary (low Babel function). We further show that this assumption holds for most dictionaries in high dimensions in a strong probabilistic sense. Our results further yield fast rates of order 1/m as opposed to 1/sqrt(m) using localized Rademacher complexity. We provide similar results in a general setting using kernels with weak smoothness requirements.
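For easier comparison, here are the two rates from the abstract in display math (n the dimension, p the dictionary size, m the number of samples, lambda the bound on the l_1 norm of the coefficients, k the sparsity level):

```latex
O\!\left(\sqrt{\frac{np\,\log(m\lambda)}{m}}\right) \quad (\ell_1\text{-regularized selection}),
\qquad
O\!\left(\sqrt{\frac{np\,\log(mk)}{m}}\right) \quad (k\text{-sparse selection, low Babel function}).
```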
Toward a near real-time system for event hypocenter and source mechanism recovery via compressive sensing by Ismael Vera Rodriguez, Mauricio D. Sacchi, and Yu J. Gu. The abstract reads:
The simultaneous inversion for source location and mechanism of microseismic events is a significant problem in the fields of applied geophysics and petroleum engineering. Methodologies for real-time source moment tensor estimation may become of significant importance for the monitoring of hydraulic fractures. We examine an algorithm for a quasi-real-time implementation of source location estimation and seismic moment tensor inversion of microseismic events. The algorithm requires the inversion of a dictionary of Green functions containing the multi-component response of each potential seismic source in the subsurface. The problem entails the inversion of a prohibitively large matrix that, at first glance, does not appear amenable to a real-time implementation. Compressive sensing, however, can be used to reduce the size of the dictionary of Green functions by up to 90% of its original size, making a near real-time execution computationally tractable. The algorithm is validated with a small-magnitude earthquake (18 June 2002, Caborn, Indiana) with a well-studied source mechanism and hypocenter location.
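A toy sketch in Python of the size-reduction idea: compress the tall dictionary of Green functions and the observed data with the same random projection, then solve the much smaller fitting problem. The dictionary, the dimensions, and the one-term matching-pursuit decoder are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Stand-in dictionary: rows are time samples, columns are candidate
# (location, mechanism) pairs; a real one would hold Green functions.
rng = np.random.default_rng(1)
t_samples, n_sources = 2000, 500
G = rng.standard_normal((t_samples, n_sources))
m_true = np.zeros(n_sources)
m_true[123] = 2.0                                       # one active source
d = G @ m_true + 0.01 * rng.standard_normal(t_samples)  # observed records

# Random projection shrinks the system by ~90%, as in the abstract.
keep = t_samples // 10
Phi = rng.standard_normal((keep, t_samples)) / np.sqrt(keep)
Gc, dc = Phi @ G, Phi @ d                               # compressed problem

# One-term matching pursuit: pick the best-correlated source, fit amplitude.
idx = int(np.argmax(np.abs(Gc.T @ dc)))
amp = float(Gc[:, idx] @ dc) / float(Gc[:, idx] @ Gc[:, idx])
print("recovered source index:", idx, "amplitude:", round(amp, 2))
```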
Lensfree color imaging on a nanostructured chip using compressive decoding by Bahar Khademhosseinieh, Gabriel Biener, Ikbal Sencan, and Aydogan Ozcan. The abstract reads:
We demonstrate subpixel level color imaging capability on a lensfree incoherent on-chip microscopy platform. By using a nanostructured substrate, the incoherent emission from the object plane is modulated to create a unique far-field diffraction pattern corresponding to each point at the object plane. These lensfree diffraction patterns are then sampled in the far-field using a color sensor-array, where the pixels have three different types of color filters at red, green, and blue (RGB) wavelengths. The recorded RGB diffraction patterns (for each point on the structured substrate) form a basis that can be used to rapidly reconstruct any arbitrary multicolor incoherent object distribution at subpixel resolution, using a compressive sampling algorithm. This lensfree computational imaging platform could be quite useful to create a compact fluorescent on-chip microscope that has color imaging capability.
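A minimal sketch of what the compressive decoding step could look like, assuming the calibrated diffraction patterns are stacked as columns of a sensing matrix; the ISTA decoder and every parameter below are my choices for illustration, not the authors' algorithm:

```python
import numpy as np

# Recover a sparse object distribution x from a recorded pattern
# y = Psi @ x, where the columns of Psi play the role of the calibrated
# per-point diffraction patterns. Psi is random here for simplicity.
rng = np.random.default_rng(2)
n_pix, n_pts = 300, 1000
Psi = rng.standard_normal((n_pix, n_pts)) / np.sqrt(n_pix)
x_true = np.zeros(n_pts)
x_true[rng.choice(n_pts, 8, replace=False)] = 1.0   # few bright emitters
y = Psi @ x_true

# ISTA: gradient step on the data fit, then soft-threshold for sparsity.
x = np.zeros(n_pts)
L = np.linalg.norm(Psi, 2) ** 2                     # Lipschitz constant
tau = 0.01                                          # sparsity weight
for _ in range(500):
    z = x - (Psi.T @ (Psi @ x - y)) / L
    x = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)

found = set(np.flatnonzero(np.abs(x) > 0.1))
print("true support recovered:", set(np.flatnonzero(x_true)) <= found)
```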
Generalized Tree-Based Wavelet Transform by Idan Ram, Michael Elad, and Israel Cohen. The abstract reads:
In this paper we propose a new wavelet transform applicable to functions defined on graphs, high-dimensional data and networks. The proposed method generalizes the Haar-like transform proposed in [1], and it is similarly defined via a hierarchical tree, which is assumed to capture the geometry and structure of the input data. It is applied to the data using a multiscale filtering and decimation scheme, which can employ different wavelet filters. We propose a tree construction method which results in efficient representation of the input function in the transform domain. We show that the proposed transform is more efficient than both the one-dimensional (1D) and two-dimensional (2D) separable wavelet transforms in representing images. We also explore the application of the proposed transform to image denoising, and show that combined with a subimage averaging scheme, it achieves denoising results which are similar to the ones obtained with the K-SVD algorithm.
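To convey the flavor of a tree-induced transform, here is a toy Python sketch: order scattered data points so that the function is smooth along the ordering, then apply an ordinary 1D Haar transform along it. The greedy nearest-neighbour ordering below merely stands in for the paper's tree construction and is purely illustrative:

```python
import numpy as np

def haar_level(v):
    """One level of the orthonormal Haar transform (averages, details)."""
    return (v[0::2] + v[1::2]) / np.sqrt(2), (v[0::2] - v[1::2]) / np.sqrt(2)

rng = np.random.default_rng(3)
pts = rng.random((64, 2))                           # scattered sites in the plane
f = np.sin(4 * pts[:, 0]) + np.cos(3 * pts[:, 1])   # smooth function on them

# Greedy nearest-neighbour walk: a cheap proxy for a geometry-aware tree.
order, left = [0], set(range(1, len(pts)))
while left:
    last = order[-1]
    nxt = min(left, key=lambda j: float(np.sum((pts[j] - pts[last]) ** 2)))
    order.append(nxt)
    left.remove(nxt)

v, details = f[np.array(order)], []
while len(v) > 1:                                   # full multiscale Haar cascade
    v, d = haar_level(v)
    details.append(d)
coeffs = np.concatenate([v] + details[::-1])
print("coefficients above 1e-2:", int(np.sum(np.abs(coeffs) > 1e-2)), "of", coeffs.size)
```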
Derivative Compressive Sampling with Application to phase unwrapping by Mahdi S. Hosseini and Oleg V. Michailovich. The abstract reads:
Reconstruction of multidimensional signals from the samples of their partial derivatives is known to be an important problem in imaging sciences, with fields of application including optics, interferometry, computer vision, and remote sensing, to name a few. Due to the nature of the derivative operator, the above reconstruction problem is generally regarded as ill-posed, which suggests the necessity of using some a priori constraints to render its solution unique and stable. The ill-posed nature of the problem, however, becomes much more conspicuous when the set of data derivatives happens to be incomplete. In this case, a plausible solution to the problem seems to be provided by the theory of compressive sampling, which looks for solutions that fit the measurements on one hand, and have the sparsest possible representation in a predefined basis on the other. One of the most important questions to be addressed in such a case is how incomplete the data can be while the reconstruction remains useful. With this question in mind, the present note proposes a way to augment the standard constraints of compressive sampling with additional constraints related to some natural properties of the partial derivatives. It is shown that the resulting scheme of derivative compressive sampling (DCS) is capable of reliably recovering the signals of interest from far fewer data samples than standard CS. As an example application, the problem of phase unwrapping is discussed.
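My reading of the augmented constraints, in display form: recover the two partial-derivative fields from incomplete samples while enforcing that they form an integrable (curl-free) gradient, which is one natural property a true gradient must satisfy. The notation, and whether this is exactly the authors' constraint set, are my assumptions:

```latex
\min_{g_x,\,g_y} \; \lVert \Psi g_x \rVert_1 + \lVert \Psi g_y \rVert_1
\quad \text{s.t.} \quad
M_x g_x = b_x, \;\; M_y g_y = b_y, \;\;
\partial_y g_x = \partial_x g_y ,
```

where M_x, M_y select the available derivative samples b_x, b_y and Psi is a sparsifying basis.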
Compressive sensing: From "Compressing while Sampling" to "Compressing and Securing while Sampling" by Amir Abdulghani and Esther Rodriguez-Villegas. The abstract reads:
In a traditional signal processing system, sampling is carried out at a frequency that is at least twice the highest frequency component found in the signal. This guarantees that complete signal recovery is possible later on. The sampled signal can subsequently be subjected to further processing leading to, for example, encryption and compression. This processing can be computationally intensive and, in the case of battery-operated systems, impractically power-hungry. Compressive sensing has recently emerged as a new signal sampling paradigm gaining huge attention from the research community. According to this theory, it may be possible to sample certain signals at a lower-than-Nyquist rate without jeopardizing signal recovery. In practical terms this may provide multi-pronged solutions to reduce a system's computational complexity. In this work, an information-theoretic analysis of real EEG signals is presented that shows the additional benefit of compressive sensing in preserving data privacy. Through this it can then be established that, generally, compressive sensing not only compresses but also secures while sampling.
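The intuition can be sketched in a few lines of Python: the random sensing matrix doubles as a shared secret, so only a party who can regenerate it can even pose the recovery problem. The seed-based construction below is my illustration, not a protocol from the paper:

```python
import numpy as np

def sense(x, seed, n_meas):
    """Compressively sample x with a matrix regenerated from a secret seed."""
    Phi = np.random.default_rng(seed).standard_normal((n_meas, x.size))
    return Phi @ x

secret_seed = 42
x = np.zeros(256)
x[[10, 99, 200]] = [1.0, -2.0, 0.5]      # sparse stand-in for an EEG frame
y = sense(x, secret_seed, 64)            # the only thing transmitted

# The legitimate receiver rebuilds Phi from the shared seed and can hand
# (Phi, y) to any standard CS decoder; an eavesdropper holding y alone
# cannot even set the recovery problem up.
Phi = np.random.default_rng(secret_seed).standard_normal((64, x.size))
print("receiver can pose the decoding problem:", Phi.shape, y.shape)
```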
Comparison and analysis of nonlinear algorithms for compressed sensing in MRI by Y. Yu, M. Hong, F. Liu, H. Wang, and S. Crozier. The abstract reads:
Compressed sensing (CS) theory has recently been applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In the CS implementation, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for an optimal CS algorithm selection in practical MRI applications. A systematic and comparative study of the commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely the Gradient Projection for Sparse Reconstruction (GPSR) algorithm, the interior-point algorithm (l1_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm, are compared and investigated in three different imaging scenarios: brain, angiogram, and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The theoretical results show that the performance of the CS algorithms is case sensitive; overall, the StOMP algorithm offers the best solution in imaging quality, while the GPSR algorithm is the most efficient of the three methods. In the next step, the algorithm performances and characteristics will be experimentally explored. It is hoped that this research will further support the applications of CS in MRI.
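For reference, all three decoders target (variants of) the usual unconstrained CS-MRI formulation; this is the standard textbook form, not a formula quoted from the paper:

```latex
\hat{x} \;=\; \arg\min_{x} \; \tfrac{1}{2} \lVert F_u x - y \rVert_2^2 \;+\; \lambda \lVert \Psi x \rVert_1 ,
```

where F_u is the undersampled Fourier (k-space) operator, y the acquired k-space data, and Psi a sparsifying transform such as a wavelet.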
Bounds from a Card Trick by Travis Gagie. The abstract reads:
We describe a new variation of a mathematical card trick, whose analysis leads to new lower bounds for data compression and for estimating the entropy of Markov sources.
Giuseppe Paleologo pointed me to this paper: Trace Optimization and Eigenproblems in Dimension Reduction Methods by Effrosyni Kokiopoulou, J. Chen and Yousef Saad. The abstract reads:
This paper gives an overview of the eigenvalue problems encountered in areas of data mining that are related to dimension reduction. Given some input high-dimensional data, the goal of dimension reduction is to map them to a low-dimensional space such that certain properties of the initial data are preserved. Optimizing the above properties among the reduced data can be typically posed as a trace optimization problem that leads to an eigenvalue problem. There is a rich variety of such problems and the goal of this paper is to unravel relations between them as well as to discuss effective solution techniques. First, we make a distinction between projective methods that determine an explicit linear projection from the high-dimensional space to the low-dimensional space, and nonlinear methods where the mapping between the two is nonlinear and implicit. Then, we show that all of the eigenvalue problems solved in the context of explicit projections can be viewed as the projected analogues of the so-called nonlinear or implicit projections. We also discuss kernels as a means of unifying both types of methods and revisit some of the equivalences between methods established in this way. Finally, we provide some illustrative examples to showcase the behavior and the particular characteristics of the various dimension reduction methods on real-world data sets.
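The prototypical problem behind the survey's title, in display form (a standard fact, my addition): maximizing a trace over orthonormal bases is an eigenvalue problem, by Ky Fan's theorem,

```latex
\max_{V \in \mathbb{R}^{n \times d},\; V^{\top} V = I_d} \operatorname{Tr}\!\left(V^{\top} A V\right)
\;=\; \sum_{i=1}^{d} \lambda_i(A),
```

attained when the columns of V span the d leading eigenvectors of the symmetric matrix A.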
I could not help but recall a previous entry on a similar subject: Trace: What is it good for ? - How Compressed Sensing relates to Dimensionality Reduction
Finally, I found the following announcement for the LMS Invited Lecturer 2011:
Emmanuel Candes, Stanford
Centre for Mathematical Sciences, Cambridge, March 21-25, 2011
Emmanuel Candes will give an eight-lecture minicourse, at a level suitable for graduate students, on Compressed Sensing. This is a subject very much at the interface of pure and applied mathematics and the lectures should interest a wide audience.
There will also be one-hour lectures by: Anders Hansen (Cambridge), Vincent Rivoirard (Paris-Dauphine), Carola Schoenlieb (Cambridge) and Jared Tanner (Edinburgh).
College accommodation will be available. Funding is available for research students from UK universities, and a limited amount of funding is available for others. Please email lms2011@maths.cam.ac.uk for more details.