I asked the Numerical Analysis mailing list about the name of the algorithm featured on Monday (Monday Morning Algorithm 17). I was surprised to hear back that it looks like what is called "finite difference" (different from the technique used to discretize PDEs) or "sparse finite differencing". Except that in these techniques, which date back to the early 1970s, the sparsity pattern needs to be known. In our case that sparsity pattern is obviously not known, which makes the algorithm all the more interesting. More on that later...
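In the meantime, for readers who missed Monday's entry, here is a back-of-the-envelope Python sketch of the idea: estimating a gradient that happens to be sparse, without knowing its sparsity pattern, from a few random directional finite differences. The toy function, the probe count and the little OMP solver below are my choices for illustration, not Monday's exact code:

import numpy as np

# Toy function whose gradient is sparse: only coordinates 2, 17 and 42 matter.
def f(x):
    return 3.0 * x[2] + 0.5 * x[17] ** 2 - 2.0 * x[42]

n, m, h = 100, 25, 1e-6                    # ambient dimension, probes, FD step
rng = np.random.default_rng(0)
x0 = rng.standard_normal(n)

# Each probe is a finite difference along a random direction a, so that
# y ~= a . grad f(x0): a compressed-sensing measurement of the gradient.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = np.array([(f(x0 + h * a) - f(x0)) / h for a in A])

# Recover the sparse gradient with orthogonal matching pursuit (any sparse
# solver would do; OMP keeps the sketch dependency-free).
def omp(A, y, k):
    resid, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        resid = y - A[:, idx] @ coef
    g = np.zeros(A.shape[1])
    g[idx] = coef
    return g

print(np.nonzero(np.round(omp(A, y, k=3), 3))[0])   # expect [ 2 17 42]

Note that nothing here ever looks at the sparsity pattern: the random directions are chosen blindly, which is exactly what the classical sparse finite differencing literature does not allow.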
Today, we have two preprints from arXiv, a thesis, and two papers:
Efficient estimation of many-body quantum Hamiltonians via random measurements by Alireza Shabani, Masoud Mohseni, Seth Lloyd, Robert Kosut, Herschel Rabitz. The abstract reads:
Complete information about the Hamiltonian of a quantum system is sufficient to predict its future behavior with arbitrary accuracy. There is presently no method for efficient estimation of Hamiltonian parameters due to the exponentially rising number of the required experimental configurations with respect to the system degrees of freedom and inherent nonlinear dependence of measurement outcomes on the Hamiltonian parameters. Here, we develop a compressed sensing method for scalable Hamiltonian identification of multipartite quantum systems. We demonstrate that with only O(s log(d)) random measurement settings one can estimate the Hamiltonian of a d-dimensional system, provided that the Hamiltonian is known to be almost s-sparse in a basis. This approach is robust to noise and experimental imperfections in data extraction. We numerically simulate the performance of this algorithm for three- and four-body interactions in spin-coupled quantum dots and atoms in optical lattices. Furthermore, we apply the algorithm to characterize Hamiltonian fine structure and system-bath couplings in open quantum systems.
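To get a feel for the idea without setting up a quantum experiment, here is a toy Python sketch along the lines of the abstract: a two-qubit Hamiltonian that is sparse in the Pauli basis, estimated from a handful of random expectation-value measurements via basis pursuit. The measurement model and the LP solver are my simplifications, not the authors' protocol:

import numpy as np
from itertools import product
from scipy.optimize import linprog

# Pauli basis for two qubits: 16 Hermitian matrices, so d = 4 here.
I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]]); Z = np.diag([1., -1.])
basis = [np.kron(P, Q) for P, Q in product([I2, X, Y, Z], repeat=2)]

# An s-sparse "unknown" Hamiltonian: three nonzero Pauli coefficients.
c_true = np.zeros(16); c_true[[1, 6, 12]] = [0.8, -0.5, 0.3]
H = sum(c * B for c, B in zip(c_true, basis))

# Each "measurement setting": the expectation of H in a random pure state,
# which is linear in the unknown coefficients, y_k = sum_j c_j Tr(rho_k B_j).
rng = np.random.default_rng(1)
m = 12
M, y = np.empty((m, 16)), np.empty(m)
for k in range(m):
    psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    M[k] = [np.trace(rho @ B).real for B in basis]
    y[k] = np.trace(rho @ H).real          # what the experiment records

# Basis pursuit: min ||c||_1 subject to M c = y, as a linear program.
res = linprog(np.ones(32), A_eq=np.hstack([M, -M]), b_eq=y,
              bounds=[(0, None)] * 32)
c_hat = res.x[:16] - res.x[16:]
print(np.round(c_hat, 3))                  # peaks at entries 1, 6 and 12

Twelve settings for a 16-dimensional parameter space is the point: the count scales with the sparsity s and log(d), not with the full dimension.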
Manifold-Based Signal Recovery and Parameter Estimation from Compressive Measurements by Michael B. Wakin. The abstract reads:
A field known as Compressive Sensing (CS) has recently emerged to help address the growing challenges of capturing and processing high-dimensional signals and data sets. CS exploits the surprising fact that the information contained in a sparse signal can be preserved in a small number of compressive (or random) linear measurements of that signal. Strong theoretical guarantees have been established on the accuracy to which sparse or near-sparse signals can be recovered from noisy compressive measurements. In this paper, we address similar questions in the context of a different modeling framework. Instead of sparse models, we focus on the broad class of manifold models, which can arise in both parametric and non-parametric signal families. Building upon recent results concerning the stable embeddings of manifolds within the measurement space, we establish both deterministic and probabilistic instance-optimal bounds in $\ell_2$ for manifold-based signal recovery and parameter estimation from noisy compressive measurements. In line with analogous results for sparsity-based CS, we conclude that much stronger bounds are possible in the probabilistic setting. Our work supports the growing empirical evidence that manifold-based models can be used with high accuracy in compressive signal processing.
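Here is a small Python illustration of the manifold setting, assuming a one-parameter family of pulses (my choice, not the paper's): parameter estimation from compressive measurements reduces to finding the nearest point on the projected manifold.

import numpy as np

# A one-parameter manifold in R^n: unit-norm Gaussian pulses centered at t.
n, width = 256, 4.0
grid = np.arange(n)
def pulse(t):
    x = np.exp(-0.5 * ((grid - t) / width) ** 2)
    return x / np.linalg.norm(x)

# Compressive measurements of one (noisy) point on the manifold.
rng = np.random.default_rng(2)
m = 12                                     # m << n; scales with manifold dim
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
t_true = 137.4
y = Phi @ pulse(t_true) + 0.01 * rng.standard_normal(m)

# Parameter estimation = nearest point on the projected manifold; a brute
# grid search is all this 1-D example needs.
ts = np.linspace(0.0, n - 1.0, 4096)
errs = [np.linalg.norm(y - Phi @ pulse(t)) for t in ts]
print(ts[int(np.argmin(errs))])            # close to 137.4

Twelve measurements in a 256-dimensional ambient space: the budget tracks the dimension of the manifold, not the sparsity of the signal, which is the paper's whole point.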
I found the following thanks to the Google: Compressed Sensing with off-axis, frequency-shifting holography by Marcio M. Marim, Michael Atlan, Elsa Angelini and Jean-Christophe Olivo-Marin. The abstract reads:

This work presents an experimental microscopy acquisition scheme successfully combining Compressed Sensing (CS) and digital holography in off-axis and frequency-shifting conditions. CS is a recent data acquisition theory involving signal reconstruction from randomly undersampled measurements, exploiting the fact that most images present some compact structure and redundancy. We propose a genuine CS-based imaging scheme for sparse gradient images, acquiring a diffraction map of the optical field with holographic microscopy and recovering the signal from as little as 7% of random measurements. We report experimental results demonstrating how CS can lead to an elegant and effective way to reconstruct images, opening the door for new microscopy applications.
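For the curious, here is a stand-in Python sketch of that 7% figure: I replace the holographic measurement with random Fourier samples and the paper's gradient sparsity with plain $\ell_1$ sparsity, so take it as an illustration of the reconstruction principle rather than the authors' scheme.

import numpy as np

# A sparse scene of a few point scatterers, n = 512 "pixels".
n = 512
rng = np.random.default_rng(3)
x = np.zeros(n)
x[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)

# Keep 7% of the Fourier coefficients at random, the fraction the paper
# reports; the Fourier model stands in for the recorded hologram.
keep = rng.choice(n, int(0.07 * n), replace=False)
A = lambda v: np.fft.fft(v)[keep] / np.sqrt(n)
def At(u):
    w = np.zeros(n, dtype=complex); w[keep] = u
    return np.fft.ifft(w) * np.sqrt(n)
y = A(x)

# Iterative soft thresholding for min 0.5 ||A z - y||^2 + lam ||z||_1
# (the paper penalizes the image gradient; plain l1 keeps the sketch short).
lam, z = 0.02, np.zeros(n)
for _ in range(1000):
    g = z - At(A(z) - y).real              # gradient step, step size 1
    z = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)
print(np.nonzero(z > 0.5)[0], np.nonzero(x)[0])   # supports should match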
There is also a thesis defended last summer entitled: Optimal spectral reconstructions from deterministic and stochastic sampling geometries using compressive sensing and spectral statistical models by Oliver Jeromin. The abstract of the dissertation reads:

This dissertation focuses on the development of high-quality image reconstruction methods from a limited number of Fourier samples using optimized, stochastic and deterministic sampling geometries. Two methodologies are developed: an optimal image reconstruction framework based on Compressive Sensing (CS) techniques and a new, Spectral Statistical approach based on the use of isotropic models over a dyadic partitioning of the spectrum. The proposed methods are demonstrated in applications in reconstructing fMRI and remote sensing imagery. Typically, a reduction in MRI image acquisition time is achieved by sampling K-space at a rate below the Nyquist rate. Various methods using correlation between samples, sample averaging, and more recently, Compressive Sensing, are employed to mitigate the aliasing effects of under-sampled Fourier data. The proposed solution utilizes an additional layer of optimization to enhance the performance of a previously published CS reconstruction algorithm. Specifically, the new framework provides reconstructions of a desired image quality by jointly optimizing for the optimal K-space sampling geometry and CS model parameters. The effectiveness of each geometry is evaluated based on the required number of FFT samples that are available for image reconstructions of sufficient quality. A central result of this approach is that the fastest geometry, the spiral low-pass geometry, also provided the best (optimized) CS reconstructions. This geometry provided significantly better reconstructions than the stochastic sampling geometries recommended in the literature. An optimization framework for selecting appropriate CS model reconstruction parameters is also provided. Here, the term “appropriate CS parameters” is meant to imply that the estimated parameter ranges can provide some guarantee for a minimum level of image reconstruction performance. Utilizing the simplex search algorithm, the optimal TV-norm and Wavelet transform penalties are calculated for the CS reconstruction objective function. Collecting the functional evaluation values of the simplex search over a large data set allows for a range of objective function weighting parameters to be defined for the sampling geometries that were found to be effective. The results indicate that the CS parameter optimization framework is significant in that it can provide for large improvements over the standard use of non-optimized approaches. The dissertation also develops the use of a new Spectral Statistical approach for spectral reconstruction of remote sensing imagery. The motivation for pursuing this research includes potential applications that include, but are not limited to, the development of better image compression schemas based on a limited number of spectral coefficients. In addition, other applications include the use of spectral interpolation methods for remote sensing systems that directly sample the Fourier domain optically or electromagnetically, which may suffer from missing or degraded samples beyond and/or within the focal plane. For these applications, a new spectral statistical methodology is proposed that reconstructs spectral data from uniformly spaced samples over a dyadic partition of the spectrum. Unlike the CS approach that solves for the 2D FFT coefficients directly, the statistical approach uses separate models for the magnitude and phase, allowing for separate control of the reconstruction quality of each one.
A scalable solution that partitions the spectral domain into blocks of varying size allows for the determination of the appropriate covariance models of the magnitude and phase spectra bounded by the blocks. The individual spectral models are then applied to solving for the optimal linear estimate, which is referred to in the literature as Kriging. The use of spectral data transformations is also presented as a means for producing data that is better suited for statistical modeling and variogram estimation. A logarithmic transformation is applied to the magnitude spectra, as it has been shown to impart intrinsic stationarity over localized, bounded regions of the spectra. Phase spectra resulting from the 2D FFT can be best described as being uniformly distributed over the interval of -pi to pi. In this original state, the spectral samples fail to produce appropriate spectral statistical models that exhibit inter-sample covariance. For phase spectra modeling, an unwrapping step is required to ensure that individual blocks can be effectively modeled using appropriate variogram models. The transformed magnitude and unwrapped phase spectra result in unique statistical models that are optimal over individual frequency blocks, which produce accurate spectral reconstructions that account for localized variability in the spectral domain. The Kriging spectral estimates are shown to produce higher quality magnitude and phase spectra reconstructions than the cubic spline, nearest neighbor, and bilinear interpolators that are widely used. Even when model assumptions, such as isotropy, are violated by the spectral data being modeled, excellent reconstructions are still obtained. Finally, both of the spectral estimation methods developed in this dissertation are compared against one another, revealing how each one of the methods developed here is appropriate for different classes of images. For satellite images that contain a large amount of detail, the new spectral statistical approach, reconstructing the spectrum much faster, from a fraction of the original high frequency content, provided significantly better reconstructions than the best reconstructions from the optimized CS geometries. This result is supported not only by comparing image quality metrics, but also by visual assessment.
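Two toy Python sketches apropos of this abstract, both of my own making rather than anything taken from the thesis. The first mimics the simplex (Nelder-Mead) search over CS reconstruction parameters, with a tiny closed-form quadratic surrogate standing in for the thesis' TV/wavelet objective:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, m = 64, 32
x_true = np.cumsum(rng.standard_normal(n) * (rng.random(n) < 0.1))
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.05 * rng.standard_normal(m)
D = np.diff(np.eye(n), axis=0)             # finite-difference (TV-like) operator

def recon(lam):                             # closed-form penalized recovery
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

def objective(p):                           # quality on a training image,
    return np.linalg.norm(recon(np.exp(p[0])) - x_true)   # as in the thesis

best = minimize(objective, x0=[0.0], method='Nelder-Mead')
print(np.exp(best.x[0]), best.fun)          # tuned penalty weight, residual

The second shows ordinary Kriging on one block of samples, for those in the compressive sensing crowd who have not met it before; the isotropic exponential covariance and its parameters are illustrative stand-ins for the variogram models fitted in the thesis:

import numpy as np

def cov(h, sill=1.0, srange=10.0):
    return sill * np.exp(-np.abs(h) / srange)

rng = np.random.default_rng(5)
xs = np.sort(rng.uniform(0.0, 50.0, 12))   # known sample locations
zs = np.sin(xs / 8.0) + 0.05 * rng.standard_normal(12)
x0 = 23.0                                  # location to estimate

# Ordinary kriging system: [[C, 1], [1', 0]] [w; mu] = [c0; 1].
C = cov(xs[:, None] - xs[None, :])
K = np.block([[C, np.ones((12, 1))], [np.ones((1, 12)), np.zeros((1, 1))]])
w = np.linalg.solve(K, np.append(cov(xs - x0), 1.0))[:12]
print(w @ zs, np.sin(x0 / 8.0))            # kriged estimate vs. ground truth

The block-by-block version in the thesis amounts to refitting the covariance model inside each dyadic partition before solving this little linear system.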
and finally: Compressive video sensors using multichannel imagers by Mohan Shankar, Nikos Pitsianis, and David Brady. The abstract reads:
We explore the possibilities of obtaining compression in video through modifiliged sampling strategies using multichannel imaging systems. The redundancies in video streams are exploited through compressive sampling schemes to achieve low power and low complexity video sensors. The sampling strategies as well as the associated reconstruction algorithms are discussed. These compressive sampling schemes could be implemented in the focal plane readout hardware resulting in drastic reduction in data bandwidth and computational complexity.
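To see the flavor of what a compressive readout buys you, here is a toy per-pixel Python sketch: a handful of coded temporal sums replace a full frame-by-frame readout, and a piecewise-constant video model decodes them. The shutter codes and the brute-force decoder are my stand-ins, not the authors' sensor design:

import numpy as np

# Per-pixel temporal compressive sampling: C coded sums of T frames, read
# out once instead of T times.
T, C = 16, 5
rng = np.random.default_rng(6)
masks = rng.integers(0, 2, (C, T)).astype(float)   # focal-plane shutter codes

# A pixel that is static except for one intensity step: video redundancy.
t_step, lo, hi = 9, 0.2, 0.9
x = np.where(np.arange(T) < t_step, lo, hi)
y = masks @ x                                       # the only data read out

# Decode by exploiting the piecewise-constant model: try each step time,
# fit the two levels by least squares, keep the best fit.
best_err, best_rec = np.inf, None
for t in range(1, T):
    B = np.column_stack([np.arange(T) < t, np.arange(T) >= t]).astype(float)
    levels, *_ = np.linalg.lstsq(masks @ B, y, rcond=None)
    err = np.linalg.norm(masks @ B @ levels - y)
    if err < best_err:
        best_err, best_rec = err, B @ levels
print(np.round(best_rec, 2))     # ~0.2 up to frame 9, ~0.9 afterwards

Five numbers read out instead of sixteen, and the decoder still pins down the scene change: that is the bandwidth reduction the abstract is after, in miniature.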
Credit: NASA/Jim Grossmann, STS-130, the last time you will ever see a Shuttle launch at night. Via the Planetary Society blog.
Currently, Shankar, Pitsianis, and Brady have a lock on "modifiliged" sampling. They should hold onto that spelling as a keyword for their technique.