We present a technique to construct increased-resolution images from multiple photos taken without moving the camera or the sensor. Like other super-resolution techniques, we capture and merge multiple images, but instead of moving the camera sensor by sub-pixel distances for each image, we change masks in the lens aperture and slightly defocus the lens. The resulting capture system is simpler, and tolerates modest mask registration errors well. We present a theoretical analysis of the camera and image merging method, show both simulated results and actual results from a crudely modified consumer camera, and compare its results to robust ‘blind’ methods that rely on uncontrolled camera displacements.
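To get a feel for the merging step, here is a toy 1-D numpy sketch of the idea (my own illustration, not the authors' method): each shot is modeled as the unknown high-resolution signal blurred by a different, made-up mask/defocus kernel and then downsampled, and the shots are stacked into one linear system that a least-squares solve inverts.

```python
import numpy as np

# Toy 1-D analogue of mask-based super-resolution: each shot is the unknown
# high-res signal blurred by a different (hypothetical) mask/defocus kernel,
# then downsampled by 2. Stacking the shots gives a linear system that a
# least-squares solve can invert. Kernels and sizes are invented.

rng = np.random.default_rng(0)
N = 64                                  # high-res length
x_true = rng.random(N)                  # unknown high-res signal

def shot_matrix(kernel, N, factor=2):
    """Matrix for circular convolution with `kernel` followed by decimation."""
    C = np.zeros((N, N))
    for i in range(N):
        for j, k in enumerate(kernel):
            C[i, (i + j - len(kernel) // 2) % N] += k
    return C[::factor]                  # keep every `factor`-th row

# Two illustrative kernels standing in for two aperture masks + defocus.
kernels = [np.array([0.2, 0.6, 0.2]), np.array([0.5, 0.3, 0.2])]
A = np.vstack([shot_matrix(k, N) for k in kernels])
y = A @ x_true + 0.001 * rng.standard_normal(A.shape[0])

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With two shots at a decimation factor of 2 the stacked system is square; the point is that the two kernels must differ enough to make the aliased frequencies separable, which is presumably what the changing aperture masks buy you.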
Speaking of coded apertures, and following up on the astronomy community's interest in looking deeper into Compressed Sensing, the ADA conference series organized by ESA is opening up to it, as can be seen in its welcoming statement:
Held regularly since 2001, the ADA conference series is focused on algorithms and signal and data processing. The program includes invited and contributed talks as well as posters. This conference series has been characterized by a range of innovative themes, including curvelet transforms and clustering in cosmology, while at the same time remaining closely linked to front-line open problems and issues in astrophysics and cosmology. Herschel and PLANCK will be launched in 2008, and so ADA-V will focus in particular on:
- inverse problems such as map-making and component separation
- multi-wavelength data analysis
Recent developments in harmonic analysis, especially "compressed sensing" theory, may have a major impact on the way we collect, transfer, and analyze data. The ADA-V conference will have an invited expert from this field in order to disseminate these new ideas in the astronomical community.
The invited expert is Emmanuel Candes. Giving a presentation in Crete must be bad for jet lag, but somebody's got to do it.
From arXiv.org, I found the following: Estimating Signals with Finite Rate of Innovation from Noisy Samples: A Stochastic Algorithm by Vincent Yan Fu Tan and Vivek Goyal. The abstract reads:
As an example of the recently introduced concept of rate of innovation, signals that are linear combinations of a finite number of Diracs per unit time can be acquired by linear filtering followed by uniform sampling. However, in reality, samples are rarely noiseless. In this paper, we introduce a novel stochastic algorithm to reconstruct a signal with finite rate of innovation from its noisy samples. Even though variants of this problem have been approached previously, satisfactory solutions are only available for certain classes of sampling kernels, for example, kernels that satisfy the Strang–Fix condition. In this paper, we consider the infinite-support Gaussian kernel, which does not satisfy the Strang–Fix condition. Other classes of kernels can be employed. Our algorithm is based on Gibbs sampling, a Markov chain Monte Carlo (MCMC) method. Extensive numerical simulations demonstrate the accuracy and robustness of our algorithm.
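For intuition, here is a small numpy sketch of the measurement model for the Gaussian-kernel case the paper considers. The estimator below is a crude grid-based matching-pursuit stand-in, not the paper's Gibbs sampler, and all the parameters are arbitrary.

```python
import numpy as np

# Measurement model: a stream of K Diracs observed through a Gaussian
# sampling kernel at uniform times, plus noise. The paper's estimator is a
# Gibbs sampler; as a baseline this sketch just does greedy matching pursuit
# over a dense grid of candidate Dirac locations. Parameters are invented.

rng = np.random.default_rng(1)
K, sigma, T = 3, 0.05, 0.01                    # Diracs, kernel width, period
t_k = np.array([0.2, 0.5, 0.8])                # true Dirac locations
a_k = np.array([1.0, 0.7, 1.3])                # true amplitudes
t_n = np.arange(0, 1, T)                       # uniform sample times

gauss = lambda t: np.exp(-t**2 / (2 * sigma**2))
y = sum(a * gauss(t_n - t) for a, t in zip(a_k, t_k))
y = y + 0.01 * rng.standard_normal(len(y))     # noisy samples

# Greedy estimation: repeatedly pick the grid location whose shifted kernel
# best correlates with the residual, then subtract its contribution.
grid = np.linspace(0, 1, 2000)
D = gauss(t_n[:, None] - grid[None, :])        # dictionary of shifted kernels
residual, est = y.copy(), []
for _ in range(K):
    corr = D.T @ residual
    j = int(np.argmax(np.abs(corr)))
    amp = corr[j] / (D[:, j] @ D[:, j])
    est.append((grid[j], amp))
    residual = residual - amp * D[:, j]

print("true t_k:", t_k)
print("estimated:", sorted(t for t, _ in est))
```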
I also found this actual use of CS in terahertz imaging by some of the folks at Rice: Terahertz Imaging with Compressed Sensing and Phase Retrieval by Wai Lam Chan, Matthew Moravec, Richard Baraniuk, and Daniel Mittleman. The abstract reads:

We describe in this paper a novel, high-speed pulsed terahertz (THz) Fourier imaging system based on compressed sensing (CS), a new signal processing theory which allows image reconstruction with fewer samples than traditionally required. Using CS, we successfully reconstruct a 64 × 64 image of an object with pixel size 1.4 mm using a randomly chosen subset of the 4096 pixels which define the image in the Fourier plane, and observe improved reconstruction quality when we apply phase correction. For our chosen image, only about 12% of the pixels are required for reassembling the image. In combination with phase retrieval, our system has the capability to reconstruct images with only a small subset of Fourier amplitude measurements, and thus has potential application in THz imaging with continuous-wave (CW) sources.
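Here is a small numpy sketch of the underlying reconstruction idea, under my own toy assumptions: a synthetic sparse image, a random ~12% mask in the Fourier plane, and plain iterative soft thresholding rather than whatever solver the authors used.

```python
import numpy as np

# Keep a random ~12% subset of the 2-D Fourier coefficients of a sparse
# image, then reconstruct by iterative soft thresholding (ISTA) with
# sparsity enforced in the image domain. Image content and parameters
# are invented for illustration.

rng = np.random.default_rng(2)
n = 64
img = np.zeros((n, n))
idx = rng.choice(n * n, 60, replace=False)     # a sparse "object"
img.flat[idx] = rng.uniform(1, 2, 60)

mask = rng.random((n, n)) < 0.12               # ~12% of Fourier-plane pixels
y = mask * np.fft.fft2(img) / n                # normalized partial FFT samples

def A(x):  return mask * np.fft.fft2(x) / n            # forward operator
def At(z): return np.real(np.fft.ifft2(mask * z)) * n  # its adjoint

x, lam = np.zeros((n, n)), 0.02
for _ in range(200):                           # ISTA iterations
    x = x + At(y - A(x))                       # gradient step (op. norm <= 1)
    x = np.sign(x) * np.maximum(np.abs(x) - lam, 0)    # soft threshold

print("recovery error:", np.linalg.norm(x - img) / np.linalg.norm(img))
```

The normalization by n makes the partial Fourier operator have norm at most 1, so a unit gradient step is safe; the 12% sampling rate mirrors the fraction quoted in the abstract, though the phase-correction and phase-retrieval parts of the paper are not modeled here.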
For the first one, I could not find the paper itself, just the PubMed reference: Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets by Guang-Hong Chen, Jie Tang, and Shuai Leng. The abstract reads:
When the number of projections does not satisfy the Shannon/Nyquist sampling requirement, streaking artifacts are inevitable in x-ray computed tomography (CT) images reconstructed using filtered backprojection algorithms. In this letter, the spatial-temporal correlations in dynamic CT imaging have been exploited to sparsify dynamic CT image sequences, and the newly proposed compressed sensing (CS) reconstruction method is applied to reconstruct the target image sequences. A prior image reconstructed from the union of interleaved dynamic data sets is utilized to constrain the CS image reconstruction for the individual time frames. This method is referred to as prior image constrained compressed sensing (PICCS). In vivo experimental animal studies were conducted to validate the PICCS algorithm, and the results indicate that PICCS enables accurate reconstruction of dynamic CT images using about 20 view angles, which corresponds to an undersampling factor of 32. This undersampling factor implies a potential radiation dose reduction by a factor of 32 in myocardial CT perfusion imaging.
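Here is a toy 1-D numpy sketch of the PICCS idea, with invented stand-ins for everything CT-specific: a random matrix instead of projections, the identity as sparsifying transform, smoothed l1 terms, and plain gradient descent.

```python
import numpy as np

# Toy 1-D sketch of the PICCS idea: with a prior image x_p available, the
# reconstruction penalizes sparsity of BOTH the difference from the prior
# and the image itself,
#     alpha * ||x - x_p||_1 + (1 - alpha) * ||x||_1,  with  A x ~= y.
# A random matrix stands in for the undersampled CT projections, and the
# l1 terms are smoothed so plain gradient descent applies. All stand-ins.

rng = np.random.default_rng(3)
n, m, alpha, lam, eps = 128, 40, 0.9, 0.05, 1e-4

x_p = np.zeros(n); x_p[30:60] = 1.0            # prior image
x_true = x_p.copy(); x_true[80:85] = 0.8       # current frame: a small change

A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for sparse-view CT
y = A @ x_true                                 # undersampled measurements

grad_l1 = lambda v: v / np.sqrt(v**2 + eps)    # gradient of smoothed |v|
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam / np.sqrt(eps))

x = x_p.copy()                                 # warm start at the prior image
for _ in range(5000):
    g = A.T @ (A @ x - y)                      # data-consistency gradient
    g += lam * (alpha * grad_l1(x - x_p) + (1 - alpha) * grad_l1(x))
    x -= step * g

print("relative error vs. truth:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The point of the weighting is that the difference from the prior (5 nonzero pixels here) is far sparser than the image itself, so far fewer measurements suffice than a prior-free CS reconstruction would need.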
The second abstract reads:

High polarization of nuclear spins in liquid state through dynamic nuclear polarization has enabled the direct monitoring of (13)C metabolites in vivo at very high signal-to-noise ratio, allowing for rapid assessment of tissue metabolism. The abundant SNR afforded by this hyperpolarization technique makes high-resolution (13)C 3D-MRSI feasible. However, the number of phase encodes that can be fit into the short acquisition time for hyperpolarized imaging limits spatial coverage and resolution. To take advantage of the high SNR available from hyperpolarization, we have applied compressed sensing to achieve a factor of 2 enhancement in spatial resolution without increasing acquisition time or decreasing coverage. In this paper, the design and testing of compressed sensing suited for a flyback (13)C 3D-MRSI sequence are presented. The key to this design was the undersampling of spectral k-space using a novel blipped scheme, thus taking advantage of the considerable sparsity in typical hyperpolarized (13)C spectra. Phantom tests validated the accuracy of the compressed sensing approach and initial mouse experiments demonstrated in vivo feasibility.
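The enabling observation is the sparsity of the spectra. Here is a 1-D numpy sketch of that idea, assuming a plain random undersampling pattern in place of the paper's blipped flyback scheme, and invented peaks:

```python
import numpy as np

# Hyperpolarized 13C spectra are sparse, so the spectral dimension can be
# undersampled (factor ~2) and recovered by CS. This sketch keeps a random
# half of the time-domain samples of a few-peak spectrum and recovers the
# spectrum by ISTA; the paper's blipped flyback trajectory is replaced by
# a plain random pattern for simplicity.

rng = np.random.default_rng(4)
n = 256
spec = np.zeros(n); spec[[20, 90, 91, 200]] = [3, 1, 2, 1.5]  # sparse spectrum

fid = np.fft.ifft(spec) * np.sqrt(n)           # time-domain signal (unitary)
mask = rng.random(n) < 0.5                     # keep ~half the samples (R = 2)
y = mask * fid

A = lambda s: mask * np.fft.ifft(s) * np.sqrt(n)   # forward operator
At = lambda z: np.fft.fft(mask * z) / np.sqrt(n)   # its adjoint

s, lam = np.zeros(n, dtype=complex), 0.05
for _ in range(150):
    s = s + At(y - A(s))                       # gradient step
    s = np.exp(1j * np.angle(s)) * np.maximum(np.abs(s) - lam, 0)  # threshold

print("peak estimates:", np.round(np.abs(s[[20, 90, 91, 200]]), 2))
```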
At this point, it is interesting to consider the connection between compressed sensing and existing techniques in NMR, such as maximum entropy [15-16] and minimum area [17] reconstruction, used for the related problem of computing spectra from short, noisy data records. Recently, Stern et al. showed that a specific form of iterative thresholding, a technique similar to maximum entropy and minimum area reconstruction, is equivalent to the minimum l1-norm reconstruction of compressed sensing [18]. Additionally, Stern explains how l1-norm reconstruction gives insight into the performance of maximum entropy and minimum area reconstruction. Thus, compressed sensing can be viewed as a generalization of existing NMR techniques.
Following up on that, I found only the abstract of the paper at PubMed: NMR data processing using iterative thresholding and minimum l(1)-norm reconstruction by A. S. Stern, D. L. Donoho, and J. C. Hoch. The abstract reads:
Iterative thresholding algorithms have a long history of application to signal processing. Although they are intuitive and easy to implement, their development was heuristic and mainly ad hoc. Using a special form of the thresholding operation, called soft thresholding, we show that the fixed point of iterative thresholding is equivalent to minimum l(1)-norm reconstruction. We illustrate the method for spectrum analysis of a time series. This result helps to explain the success of these methods and illuminates connections with maximum entropy and minimum area methods, while also showing that there are more efficient routes to the same result. The power of the l(1)-norm and related functionals as regularizers of solutions to under-determined systems will likely find numerous useful applications in NMR.
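To illustrate the result, here is a small numpy sketch of iterative soft thresholding for spectrum analysis of a short, noisy time record; the peak positions, record lengths, and noise level are all invented.

```python
import numpy as np

# Illustration of the paper's claim: iterating "enforce data consistency,
# then soft-threshold the spectrum" converges to a sparse spectrum that is
# consistent with a short, noisy time record, i.e. a minimum-l1-flavoured
# reconstruction. All parameters are invented for illustration.

rng = np.random.default_rng(5)
n, m = 512, 128                                # full / recorded lengths
spec = np.zeros(n); spec[[40, 160, 300]] = [2.0, 1.0, 1.5]
fid_full = np.fft.ifft(spec) * np.sqrt(n)      # unitary inverse DFT
y = fid_full[:m] + 0.005 * (rng.standard_normal(m)
                            + 1j * rng.standard_normal(m))

def soft(v, lam):                              # complex soft thresholding
    mag = np.maximum(np.abs(v) - lam, 0)
    return mag * np.exp(1j * np.angle(v))

s = np.zeros(n, dtype=complex)
for _ in range(300):
    fid = np.fft.ifft(s) * np.sqrt(n)          # back to the time domain
    fid[:m] = y                                # restore the measured samples
    s = soft(np.fft.fft(fid) / np.sqrt(n), 0.01)   # threshold the spectrum

print("recovered peaks:", np.round(np.abs(s[[40, 160, 300]]), 2))
```

Each iteration alternates a projection onto the set of spectra consistent with the recorded samples with a soft-thresholding step, which is exactly the fixed-point structure whose limit the paper identifies with the minimum l1-norm solution.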
Here is a link to the last paper:
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WJX-4PBDR53-2&_user=809099&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000043939&_version=1&_urlVersion=0&_userid=809099&md5=6cffb5a756d4d73ef6e060af9ec3d0c7