On the subject of Knowledge Diffusion, Mark Newman just came out with a provocative study entitled: The first-mover advantage in scientific publication. From the abstract, one can read:
.....On the other hand, there are some papers, albeit only a small fraction, that buck the trend and attract significantly more citations than theory predicts despite having relatively late publication dates. We suggest that papers of this kind, though they often receive comparatively few citations overall, are probably worthy of our attention....One wonders how initiatives like the Compressive Sensing Resources at Rice, this blog, or any of the blogs/pages featured in the Compressive Sensing 2.0 page and in the Compressive 2.0 Community page will change the dynamics of the linkage between papers. An enterprising student could already take the papers listed in the Rice repository and in the set of recent links of this blog, perform some naive Latent Semantic Indexing on unstructured documents like the PDFs in these pages, and give us an appreciation of the extent of this recent linkage. After reading some of the recent papers, or the exchanges on the blogs mentioned above, and looking at the statistics of this blog or those of the Rice site, I am personally convinced of this acceleration, but this is just a hunch.
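For what it's worth, that naive LSI pass is only a few lines with off-the-shelf tools. The toy "corpus" below is entirely made up and stands in for text extracted from the actual PDFs; the choice of scikit-learn's TF-IDF and truncated SVD is mine, not anything prescribed by the method:

```python
# A naive Latent Semantic Indexing sketch. The four "documents" are stand-ins
# for pdftotext output from the papers on the Rice page or this blog.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "sparse signal recovery via l1 minimization and random projections",
    "compressed sensing reconstruction with convex optimization",
    "wave propagation simulation using Helmholtz eigenfunctions",
    "sparse channel estimation with Toeplitz measurement matrices",
]

tfidf = TfidfVectorizer().fit_transform(docs)       # term-document matrix
lsa = TruncatedSVD(n_components=2, random_state=0)  # low-rank "semantic" space
embedding = lsa.fit_transform(tfidf)

# Pairwise similarity in the latent space hints at the "linkage" between papers
sim = cosine_similarity(embedding)
print(sim.round(2))
```

The off-diagonal entries of `sim` would be the crude linkage measure; on a real corpus one would look at how they evolve with publication date.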
It is an interest of mine to see how compressed sensing would fare with the linear transport equation, so it is a welcome sight to see attempts at dealing with the mathematical considerations of compression in the eigenfunction domain of the Helmholtz equation.
Laurent Demanet and Gabriel Peyré did just that in the impressive Compressive Wave Computation. The abstract reads:
This paper considers large-scale simulations of wave propagation phenomena. We argue that it is possible to accurately compute a wavefield by decomposing it onto a largely incomplete set of eigenfunctions of the Helmholtz operator, chosen at random, and that this provides a natural way of parallelizing wave simulations for memory-intensive applications. Where a standard eigenfunction expansion in general fails to be accurate if a single term is missing, a sparsity-promoting l1 minimization problem can vastly enhance the quality of synthesis of a wavefield from low-dimensional spectral information. This phenomenon may be seen as "compressive sampling in the Helmholtz domain", and has recently been observed to have a bearing on the performance of data extrapolation techniques in seismic imaging [41]. This paper shows that l1-Helmholtz recovery also makes sense for wave computation, and identifies a regime in which it is provably effective: the one-dimensional wave equation with coefficients of small bounded variation. Under suitable assumptions we show that the number of eigenfunctions needed to evolve a sparse wavefield defined on N points, accurately with very high probability, is bounded by C(η) log N · log log N, where C(η) is related to the desired accuracy and can be made to grow at a much slower rate than N when the solution is sparse. The PDE estimates that underlie this result are new to the authors' knowledge and may be of independent mathematical interest; they include an L1 estimate for the wave equation, an L∞ - L2 estimate of extension of eigenfunctions, and a bound for eigenvalue gaps in Sturm-Liouville problems. In practice, the compressive strategy makes sense because the computation of eigenfunctions can be assigned to different nodes of a cluster in an embarrassingly parallel way. Numerical examples are presented in one spatial dimension and show that as few as 10 percent of all eigenfunctions can suffice for accurate results.
Availability of a good preconditioner for the Helmholtz equation is important and also discussed in the paper. Finally, we argue that the compressive viewpoint suggests a competitive parallel algorithm for an adjoint-state inversion method in reflection seismology.
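To make the "compressive sampling in the Helmholtz domain" idea concrete, here is a toy static version of it as I understand it: a spatially sparse field on N points, observed through a random subset of the sine eigenfunctions of the 1-D Laplacian, and recovered by basis pursuit cast as a linear program. All dimensions and the solver are my choices; the paper's actual setting, evolving a wavefield in time with variable coefficients, is considerably more involved:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 64      # grid points
k = 3       # nonzeros in the "wavefield"
m = 28      # eigenfunctions kept, a fraction of N

# Sine eigenfunctions of the 1-D Laplacian with Dirichlet boundary conditions
j = np.arange(1, N + 1)
Psi = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(j, j) / (N + 1))

# A spatially sparse field observed through m randomly chosen eigenfunctions
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
rows = rng.choice(N, m, replace=False)
A = Psi[rows, :]
b = A @ x_true

# Basis pursuit: min ||x||_1 s.t. A x = b, as a linear program with x = u - v
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * N))
x_rec = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_rec - x_true)))   # recovery error
```

Note that a plain least-squares synthesis from the same 28 of 64 eigenfunctions would smear the spikes badly, which is the point of the l1 formulation.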
Jarvis Haupt, Waheed Bajwa, Gil Raz and Robert Nowak take the RIP-"compliant" Toeplitz matrices they found earlier (and that others have used since) into another area in Toeplitz compressed sensing matrices with applications to sparse channel estimation. The abstract reads:
Compressed sensing (CS) has recently emerged as a powerful signal acquisition paradigm. In essence, CS enables the recovery of high-dimensional but sparse (or nearly sparse) vectors from relatively few linear observations in the form of projections of the signal onto a collection of test vectors. Existing results show, for example, that if the entries of the test vectors are independent realizations of random variables with certain distributions, such as zero-mean Gaussian, then with high probability the resulting observations sufficiently encode the information in the unknown signal and recovery can be accomplished by solving a tractable convex optimization. This work provides a significant extension of current CS theory. A novel technique is proposed that allows theoretical treatment of CS in settings where the entries of the test vectors exhibit structured statistical dependencies, from which it follows that CS can be effectively utilized in linear, time-invariant (LTI) system identification problems. An immediate application is in the area of sparse channel estimation, where the main results of this work can be applied to the recovery of sparse (or nearly sparse) wireless multipath channels. Specifically, it is shown in the paper that time-domain probing of a wireless channel with a (pseudo-)random binary sequence, along with the utilization of CS reconstruction techniques, provides significant improvements in estimation accuracy when compared with traditional least-squares based linear channel estimation strategies. Abstract extensions of the main results, utilizing the theory of equitable graph coloring to tolerate more general statistical dependencies across the test vectors, are also discussed.
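The channel-estimation scenario lends itself to a small numerical sketch: probe a sparse channel with a random ±1 sequence, form the resulting Toeplitz sensing matrix, and recover the taps greedily with orthogonal matching pursuit, standing in here for the convex programs the paper actually analyzes. The tap positions, amplitudes and noise level below are made up for illustration:

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)
n, p = 256, 32              # probe length, channel length

# (Pseudo-)random binary probe sequence and a sparse multipath channel
# (tap positions and amplitudes fixed here for illustration)
probe = rng.choice([-1.0, 1.0], size=n)
h = np.zeros(p)
h[[3, 11, 25]] = [1.5, -2.0, 1.0]

# Toeplitz sensing matrix: A[i, j] = probe[i - j], so each output sample
# mixes delayed copies of the probe through the channel taps
A = toeplitz(probe, np.r_[probe[0], np.zeros(p - 1)])   # n x p
y = A @ h + 0.001 * rng.standard_normal(n)              # noisy channel output

# Greedy sparse recovery: orthogonal matching pursuit
def omp(A, y, k):
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))     # most correlated column
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef                        # re-fit and update residual
    h_hat = np.zeros(A.shape[1])
    h_hat[idx] = coef
    return h_hat

h_omp = omp(A, y, k=3)
h_ls, *_ = np.linalg.lstsq(A, y, rcond=None)            # plain least squares
print(np.linalg.norm(h_omp - h), np.linalg.norm(h_ls - h))
```

Comparing the two error norms on such toy data gives a feel for the accuracy gap over least squares that the abstract refers to; the paper's results quantify it properly.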
According to Wiley and Sons, David Brady will have a book out in February 2009 entitled: Optical Imaging and Spectroscopy. According to the site, it should cost about 69.90 Euros, or $84 at Amazon. Additional information includes:
450 Pages, Hardcover
ISBN-10: 0-470-04823-9
ISBN-13: 978-0-470-04823-8 - John Wiley & Sons
We have covered many of these issues before and it is nice that they can be put in one book. I sure would love to have it on my book shelf. I have added it to my Amazon Wish List. Detailed description:
Optical Imaging and Spectroscopy covers the conceptual basis of optical sensor design. The author's objective in writing Optical Imaging and Spectroscopy is to communicate a novel approach of optical system design to students and to the imaging and spectroscopy research and development community. The new approach includes three components that have not previously been covered in book form:
- Direct and simple exposition of the interface between continuous fields and discrete representations, especially including wavelet analysis, discretization on focal planes, multiplex measurement and compressive sampling.
- Straightforward integration of coherence and Fourier analysis, including the van Cittert-Zernike theorem, coherence and projection tomography, and the constant radiance theorem.
- Integrated consideration of imaging and spectroscopy in the development of general tools for optical sensor design, including analyses of recent advances in coded wavefront imaging and coded aperture spectroscopy, as well as comparative system analysis tools.
From the contents
Preface.
Acknowledgments.
1. Past, present and future.
1.1 Three revolutions.
1.2 Computational imaging.
1.3 Overview.
1.4 The fourth revolution.
Problems.
2. Geometric imaging.
2.1 Visibility.
2.2 Optical elements.
2.3 Focal imaging.
2.4 Imaging systems.
2.5 Pinhole and coded aperture imaging.
2.6 Projection tomography.
2.7 Reference structure tomography.
Problems.
3. Analysis.
3.1 Analytical tools.
3.2 Fields and transformations.
3.3 Fourier analysis.
3.4 Transfer functions and filters.
3.5 The Fresnel transformation.
3.6 The Whittaker-Shannon sampling theorem.
3.7 Discrete analysis of linear transformations.
3.8 Multiscale sampling.
3.9 B-splines.
3.10 Wavelets.
Problems.
4. Wave imaging.
4.1 Waves and fields.
4.2 Wave model for optical fields.
4.3 Wave propagation.
4.4 Diffraction.
4.5 Wave analysis of optical elements.
4.6 Wave propagation through thin lenses.
4.7 Fourier analysis of wave imaging.
4.8 Holography.
Problems.
5. Detection.
5.1 The Optoelectronic interface.
5.2 Quantum mechanics of optical detection.
5.3 Optoelectronic detectors.
5.3.1 Photoconductive detectors.
5.3.2 Photodiodes.
5.4 Physical characteristics of optical detectors.
5.5 Noise.
5.6 Charge coupled devices.
5.7 Active pixel sensors.
5.8 Infrared focal plane arrays.
Problems.
6. Coherence imaging.
6.1 Coherence and spectral fields.
6.2 Coherence propagation.
6.3 Measuring coherence.
6.4 Fourier analysis of coherence imaging.
6.5 Optical coherence tomography.
6.6 Modal analysis.
6.7 Radiometry.
Problems.
7. Sampling.
7.1 Samples and pixels.
7.2 Image plane sampling on electronic detector arrays.
7.3 Color imaging.
7.4 Practical sampling models.
7.5 Generalized sampling.
Problems.
8. Coding and inverse problems.
8.1 Coding taxonomy.
8.2 Pixel coding.
8.3 Convolutional coding.
8.4 Implicit coding.
8.5 Inverse problems.
Problems.
9. Spectroscopy.
9.1 Spectral measurements.
9.2 Spatially dispersive spectroscopy.
9.3 Coded aperture spectroscopy.
9.4 Interferometric Spectroscopy.
9.5 Resonant spectroscopy.
9.6 Spectroscopic filters.
9.7 Tunable filters.
9.8 2D spectroscopy.
Problems.
10. Computational imaging.
10.1 Imaging systems.
10.2 Depth of field.
10.3 Resolution.
10.4 Multiple aperture imaging.
10.5 Generalized sampling revisited.
10.6 Spectral imaging.
Problems.
References.
Credit Photo: NASA/JPL-Caltech/University of Arizona/Texas A&M, sol 97, shadow of the Sun through some aperture of the Phoenix spacecraft.