I mentioned that, since the contest was over, I was a little surprised he could get any results, as the original test suite of the contest did not seem to be available (I seem to be wrong on that one). Sungkwang then replied:
Hello, Igor.
I know the contest has ended already, but I would like to let you know that CS can be close (not that close yet, but close) to the adaptive measuring method. This is the solver that I used for solving the problem. It is block-type CS, and the result is fairly good for the contest images. I attached the code, which is originally from BCS-SPL (http://www.ece.msstate.edu/~fowler/Publications/SourceCode/bcs-spl-1.2-1.tar.gz).
Thank you, and I always appreciate your helpful and informative CS website.
Actually, I didn't get the chance to submit the code because I found it too late. For the comparison, I ran the contest file on the same image set using three methods: Hannes's adaptive measuring method, Robert's wavelet iterative hard thresholding method, and BCS-SPL. The time and the sum of absolute differences (SAD) computed by the grade function in the contest files are compared below.
#Hannes Naude's method
results: 19,555,844
time: 43.3995
#Robert's Wavelet
results: 36,899,413
time: 98.0466
#BCS-SPL
results: 24,931,826
time: 95.1918
The method we use here is block-by-block sampling, as Robert does, so that we can build the random (0,1) matrix explicitly. The difference is that the block size varies from 3 to 8 depending on the query limit (i.e., the number of measurements, in CS terms). Also, a Wiener filter is applied to the entire image during recovery to remove the blocky artifacts. If you would like to run the program, I have attached the programs here. Testsuite.mat is the original image set, and Testsuite2.mat contains the Lenna, Barbara, and Goldhill images sampled at a 0.1 subrate (M/N, where M is the number of measurements and N = 512*512 is the number of pixels).
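For readers curious about what such an approach looks like, below is a minimal Python sketch (not Sungkwang's MATLAB code) of the two ingredients described above: measuring each block with an explicit random (0,1) matrix and applying a Wiener filter to the full image after per-block recovery. The block size, subrate, and the least-squares stand-in for a real CS solver (such as BCS-SPL) are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import wiener

def measure_block(block, subrate, rng):
    """Measure one flattened block with an explicit random (0,1) matrix."""
    n = block.size
    m = max(1, int(round(subrate * n)))                   # measurements allotted to this block
    phi = rng.integers(0, 2, size=(m, n)).astype(float)   # random (0,1) measurement matrix
    return phi, phi @ block.ravel()

def recover_block(phi, y):
    """Placeholder recovery: least squares instead of a real CS solver such as BCS-SPL."""
    x, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return x

def block_cs(image, block_size=8, subrate=0.5, seed=0):
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rec = np.zeros_like(image, dtype=float)
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            blk = image[i:i + block_size, j:j + block_size]
            phi, y = measure_block(blk, subrate, rng)
            rec[i:i + block_size, j:j + block_size] = recover_block(phi, y).reshape(blk.shape)
    # Wiener filter over the whole image to suppress blocky artifacts
    return wiener(rec, mysize=3)

img = np.random.default_rng(1).random((64, 64))           # stand-in for a contest image
reconstruction = block_cs(img)
```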
Dear Igor,

We would like to draw your attention to our new paper with Prof. Yonina Eldar entitled "Multichannel Sampling of Pulse Streams at the Rate of Innovation": http://arxiv.org/abs/1004.5070

The main contribution of the paper is to provide an efficient and stable sampling scheme for streams of pulses. While this problem was addressed in various papers in the FRI literature, either the sampling rate was above the rate of innovation, or the pulse shape was limited to Diracs only. Furthermore, previous methods did not allow for stable recovery at high rates of innovation. In our paper we provide a scheme which operates at the rate of innovation and guarantees recovery even at high rates of innovation.

In addition, we put emphasis on the practical implementation of our scheme. Some of the previous approaches require special design of non-standard analog filters. Our approach is based on standard hardware blocks: rectangular pulse generators, mixers, practical LPFs and integrators. Moreover, we show that the hardware prototype of the modulated wideband converter, which was mentioned in your blog (December 29, 2009), can be used here as well with certain modifications.

This paper joins two other recent papers by us dealing with low rate schemes for time delay estimation problems: "Low Rate Sampling of Pulse Streams with Application to Ultrasound Imaging" (http://arxiv.org/abs/1003.2822) and "Time Delay Estimation from Low Rate Samples: A Union of Subspaces Approach" (http://arxiv.org/abs/0905.2429).

Regards,
Kfir and Ronen
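As background for readers unfamiliar with finite-rate-of-innovation recovery, here is a small Python sketch of the classical annihilating-filter step: recovering the delays and amplitudes of a periodic Dirac stream from a handful of its Fourier coefficients. This is only the generic FRI building block, not the multichannel hardware scheme of the paper, which is one practical way of obtaining such coefficients; the period, number of Diracs, and noise-free setting are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
T, K = 1.0, 3                                # period and number of Diracs (rate of innovation 2K/T)
t_true = np.sort(rng.uniform(0, T, K))
a_true = rng.uniform(0.5, 1.5, K)

# Fourier coefficients X[m], m = 0..2K, of the Dirac stream (what the sampling front end delivers)
m = np.arange(2 * K + 1)
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true) / T)).sum(axis=1) / T

# Annihilating filter: null vector of the Toeplitz system built from the coefficients
A = toeplitz(X[K:], X[K::-1])
h = np.linalg.svd(A)[2][-1].conj()

# Roots of the filter give the delays; a Vandermonde least-squares solve gives the amplitudes
u = np.roots(h)
t_est = np.sort(np.mod(-np.angle(u) * T / (2 * np.pi), T))
V = np.exp(-2j * np.pi * np.outer(m, t_est) / T) / T
a_est = np.real(np.linalg.lstsq(V, X, rcond=None)[0])

print(np.round(t_est, 4), np.round(t_true, 4))   # delays match
print(np.round(a_est, 4), np.round(a_true, 4))   # amplitudes match
```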
Bob Sturm wrote in his blog:
- Paper of the Day (Po'D): Enhancing Sparsity by Reweighted l_1-norm Minimization Edition: a good explanation that can also be supplemented by Emmanuel Candes's video on the subject in Talks (A/V) (a .ram / RealPlayer file)
- Compressed Sensing and Audio Compression
- Solved Problems in Audio and Speech Processing?: a response to an earlier question.
Ashwin Wagadarikar's thesis is out: Compressive Spectral and Coherence Imaging. The abstract reads:
This dissertation describes two computational sensors that were used to demonstrate applications of generalized sampling of the optical field. The first sensor was an incoherent imaging system designed for compressive measurement of the power spectral density in the scene (spectral imaging). The other sensor was an interferometer used to compressively measure the mutual intensity of the optical field (coherence imaging) for imaging through turbulence. Each sensor made anisomorphic measurements of the optical signal of interest and digital post-processing of these measurements was required to recover the signal. The optical hardware and post-processing software were co-designed to permit acquisition of the signal of interest with sub-Nyquist rate sampling, given the prior information that the signal is sparse or compressible in some basis.
Compressive spectral imaging was achieved by a coded aperture snapshot spectral imager (CASSI), which used a coded aperture and a dispersive element to modulate the optical field and capture a 2D projection of the 3D spectral image of the scene in a snapshot. Prior information of the scene, such as piecewise smoothness of objects in the scene, could be enforced by numerical estimation algorithms to recover an estimate of the spectral image from the snapshot measurement.
Hypothesizing that turbulence between the scene and CASSI would introduce spectral diversity of the point spread function, CASSI's snapshot spectral imaging capability could be used to image objects in the scene through the turbulence. However, no turbulence-induced spectral diversity of the point spread function was observed experimentally. Thus, coherence functions, which are multi-dimensional functions that completely determine optical fields observed by intensity detectors, were considered. These functions have previously been used to image through turbulence after extensive and time-consuming sampling of such functions. Thus, compressive coherence imaging was attempted as an alternative means of imaging through turbulence.
Compressive coherence imaging was demonstrated by using a rotational shear interferometer to measure just a 2D subset of the 4D mutual intensity, a coherence function that captures the optical field correlation between all the pairs of points in the aperture. By imposing a sparsity constraint on the possible distribution of objects in the scene, both the object distribution and the isoplanatic phase distortion induced by the turbulence could be estimated with the small number of measurements made by the interferometer.
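To make the CASSI measurement model a bit more concrete, here is a toy Python sketch of a single-disperser forward model: a binary coded aperture modulates each spectral band, the dispersive element shears the coded bands across the detector, and the detector sums over wavelength to produce the single 2D snapshot from which the datacube is later estimated. The cube size and the one-pixel-per-band shift are illustrative assumptions, and the compressive reconstruction step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx, nl = 64, 64, 8                                  # toy datacube: rows, columns, spectral bands
cube = rng.random((ny, nx, nl))
code = (rng.random((ny, nx)) > 0.5).astype(float)       # binary coded aperture

# Dispersion modeled as a one-pixel horizontal shift per band; the detector integrates over wavelength
snapshot = np.zeros((ny, nx + nl - 1))
for band in range(nl):
    snapshot[:, band:band + nx] += code * cube[:, :, band]

print(snapshot.shape)   # a single 2D measurement of the 3D spectral scene
```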
An excessively high sampling rate is the bottleneck of wideband spectrum sensing for cognitive radio (CR). Since surveys show that the sensed signal has, by and large, a sparse representation in the frequency domain, compressed sensing (CS) can be used to transfer the sampling burden to the digital signal processor. An analog-to-information converter (AIC) can randomly sample the received signal at a sub-Nyquist rate to obtain the random measurements. Considering that the static frequency spectrum allocation of primary radios means the boundaries between different primary radios are known in advance, here we incorporate the spectrum boundaries between different primary users as a priori information to obtain a mixed l2/l1 norm denoising operator (MNDO). In the MNDO, the estimated power spectral density (PSD) vector is divided into block sections whose boundaries correspond to the different allocated primary radios. In contrast to the previous standard l1-norm constraint on the whole PSD vector, the sum of the l2 norms of the sections of the PSD vector is minimized to encourage a locally grouped yet globally sparse distribution, and a relaxed constraint is used to improve the denoising performance. Simulations demonstrate that the proposed method outperforms standard sparse spectrum estimation in accuracy, denoising ability, etc.

Robust Compressive Wideband Spectrum Sensing with Sampling Distortion by Yipeng Liu, Qun Wan. The abstract reads:
An excessively high sampling rate is the bottleneck of wideband spectrum sensing for cognitive radio (CR). Since surveys show that the sensed signal has a sparse representation in the frequency domain, compressed sensing (CS) can be used to transfer the sampling burden to the digital signal processor. However, standard sparse signal recovery methods do not account for the distortion in the analog-to-information converter (AIC), which randomly samples the received signal at a sub-Nyquist rate. In practice, various non-ideal physical effects lead to distortion in the AIC. Here we model the sampling distortion as a bounded additive noise. An anti-sampling-distortion constraint (ASDC) in the form of a mixed l2 and l1 norm is deduced from the sparse signal model with bounded sampling error, and the sparsity constraint takes the form of a minimization of the l1 norm of the estimated signal. We then combine the sparsity constraint with the ASDC to obtain a novel sparse signal recovery operator that is robust to sampling distortion. Numerical simulations demonstrate that the proposed method outperforms standard sparse wideband spectrum sensing in accuracy, denoising ability, etc.
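To make the mixed l2/l1 idea in these two abstracts concrete, here is a small Python/cvxpy sketch of block-structured recovery: the l2 norms of the PSD blocks (whose boundaries are assumed known from the spectrum allocation) are summed, which favors a few occupied bands, subject to a relaxed data-fidelity constraint. The dimensions, noise level, and the plain Gaussian matrix standing in for the AIC are illustrative assumptions, not the exact setup of the papers.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_blocks, blk = 8, 16                 # PSD vector split into blocks along known band boundaries
n, m = n_blocks * blk, 60             # toy dimensions: signal length and number of measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)             # stand-in for the AIC's random sub-Nyquist sampling
x0 = np.zeros(n)
x0[2 * blk:3 * blk] = np.abs(rng.standard_normal(blk))   # a single occupied band
y = A @ x0 + 0.01 * rng.standard_normal(m)

x = cp.Variable(n)
group_norm = cp.sum(cp.hstack([cp.norm(x[b * blk:(b + 1) * blk]) for b in range(n_blocks)]))
constraints = [cp.norm(A @ x - y) <= 0.05]               # relaxed (noise-aware) data-fidelity constraint
cp.Problem(cp.Minimize(group_norm), constraints).solve()

energies = [round(float(np.linalg.norm(x.value[b * blk:(b + 1) * blk])), 3) for b in range(n_blocks)]
print("per-block energies:", energies)                   # only the occupied band should stand out
```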
Sparse Support Recovery with Phase-Only Measurements by Yipeng Liu, Qun Wan. The abstract reads:
Sparse support recovery (SSR) is an important part of compressive sensing (CS). Most current SSR methods assume full-information measurements, but in practice the amplitude part of the measurements may be seriously corrupted. The corrupted measurements do not match the current SSR algorithms, which leads to serious performance degradation. This paper considers the problem of SSR with only phase information. In the proposed method, the minimization of the l1 norm of the estimated sparse signal enforces a sparse distribution, while a nonzero constraint on the amplitudes of the uncorrupted random measurements with respect to the reconstructed sparse signal is introduced. Because the constraint only requires the phase components of the measurements, it avoids the performance deterioration caused by corrupted amplitude components. Simulations demonstrate that the proposed phase-only SSR is superior in support reconstruction accuracy when the amplitude components of the measurements are contaminated.
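As a rough illustration of what a phase-only constraint can look like, here is a Python/cvxpy sketch that discards the measurement amplitudes, keeps only their phases, and minimizes the l1 norm while forcing the reconstructed measurements to have nonzero amplitude along the observed phases. This particular formulation, the complex Gaussian matrix, and all the dimensions are assumptions for illustration and not necessarily the exact operator of the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
phase = np.exp(1j * np.angle(A @ x0))               # only the phases of the measurements are kept

x = cp.Variable(n)
# align the reconstructed measurements with the observed phases, with amplitudes bounded away from zero
constraints = [cp.real(cp.multiply(np.conj(phase), A @ x)) >= 1]
cp.Problem(cp.Minimize(cp.norm1(x)), constraints).solve()

support_est = np.sort(np.argsort(-np.abs(x.value))[:k])
print(support_est, np.flatnonzero(x0))              # estimated vs. true support
```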
Super-resolution single-beam imaging via compressive sampling by Wenlin Gong, Shensheng Han. The abstract reads:
Based on compressive sampling techniques and short-exposure imaging, super-resolution imaging with thermal light is experimentally demonstrated by exploiting the sparse prior of images in a standard conventional imaging system. Differences between the super-resolution imaging demonstrated in this letter and super-resolution ghost imaging via compressive sampling (arXiv:0911.4750v1 [quant-ph] (2009)), as well as methods to further improve the imaging quality, are also discussed.
While lurking around, I found several presentations of interest:
* Sublinear Compressive Sensing (CS) and Support Weight Enumerators of Codes: A Matroid Theory Approach by Olgica Milenkovic (Joint work with Wei Dai and Hoa Vinh Pham)
- Connections between CC-LPD and CS-LPD
- Message-passing iterative decoding and linear programming decoding: news and views Part 1 [pdf] Part 2 [pdf]
* There was a workshop on Linear Programming and Message-Passing Approaches to High-Density Parity-Check Codes and High-Density Graphical Models at Tel Aviv University, Israel, on March 1-2, 2010. Here is a list of the presentations made there:
Ilya Poltorak, Dror Baron, and Deanna Needell
Hybrid dense/sparse matrices in compressed sensing reconstruction [ppt]
David Burshtein
Iterative approximate linear programming decoding of LDPC codes with linear complexity [pdf]
Michael Chertkov
On dense graphical models and their potential for coding (tutorial) [pdf]
Alex Dimakis, R. Smarandache, and Pascal O. Vontobel
A connection between compressed sensing and binary linear channel codes [ppt]
Jacob Goldberger and Amir Leshem
A MIMO detector based on Gaussian tree approximation of a fully connected graph [pdf]
Warren J. Gross
Stochastic decoding of LDPC codes over GF(q) [pdf]
Nissim Halabi and Guy Even
LP decoding of regular LDPC codes in memoryless channels [pdf]
Joakim G. Knudsen, Constanza Riera, Lars Eirik Danielsen, Matthew G. Parker, and Eirik Rosnes
On iterative decoding of HDPC codes using weight-bounding graph operations [pdf]
Marcel Bimberg, Emil Matus, Michael Lentmaier, Gerhard Fettweis
Soft-decision decoding of Reed-Solomon codes with binary and non-binary belief propagation [pdf]
Amir Leshem and Jacob Goldberger
Solving integer least squares problems using reconstruction from projections and redundant information [pdf]
Talya Meltzer, David Sontag, Amir Globerson, Tommi Jaakkola, and Yair Weiss
Tightening LP relaxations for MAP using message passing [ppt]
Thorsten Hehn, Stefan Laendner, Olgica Milenkovic, and Johannes Huber
High-density error-correction via automorphism group decoding [pdf]
Andrea Montanari and Elchanan Mossel
Smooth compression and nonlinear sparse-graph codes [pdf]
Samuel Ouzan and Yair Be'ery
Moderate-density parity-check codes [pdf]
Ori Shental and Naom Shental
Belief propagation for sparse signal reconstruction [pdf]
Jens Zumbrägel, Mark F. Flanagan, Vitaly Skachek
On the pseudocodeword redundancy [pdf]
Stefan Ruzika, Akin Tanatmis, Horst W. Hamacher, Norbert Wehn, Frank Kienle, and Mayur Punekar
Numerical comparison of IP formulations in ML decoding and minimum distance calculation [pdf]
Pascal O. Vontobel
Linear programming and message-passing approaches to parity-check codes and graphical models: successes and challenges (tutorial) [pdf]
Tadashi Wadayama
Concave penalty method for improving interior point LP decoding [pdf]
Alex Yufit, Asi Lifshitz, and Yair Be'ery
Near-ML linear programming decoding of HDPC codes [pdf]

With regards to employment and fellowship, I found the following:
* A BITS – HP LABS INDIA PhD FELLOWSHIP for Research related to Information Technologies features compressed sensing as one of the subjects supported by the fellowship. More information can be found here.
and
* A faculty position:
Faculty Position in Optimization and Applications at U. of Wisconsin-Madison
The Wisconsin Institute for Discovery (WID) at the University of Wisconsin-Madison (www.discovery.wisc.edu) invites applications for faculty openings in Optimization and its Applications. The WID optimization theme aims to develop and apply optimization technology to systems-level problems emerging in science and engineering applications in an interdisciplinary, integrative, and collaborative fashion.
Collaborations with biology and medical researchers will be a focus. Our interests encompass (but are not limited to): (i) development of core optimization technology and large-scale computational methodology; (ii) planning techniques for medical treatment that exploit unfolding understanding of the physical system; (iii) applications of simulation and stochastic optimization techniques; (iv) use of optimization and statistical methods to understand and predict systems phenomena, even when competition exists between entities; (v) sparse optimization and image reconstruction leveraging compressed sensing frameworks, optimization algorithms, and powerful computational platforms.

Opportunities are available at the Assistant, Associate or Full Professor level. Successful candidates will occupy a new state-of-the-art and centrally located WID research facility specifically designed to spark and support cross-disciplinary collaborations. WID is the public half of an exciting public-private pair of Institutes that will promote basic research and facilitate the translation of new discoveries to practice.
Each successful candidate will be appointed to the department of the University that most appropriately matches their experience and interests. The candidate will be expected to develop a vigorous, independent research program; attract and maintain extramural funding for their research program; teach undergraduate and/or graduate courses; develop new course(s) in their area of expertise as appropriate; supervise graduate and postgraduate research; participate in faculty governance activities in the department, college and/or University; and actively engage with the national and international scientific community.
Applicants must submit a cover letter, curriculum vitae, statement of teaching interests, and a statement of current and future research plans related to optimization and its applications. The full application, submitted as a PDF, should not exceed 10 pages. In addition, three references from persons knowledgeable about the applicant’s research, leadership and/or teaching abilities must be supplied separately through the website at: https://oberon.cs.wisc.edu/Recruiting/
Further details are available at: http://www.ohr.wisc.edu/pvl/pv_063829.html
I just added it to the Compressive Sensing Jobs page.
Hi Igor,
I also ran the algorithm against mine for comparison. My code seemed slightly faster but performed worse. So finally we have a CS algorithm that defeats Laplace's equation!
Vladirator10:
results: 26,853,807
time: 99.8700
solver_BCS_SPL:
results: 24,931,826
time: 112.6600
A note about the interpolation algorithms:
For a given number of measurements (query limit), and under a strict time limit, interpolation seems like a very good approach. However, when decoding time is not mission-critical and the focus is instead the quality of recovery, I think more sophisticated CS algorithms can provide more accurate results (given enough time, that is).
With more time/iterations, a CS algorithm should be able to tease more information out of those measurements than a simple interpolation. I know that for Sungkwang's code (and Robert's), the algorithm had to be severely handicapped in order to get under the time limit.
Eric, it would be interesting to find a CS algorithm that defeats Howe's Interpol in terms of reconstruction quality: that seems like a good benchmark.
Eric,
You have a good point, but for reasons having to do with convincing people that CS is interesting, I believe The Vlad (Tony) is right.
Igor.
Well, actually, I ran the Interpol solver as well, and it seems to perform almost identically to solver_BCS_SPL in terms of reconstruction quality (results: 24,963,122, time: 47.7300). So interpolation has been matched, but not yet exceeded.
ReplyDeleteTony (The Vlad),
It's good to go from "It's not even that good" to "All that for this", but like others, I would have expected more of a "wow".
Igor.
Igor,
Is there any benchmark database in the CS literature against which to test algorithms? If not, establishing such a database would seem a reasonable step toward standardizing performance estimates. My primary interest was from a biological/computer vision perspective: how do CS methods compare to conventional reconstruction algorithms for under-sampled images (m samples << n pixels)? In the human retina, for instance, photoreceptor density varies dramatically with eccentricity and there are 'holes' without any receptors at all. Yet our brains manage to fill in these gaps by means of some form of interpolation scheme. As the visual cortex is known to contain a wavelet-like representation of the visual field, the application of CS methods to this problem might seem reasonable at first glance. Aside from reconstruction fidelity, however, a serious constraint for real-time vision systems is processing speed. Solving an iterative L1 optimization problem might therefore prove to be prohibitively slow. More generally, the idea that the brain solves optimization problems in real time seems a little unrealistic, at least with our current knowledge of brain function.
Vlad,
Right now I'm sitting on the code to auto-generate new test sets from image databases in the same manner as was used for the MathWorks contest. I just haven't figured out what to do with it yet, but I do have some ideas for setting up a system to handle future submissions in roughly the same manner.
Currently, there are no "standards" for CS on images, unless you count the standard image processing ones (Lenna, Barbara, Cameraman, ...).