Jong Chul Ye just sent me the following:
We believe that our work finally addresses the open problem of joint sparsity.
Thanks Jong. It looks like MUSIC should have some bearing on the blind deconvolution/calibration problems faced in some compressive sensing hardware implementations.
Following Tuesday's entry, I sent the following to Chinmay Hegde:
I just read your latest paper, entitled Sampling and Recovery of Pulse Streams, and was wondering, when reading Figure 6, what the PSF looked like, and whether you had compared it to that of the camera that took this shot from the Hubble telescope?
Also, do you intend to release a code version of Algorithm 2 at some point?
Chin kindly responded with:
We got the test image off the web, so I'm not quite sure of the imaging parameters of the camera that actually took that shot.
In our recovery algorithm, we eyeballed the PSF size to be about 5x5, and used this as a parameter for our recovery algorithm. The "true" PSF is likely of a different size, thereby resulting in model mismatch. It's unclear as of now how to effectively deal with this.
As for the code: the key elements of the code can be found in the model-based compressive sensing toolbox:
http://dsp.rice.edu/software/model-based-compressive-sensing-toolbox
especially for performing the model-approximation step (for 1D signals) in Algorithm 2. We hope to update the toolbox with the pulse stream recovery algorithm soon.
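While waiting for that update, here is a crude greedy stand-in of my own (not the toolbox's implementation) for what a 1-D pulse-stream model-approximation step does: keep the S disjoint length-F windows carrying the most energy and zero out everything else. The signal, S, and F below are made up for illustration.

```python
# A greedy, numpy-only stand-in (not the toolbox code) for a 1-D pulse-stream
# model approximation: retain the S disjoint windows of length F carrying the
# most energy, zero out the rest.
import numpy as np

def pulse_stream_approx(x, S, F):
    """Greedy approximation of x by S non-overlapping pulses of length F."""
    x = np.asarray(x, dtype=float)
    energy = np.convolve(x**2, np.ones(F), mode="valid")  # sliding-window energy
    out = np.zeros_like(x)
    blocked = np.zeros(len(energy), dtype=bool)
    for _ in range(S):
        if blocked.all():
            break
        i = int(np.argmax(np.where(blocked, -np.inf, energy)))
        out[i:i + F] = x[i:i + F]                    # keep this pulse
        blocked[max(0, i - F + 1):i + F] = True      # forbid overlapping windows
    return out

# toy example: two pulses of length 5 buried in small noise
rng = np.random.default_rng(1)
sig = 0.05 * rng.standard_normal(100)
sig[20:25] += 1.0
sig[60:65] -= 1.5
print(np.nonzero(pulse_stream_approx(sig, S=2, F=5))[0])  # recovered supports
```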
Thanks Chin.
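Incidentally, Chin's point about PSF model mismatch is easy to see numerically. Below is a small numpy-only experiment of my own; the 7x7 "true" PSF, the Gaussian shapes, and all sizes are assumptions for illustration, not values from the paper. A sparse spike field is blurred with a 7x7 Gaussian PSF, then deconvolved using either the true kernel or an eyeballed 5x5 one:

```python
# Toy numpy experiment (mine, with made-up PSF sizes/shapes) showing how
# assuming the wrong PSF degrades a simple deconvolution of sparse spikes.
import numpy as np

def gaussian_psf(size, sigma):
    """Normalized 2-D Gaussian kernel of shape (size, size)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def psf_to_otf(psf, shape):
    """Embed a small centered PSF into a full-size transfer function."""
    pad = np.zeros(shape)
    k = psf.shape[0]
    pad[:k, :k] = psf
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

rng = np.random.default_rng(0)
x = np.zeros((64, 64))
x[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1.0   # sparse "stars"

true_psf = gaussian_psf(7, 1.5)       # pretend this is the camera's PSF
assumed_psf = gaussian_psf(5, 1.0)    # the eyeballed 5x5 model
y = np.real(np.fft.ifft2(np.fft.fft2(x) * psf_to_otf(true_psf, x.shape)))

for name, psf in [("true 7x7", true_psf), ("assumed 5x5", assumed_psf)]:
    H = psf_to_otf(psf, x.shape)
    xhat = np.real(np.fft.ifft2(np.fft.fft2(y) * np.conj(H) / (np.abs(H)**2 + 1e-2)))
    err = np.linalg.norm(xhat - x) / np.linalg.norm(x)
    print(f"relative error with {name} PSF: {err:.3f}")   # the mismatch hurts
```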
Yannis Kopsinis let me know that I did not catch his paper on arXiv: my query looks for a number of specific words in the abstract, but none of them match the ones he and his co-authors used:
Online Sparse System Identification and Signal Reconstruction using Projections onto Weighted $\ell_1$ Balls by Yannis Kopsinis, Konstantinos Slavakis, Sergios Theodoridis. The abstract reads:
This paper presents a novel projection-based adaptive algorithm for sparse signal and system identification. The sequentially observed data are used to generate an equivalent sequence of closed convex sets, namely hyperslabs. Each hyperslab is the geometric equivalent of a cost criterion, that quantifies "data mismatch". Sparsity is imposed by the introduction of appropriately designed weighted $\ell_1$ balls. The algorithm develops around projections onto the sequence of the generated hyperslabs as well as the weighted $\ell_1$ balls. The resulting scheme exhibits linear dependence, with respect to the unknown system's order, on the number of multiplications/additions and an $\mathcal{O}(L\log_2L)$ dependence on sorting operations, where $L$ is the length of the system/signal to be estimated. Numerical results are also given to validate the performance of the proposed method against the LASSO algorithm and two very recently developed adaptive sparse LMS and LS-type of adaptive algorithms, which are considered to belong to the same algorithmic family.
Thanks Yannis.
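As an aside, the workhorse primitive in this paper is the Euclidean projection onto a weighted $\ell_1$ ball, and the sort it requires is where the $\mathcal{O}(L\log_2L)$ term in the abstract comes from. Here is a minimal sketch of my own (not the authors' code) of that projection:

```python
# Sketch (mine, not the authors' code) of Euclidean projection onto a
# weighted l1 ball {x : sum_i w_i |x_i| <= r}; the sort is what gives the
# O(L log L) complexity term quoted in the abstract.
import numpy as np

def project_weighted_l1(v, w, r):
    """Project v onto {x : sum(w * |x|) <= r}, with w > 0 elementwise."""
    u = np.abs(v)
    if np.sum(w * u) <= r:
        return v.copy()                    # already inside the ball
    order = np.argsort(u / w)[::-1]        # sort |v_i| / w_i in descending order
    u_s, w_s = u[order], w[order]
    theta_cand = (np.cumsum(w_s * u_s) - r) / np.cumsum(w_s ** 2)
    rho = np.nonzero(u_s / w_s > theta_cand)[0][-1]   # last valid threshold
    return np.sign(v) * np.maximum(u - theta_cand[rho] * w, 0.0)

# sanity check: starting outside the ball, the projection lands on its surface
rng = np.random.default_rng(2)
v = rng.standard_normal(8)
w = rng.uniform(0.5, 2.0, 8)
print(np.sum(w * np.abs(project_weighted_l1(v, w, r=1.0))))   # ~1.0
```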
My crawler found the following:
On MMSE and MAP Denoising Under Sparse Representation Modeling Over a Unitary Dictionary by Javier Turek, Irad Yavneh, Matan Protter, Michael Elad. The abstract reads:
Among the many ways to model signals, a recent approach that draws considerable attention is sparse representation modeling. In this model, the signal is assumed to be generated as a random linear combination of a few atoms from a pre-specified dictionary. In this work we analyze two Bayesian denoising algorithms -- the Maximum A Posteriori Probability (MAP) and the Minimum Mean Squared Error (MMSE) estimators, under the assumption that the dictionary is unitary. It is well known that both these estimators lead to a scalar shrinkage on the transformed coefficients, albeit with a different response curve. In this work we start by deriving closed-form expressions for these shrinkage curves and then analyze their performance. Upper bounds on the MAP and the MMSE estimation errors are derived. We tie these to the error obtained by a so-called oracle estimator, where the support is given, establishing a worst-case gain factor between the MAP/MMSE estimation errors and the oracle's performance. These denoising algorithms are demonstrated on synthetic signals and on true data (images).
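To get a feel for the two shrinkage curves, here is a tiny numerical sketch of my own. I use a Bernoulli-Gaussian coefficient prior as a stand-in for the paper's model (the paper's exact prior and derivations may differ), and a simplified MAP-style rule that picks the likelier support and then Wiener-shrinks:

```python
# Numerical sketch (mine) of the shrinkage curves induced by MAP- and
# MMSE-style denoising of a single coefficient under a unitary dictionary,
# assuming a Bernoulli-Gaussian prior as a stand-in for the paper's model.
import numpy as np

p, sx2, sn2 = 0.1, 1.0, 0.1    # assumed activity prob., signal/noise variances

def gauss(y, var):
    return np.exp(-y ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

y = np.linspace(-4, 4, 9)
wiener = sx2 / (sx2 + sn2)                 # gain when the coefficient is active

# posterior probability that the coefficient is active given y
post = p * gauss(y, sx2 + sn2)
post = post / (post + (1 - p) * gauss(y, sn2))

mmse = post * wiener * y                        # smooth shrinkage curve
map_ = np.where(post > 0.5, wiener * y, 0.0)    # simplified support-MAP rule:
                                                # pick the likelier support,
                                                # then Wiener-shrink

for yi, a, b in zip(y, map_, mmse):
    print(f"y={yi:+.1f}  MAP={a:+.3f}  MMSE={b:+.3f}")
```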
In other news:
- Mendeley has tags for both compressive sensing and Compressed Sensing.
- MLcomp: a website for objectively comparing ML algorithms. Shouldn't we do the same for compressive sensing?
- A presentation by Dimitris Metaxas at U Penn on Friday, April 16, 2010, from 11:00 am to 12:00 pm:
We will first present a new class of model-based learning methods which include hypergraph and structured sparse learning for vision understanding. In our hypergraph framework, a hyperedge is defined by a set of vertices with similar attributes. The complex relationship between the images can be easily represented by different hyperedges according to different visual cues. We applied unsupervised and semi-supervised hypergraph learning to video object segmentation and image retrieval. Extensive experiments demonstrate its advantages over traditional graph models. Our structured sparsity framework is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. We will show applications in sparse learning, compressive sensing, computer vision and medical imaging. Time permitting, we will show applications of stochastic deformable models to facial tracking, body tracking, ASL recognition and cardiac motion analysis/simulation.
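Since the structured-sparsity framework in this talk generalizes group sparsity, it may help to see the group-sparse special case in code. Here is a minimal sketch (mine, not from the talk) of group soft-thresholding, the proximal step behind group-lasso-type learning:

```python
# Group soft-thresholding (block shrinkage), the proximal operator of the
# group-lasso penalty lam * sum_g ||x_g||_2. A minimal sketch, mine.
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Shrink each group of coefficients toward zero by lam in l2 norm."""
    out = np.zeros_like(x)
    for g in groups:                       # groups: list of index arrays
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * x[g]   # keep the group, shrink its norm
        # else: the whole group is zeroed out at once
    return out

x = np.array([3.0, 0.1, -0.2, 2.0, 2.0, 0.05])
groups = [np.arange(0, 3), np.arange(3, 6)]
print(group_soft_threshold(x, groups, lam=1.0))
```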
- New patented technology for improving cardiac CTs receives NIH support. Some of the work can be found in Compressed sensing based interior tomography by Hengyong Yu and Ge Wang. The abstract reads:
While conventional wisdom is that the interior problem does not have a unique solution, by analytic continuation we recently showed that the interior problem can be uniquely and stably solved if we have a known sub-region inside a region of interest (ROI). However, such a known sub-region is not always readily available, and it is even impossible to find in some cases. Based on compressed sensing theory, here we prove that if an object under reconstruction is essentially piecewise constant, a local ROI can be exactly and stably reconstructed via total variation minimization. Because many objects in computed tomography (CT) applications can be approximately modeled as piecewise constant, our approach is practically useful and suggests a new research direction for interior tomography. To illustrate the merits of our finding, we develop an iterative interior reconstruction algorithm that minimizes the total variation of a reconstructed image and evaluate the performance in numerical simulation.
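The piecewise-constant-plus-TV idea is easy to demonstrate on a toy problem. The sketch below is mine, not the authors' algorithm: it recovers a piecewise-constant 1-D signal from incomplete random projections by gradient descent on a smoothed total variation penalty (all sizes and parameters are made up):

```python
# Toy demonstration (mine, not the paper's algorithm) that a piecewise-
# constant signal can be recovered from incomplete linear measurements by
# minimizing a (smoothed) total variation penalty.
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 40
x_true = np.zeros(n)
x_true[20:50] = 1.0
x_true[70:90] = -0.5                            # piecewise-constant "object"

A = rng.standard_normal((m, n)) / np.sqrt(m)    # incomplete measurements
y = A @ x_true

lam, eps, step = 0.05, 1e-3, 0.1
x = np.zeros(n)
for _ in range(5000):
    d = np.diff(x)
    g = d / np.sqrt(d ** 2 + eps)      # gradient of smoothed |x_{i+1} - x_i|
    tv_grad = np.zeros(n)
    tv_grad[:-1] -= g
    tv_grad[1:] += g
    x -= step * (A.T @ (A @ x - y) + lam * tv_grad)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```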
- Found behind a paywall: A new approach to limited angle tomography using the compressed sensing framework by Ludwig Ritschl, Frank Bergner, and Marc Kachelriess. The abstract reads:
The limited angle problem is a well-known problem in computed tomography. It is caused by missing data over a certain angular interval, which makes an exact inverse Radon transform impossible. In daily routine this problem can arise, for example, in tomosynthesis, C-arm CT or dental CT. In recent years there has been significant development in the field of compressed sensing algorithms in computed tomography, which deal very well with incomplete data. The most popular approach is to integrate a minimal total variation norm, in the form of a cost function, into the iteration process. To find an exact solution of such a constrained minimization problem, computationally very demanding higher-order algorithms should be used. Due to the imperfect sparsity of the total variation representation, reconstructions often show the so-called staircase effect. The method proposed here uses the solutions of the iteration process as an estimate for the missing angle data. Compared to a pure compressed sensing-based algorithm, we reached much better results within the same number of iterations and could eliminate the staircase effect. The algorithm is evaluated using measured clinical datasets.
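The abstract's idea of using the iterates themselves to estimate the missing-angle data can be sketched quickly with scikit-image. This is my reading of the approach, not the authors' code; the phantom, the 120-degree measured range, the TV weight and the iteration count are all assumptions:

```python
# Rough sketch (mine) of filling a missing angular range with re-projections
# of a TV-regularized reconstruction. Requires scikit-image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.restoration import denoise_tv_chambolle
from skimage.transform import radon, iradon, rescale

img = rescale(shepp_logan_phantom(), 0.25)      # small phantom for speed
theta = np.arange(180.0)
measured = theta < 120.0                        # the last 60 degrees are missing

c = (img.shape[0] - 1) / 2.0                    # circular support mask
yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
mask = (yy - c) ** 2 + (xx - c) ** 2 <= c ** 2

sino_meas = radon(img, theta=theta)[:, measured]   # what the scanner delivers

est = np.zeros((sino_meas.shape[0], len(theta)))
est[:, measured] = sino_meas
for _ in range(10):
    recon = iradon(est, theta=theta)               # FBP on completed sinogram
    recon = denoise_tv_chambolle(recon, weight=0.05) * mask   # TV step
    est = radon(recon, theta=theta)                # re-project the iterate ...
    est[:, measured] = sino_meas                   # ... but keep measured data

print("relative error:", np.linalg.norm(recon - img) / np.linalg.norm(img))
```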
Credit: NASA. This compilation of video shows some of the first imagery and data sent back from NASA's Solar Dynamics Observatory (SDO). Most of the imagery comes from SDO's AIA instrument, and different colors are used to represent different temperatures, a common technique for observing solar features. SDO sees the entire disk of the Sun in extremely high spatial and temporal resolution, and this allows scientists to zoom in on notable events like flares, waves, and sunspots.