Poll results:
- I do not own an iPhone / iPod Touch: 43 (48%)
- Free: 32 (36%)
- $0.99: 7 (8%)
- $9.99: 2 (2%)
- $500 (cost of the actual app): 1 (1%)
- $49.99 (as information rich as Wolfram's app): 1 (1%)
- $19.99: 1 (1%)
- $4.99: 1 (1%)
- $1.99: 1 (1%)
- $2.99: 0 (0%)
- $1000 (if there are takers, I'll make a similar one for free): 0 (0%)
Compressed Sensing: When sparsity meets sampling by Laurent Jacques and Pierre Vandergheynst. The abstract reads:
The recent theory of Compressed Sensing (Candes, Tao & Romberg, 2006, and Donoho, 2006) states that a signal, e.g. a sound recording or an astronomical image, can be sampled at a rate much smaller than what is commonly prescribed by Shannon-Nyquist. The sampling of a signal can indeed be performed as a function of its "intrinsic dimension" rather than according to its cut-off frequency. This chapter sketches the main theoretical concepts surrounding this revolution in sampling theory. We also emphasize its deep affiliation with the concept of "sparsity", now ubiquitous in modern signal processing. The end of this chapter explains what interesting effects this theory may have on some Compressive Imaging applications.
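To make the "sampling below Nyquist" claim concrete, here is a minimal Python sketch of mine (not code from the chapter): a k-sparse signal of length n is measured with m << n random projections and recovered by basis pursuit, cast as a linear program through the usual positive/negative split x = u - v.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 128, 40, 5          # ambient dimension, measurements, sparsity

# A k-sparse signal in the canonical basis
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Random Gaussian measurement matrix, m << n
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# Basis pursuit: min ||x||_1 s.t. Ax = y, as an LP with x = u - v, u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With these sizes (40 measurements for a 5-sparse, length-128 signal) the recovery is exact up to solver precision, well below the 128 samples a Nyquist argument would suggest.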
I like the 8th paragraph, entitled "Conclusion and the Science 2.0 Effect," which lists blogs other than NB as sources of information on CS. Thanks, guys! More precisely, I noted the following at the end:
...Let us conclude this chapter with one important, and somewhat "sociological", observation. The Compressed Sensing paradigm was born in what is often called the "Science 2.0" evolution. This term represents a real boom in the exchange of information in the scientific community, a byproduct of the steady development of the Internet. This recent trend takes several forms. It consists, for instance, in the online publication of preprints before their acceptance in scientific journals (through ArXiv [68] or directly on personal homepages), or, more remarkably, in the writing of scientific blogs regularly fed by researchers and often the source of fruitful online discussions with other scientists. Another aspect of this trend is the free exchange of numerical codes, guaranteeing the reproducibility of the results....
Kronecker products used for compressive measurement systems promise to substantially reduce the size of measurement systems, hence I am always anxious to see developments in that area. Here is today's: Kronecker Compressive Sensing by Marco Duarte and Richard Baraniuk. The abstract reads:
Compressive sensing (CS) is an emerging approach for acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve signals that are multidimensional; in this case, CS works best with representations that encapsulate the structure of such signals in every dimension. We propose the use of Kronecker product matrices in CS for two purposes. First, we can use such matrices as sparsifying bases that jointly model the different types of structure present in the signal. Second, the measurement matrices used in distributed settings can be easily expressed as Kronecker product matrices. The Kronecker product formulation in these two settings enables the derivation of analytical bounds for sparse approximation of multidimensional signals and CS recovery performance, as well as a means to evaluate novel distributed measurement schemes.
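To see what the Kronecker formulation buys, here is a small numpy sketch of mine (the 1-D DCT bases are my stand-ins for whatever basis fits each dimension): a 2-D signal that is sparse in a separable basis is exactly as sparse in the single Kronecker basis acting on the flattened signal, thanks to the identity (A X Bᵀ).flatten() = kron(A, B) · X.flatten() for row-major flattening.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
n = 16

# Orthonormal 1-D DCT basis (columns are basis vectors)
Psi = dct(np.eye(n), norm="ortho", axis=0)

# A 2-D signal that is sparse in the separable DCT basis
C = np.zeros((n, n))
C[rng.integers(0, 4, 5), rng.integers(0, 4, 5)] = rng.standard_normal(5)
X = Psi @ C @ Psi.T                  # synthesize: X = Psi C Psi^T

# Kronecker form: one big basis acting on the flattened signal
Psi_kron = np.kron(Psi, Psi)
c_flat = Psi_kron.T @ X.flatten()    # analysis in the Kronecker basis

# Same coefficients as the separable analysis, up to numerical precision
assert np.allclose(c_flat, (Psi.T @ X @ Psi).flatten())
print("nonzero coefficients:", np.sum(np.abs(c_flat) > 1e-10))
```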
From that paper, I noted the reference to this earlier paper: A MULTISCALE FRAMEWORK FOR COMPRESSIVE SENSING OF VIDEO by Jae Young Park and Michael B. Wakin. The abstract reads:

Compressive Sensing (CS) allows the highly efficient acquisition of many signals that could be difficult to capture or encode using conventional methods. From a relatively small number of random measurements, a high-dimensional signal can be recovered if it has a sparse or near-sparse representation in a basis known to the decoder. In this paper, we consider the application of CS to video signals in order to lessen the sensing and compression burdens in single- and multi-camera imaging systems. In standard video compression, motion compensation and estimation techniques have led to improved sparse representations that are more easily compressible; we adapt these techniques for the problem of CS recovery. Using a coarse-to-fine reconstruction algorithm, we alternate between the tasks of motion estimation and motion-compensated wavelet-domain signal recovery. We demonstrate that our algorithm allows the recovery of video sequences from fewer measurements than either frame-by-frame or inter-frame difference recovery methods.
In a different direction, since reconstruction is expensive, one is always trying to find out whether some features can be evaluated while the measurements are still compressed. Here is some work in that direction: Motion Estimation from Compressed Linear Measurements by Vijayaraghavan Thirumalai and Pascal Frossard. The abstract reads:
This paper presents a novel algorithm for computing the relative motion between images from compressed linear measurements. We propose a geometry-based correlation model that describes the relative motion between images by translational motion of visual features. We focus on the problem of estimating the motion field from a reference image and a highly compressed image given by means of random projections, which are further quantized and entropy coded. We capture the most prominent visual features in the reference image using geometric basis functions. Then, we propose a regularized optimization problem for estimating the corresponding features in the compressed image, and eventually the dense motion field is generated from the local transform of the geometric features. Experimental results show that the proposed scheme estimates an accurate motion field. In addition, when the motion field is used for image prediction, the resulting rate-distortion (RD) performance is better than that of the independent coding solution based on JPEG-2000, which demonstrates the potential of the proposed scheme for distributed coding algorithms.
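As a toy version of the idea, here is a sketch of mine where a single global shift stands in for the paper's feature-based dense motion field: given a known reference signal and only compressed measurements of a shifted copy, the shift can be estimated directly in the measurement domain, since random projections approximately preserve distances.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, true_shift = 256, 32, 17

# Reference signal and a circularly shifted version of it
x_ref = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")
x_shifted = np.roll(x_ref, true_shift)

# We only observe compressed measurements of the shifted signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_shifted

# Estimate the shift by matching candidates in the compressed domain:
# random projections roughly preserve ||Phi(roll(x_ref, d)) - y||
errors = [np.linalg.norm(Phi @ np.roll(x_ref, d) - y) for d in range(n)]
d_hat = int(np.argmin(errors))
print("true shift:", true_shift, "estimated:", d_hat)
```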
BAYESIAN COMPRESSED SENSING IMAGING USING A GAUSSIAN SCALE MIXTURE by George Tzagkarakis and Panagiotis Tsakalides. The abstract reads:
The ease of image storage and transmission in modern applications would be unfeasible without compression, which converts high resolution images into a relatively small set of significant transform coefficients. Due to the specific content of many real-world images they are highly sparse in an appropriate orthonormal basis. The inherent property of compressed sensing (CS) theory of working simultaneously as a sensing and compression protocol, using a small subset of random incoherent projection coefficients, enables a potentially significant reduction in the sampling and computation costs of images, favoring its use in real-time applications which do not require an excellent reconstruction performance. In this paper, we develop a Bayesian CS (BCS) approach for obtaining highly sparse representations of images based on a set of noisy CS measurements, where the prior belief that the vector of projection coefficients should be sparse is enforced by fitting directly its prior probability distribution by means of a Gaussian Scale Mixture (GSM). The experimental results show that our proposed method, when compared with norm-based constrained optimization algorithms, maintains the reconstruction performance, in terms of the reconstruction error and the PSNR, while achieving an increased sparsity using far fewer basis functions.
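The GSM machinery is involved, but its practical flavor can be conveyed with a hedged sketch: under a scale-mixture prior, MAP-style estimation alternates a per-coefficient scale update with a weighted least-squares solve, as in FOCUSS-type iteratively reweighted least squares. This is my simplification, not the authors' exact BCS update.

```python
import numpy as np

def irls_sparse(A, y, lam=1e-3, iters=50, eps=1e-8):
    """Sparse recovery by iteratively reweighted least squares.

    Each iteration fits a Gaussian with per-coefficient scale
    proportional to |x_i| (the scale-mixture view) and re-solves.
    """
    m, n = A.shape
    x = A.T @ y                          # crude initialization
    for _ in range(iters):
        w = np.abs(x) + eps              # current scale estimates
        AW = A * w                       # A @ diag(w)
        # x = W A^T (A W A^T + lam I)^{-1} y
        x = w * (A.T @ np.linalg.solve(AW @ A.T + lam * np.eye(m), y))
    return x

rng = np.random.default_rng(3)
n, m, k = 128, 48, 6
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = irls_sparse(A, A @ x0)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```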
MULTIPLE-MEASUREMENT BAYESIAN COMPRESSED SENSING USING GSM PRIORS FOR DOA ESTIMATION by George Tzagkarakis, Dimitris Milioris and Panagiotis Tsakalides. The abstract reads:
Traditional bearing estimation techniques perform Nyquist-rate sampling of the received sensor array signals and as a result they require high storage and transmission bandwidth resources. Compressed sensing (CS) theory provides a new paradigm for simultaneously sensing and compressing a signal using a small subset of random incoherent projection coefficients, enabling a potentially significant reduction in the sampling and computation costs. In this paper, we develop a Bayesian CS (BCS) approach for estimating target bearings based on multiple noisy CS measurement vectors, where each vector results from projecting the received source signal on a distinct over-complete dictionary. In addition, the prior belief that the vector of projection coefficients should be sparse is enforced by fitting directly the prior probability distribution with a Gaussian Scale Mixture (GSM) model. The experimental results show that our proposed method, when compared with norm-based constrained optimization CS algorithms, as well as with single-measurement BCS methods, improves the reconstruction performance in terms of the detection error, while resulting in an increased sparsity.
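For intuition about the multiple-measurement-vector setting, here is a sketch of a generic greedy MMV solver, simultaneous OMP, which exploits the same joint-sparsity structure (a common row support across measurement vectors); it is a stand-in of mine, not the authors' Bayesian method.

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: recover X with a common row support from Y = A X."""
    m, n = A.shape
    support, R = [], Y.copy()
    for _ in range(k):
        # Score each atom by its total correlation with all residual columns
        scores = np.linalg.norm(A.T @ R, axis=1)
        scores[support] = -np.inf        # don't pick an atom twice
        support.append(int(np.argmax(scores)))
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_s
    X = np.zeros((n, Y.shape[1]))
    X[support] = X_s
    return X

rng = np.random.default_rng(4)
n, m, k, L = 64, 24, 4, 8               # L measurement vectors
A = rng.standard_normal((m, n)) / np.sqrt(m)
X0 = np.zeros((n, L))
X0[rng.choice(n, k, replace=False)] = rng.standard_normal((k, L))
X_hat = somp(A, A @ X0, k)
print("joint recovery exact:", np.allclose(X_hat, X0, atol=1e-8))
```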
MODIFIED COMPRESSIVE SENSING FOR REAL-TIME DYNAMIC MR IMAGING by Wei Lu and Namrata Vaswani. The abstract reads:
In this work, we propose algorithms to recursively and causally reconstruct a sequence of natural images from a reduced number of linear projection measurements taken in a domain that is “incoherent” with respect to the image’s sparsity basis (typically wavelet) and demonstrate their application in real-time MR image reconstruction. For a static version of the above problem, Compressed Sensing (CS) provides a provably exact and computationally efficient solution. But most existing solutions for the actual problem are either offline and non-causal or cannot compute an exact reconstruction (for truly sparse signal sequences), except using as many measurements as those needed for CS. The key idea of our proposed solution (modified-CS) is to design a modification of CS for when a part of the support set is known (available from reconstructing the previous image). We demonstrate the exact reconstruction property of modified-CS on full-size image sequences using far fewer measurements than those required for CS. Greatly improved performance over existing work is demonstrated for approximately sparse signals or noisy measurements.
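The modified-CS idea amounts to a one-line change in a standard solver: penalize the ℓ1 norm only outside the known support T. Here is a hedged sketch using iterative soft thresholding as my stand-in for the authors' solver:

```python
import numpy as np

def modified_cs_ista(A, y, known_support, tau=0.05, iters=500):
    """min ||x_{T^c}||_1 s.t. y ~ Ax, via ISTA with zero threshold on T."""
    n = A.shape[1]
    thresh = np.full(n, tau)
    thresh[list(known_support)] = 0.0    # known entries are not penalized
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(iters):
        z = x + A.T @ (y - A @ x) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - thresh / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(5)
n, m, k = 128, 36, 8
supp = rng.choice(n, k, replace=False)
x0 = np.zeros(n); x0[supp] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
# Suppose the previous frame gave us most of the support correctly
known = supp[: k - 2]
x_hat = modified_cs_ista(A, A @ x0, known)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```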
Error estimates for orthogonal matching pursuit and random dictionaries by Paweł Bechler and Przemysław Wojtaszczyk. The abstract reads:
In this paper we investigate the efficiency of the Orthogonal Matching Pursuit (OMP) for random dictionaries. We concentrate on dictionaries satisfying the Restricted Isometry Property. We also introduce a stronger Homogenous Restricted Isometry Property which we show is satisfied with overwhelming probability for random dictionaries used in compressed sensing. For these dictionaries we obtain upper estimates for the error of approximation by OMP in terms of the error of the best n-term approximation (Lebesgue-type inequalities). We also present and discuss some open problems about OMP. This is a development of recent results obtained by D.L. Donoho, M. Elad and V.N. Temlyakov.
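For reference, the OMP algorithm analyzed in the paper is short enough to state in full; here is a standard implementation of mine:

```python
import numpy as np

def omp(A, y, n_iter):
    """Orthogonal Matching Pursuit: greedily build a support for y ~ Ax."""
    support, r = [], y.copy()
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(A.T @ r)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef          # residual orthogonal to chosen atoms
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(6)
n, m, k = 256, 64, 8
x0 = np.zeros(n); x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # RIP holds w.h.p. at this scaling
x_hat = omp(A, A @ x0, k)
print("exact recovery:", np.allclose(x_hat, x0, atol=1e-8))
```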
An ℓ1 criterion for dictionary learning by subspace identification by Florent Jaillet, Rémi Gribonval, Mark Plumbley and Hadi Zayyani. The abstract reads:
We propose an ℓ1 criterion for dictionary learning for sparse signal representation. Instead of directly searching for the dictionary vectors, our dictionary learning approach identifies vectors that are orthogonal to the subspaces in which the training data concentrate. We study conditions on the coefficients of training data that guarantee that ideal normal vectors deduced from the dictionary are local optima of the criterion. We illustrate the behavior of the criterion on a 2D example, showing that the local minima correspond to ideal normal vectors when the number of training data is sufficient. We conclude by describing an algorithm that can be used to optimize the criterion in higher dimension.
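The 2-D example in the paper is easy to reproduce in spirit; in this sketch of mine, training data concentrate on a line in the plane, and minimizing the ℓ1 criterion Σᵢ |wᵀxᵢ| over unit vectors w recovers the normal to that line.

```python
import numpy as np

rng = np.random.default_rng(7)

# Training data concentrated on a 1-D subspace of R^2 (a line), plus noise
direction = np.array([np.cos(0.4), np.sin(0.4)])
X = np.outer(rng.standard_normal(200), direction) + 0.01 * rng.standard_normal((200, 2))

# Minimize sum_i |w . x_i| over the unit circle by brute-force search
angles = np.linspace(0, np.pi, 2000)
W = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # candidate unit vectors
crit = np.abs(X @ W.T).sum(axis=0)                       # l1 criterion per candidate
w_hat = W[np.argmin(crit)]

print("recovered normal:", w_hat)
print("dot with subspace direction (should be ~0):", w_hat @ direction)
```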
"Texas Hold " Em Algorithms for Distributed Compressive Sensing by Stephen Schnelle, Jason N. Laska, Chinmay Hegde, Marco F. Duarte, Mark A. Davenport, and Richard Baraniuk. The abstract reads:
This paper develops a new class of algorithms for signal recovery in the distributed compressive sensing (DCS) framework. DCS exploits both intra-signal and inter-signal correlations through the concept of joint sparsity to further reduce the number of measurements required for recovery. DCS is well suited for sensor network applications due to its universality, computational asymmetry, tolerance to quantization and noise, and robustness to measurement loss. In this paper we propose recovery algorithms for the sparse common and innovation joint sparsity model. Our approach leads to a class of efficient algorithms, the Texas Hold 'Em algorithms, which are scalable both in terms of communication bandwidth and computational complexity.

This is the latest version of Best Basis Compressed Sensing by Gabriel Peyré. The abstract reads:
This paper proposes a best basis extension of compressed sensing recovery. Instead of regularizing the compressed sensing inverse problem with a sparsity prior in a fixed basis, our framework makes use of sparsity in a tree-structured dictionary of orthogonal bases. A new iterative thresholding algorithm performs both the recovery of the signal and the estimation of the best basis. The resulting reconstruction from compressive measurements optimizes the basis to the structure of the sensed signal. Adaptivity is crucial to capture the regularity of complex natural signals. Numerical experiments on sounds and geometrical images indeed show that this best basis search improves the recovery with respect to fixed sparsity priors.
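A much-reduced sketch of the adaptivity idea (mine; the paper searches a tree-structured dictionary of orthobases, here there are just two fixed candidates): run the same sparse recovery in each basis and keep the one whose solution is sparsest.

```python
import numpy as np
from scipy.fft import dct

def ista(M, y, tau=0.02, iters=400):
    """Plain iterative soft thresholding for min 0.5||y - Mc||^2 + tau||c||_1."""
    L = np.linalg.norm(M, 2) ** 2
    c = np.zeros(M.shape[1])
    for _ in range(iters):
        z = c + M.T @ (y - M @ c) / L
        c = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)
    return c

rng = np.random.default_rng(8)
n, m = 64, 32
bases = {"dirac": np.eye(n), "dct": dct(np.eye(n), norm="ortho", axis=0)}

# Signal sparse in the DCT basis; the recovery should pick that basis
c0 = np.zeros(n); c0[rng.choice(12, 4, replace=False)] = rng.standard_normal(4)
x0 = bases["dct"] @ c0
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x0

# Recover in each candidate basis; keep the one with the sparsest solution
results = {}
for name, Psi in bases.items():
    c = ista(Phi @ Psi, y)
    results[name] = ((np.abs(c) > 1e-3).sum(), Psi @ c)
best = min(results, key=lambda b: results[b][0])
print("selected basis:", best)
print("relative error:", np.linalg.norm(results[best][1] - x0) / np.linalg.norm(x0))
```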
Meanwhile, the most recent version of Compressed Genotyping is out.
In the area of sparsity, we have the following papers:
IMAGE MODELING AND ENHANCEMENT VIA STRUCTURED SPARSE MODEL SELECTION by Guoshen Yu, Guillermo Sapiro, and Stéphane Mallat. The abstract reads:
An image representation framework based on structured sparse model selection is introduced in this work. The corresponding modeling dictionary is comprised of a family of learned orthogonal bases. For an image patch, a model is first selected from this dictionary through linear approximation in a best basis, and the signal estimation is then calculated with the selected model. The model selection leads to a guaranteed near optimal denoising estimator. The degree of freedom in the model selection is equal to the number of the bases, typically about 10 for natural images, and is significantly lower than with traditional overcomplete dictionary approaches, stabilizing the representation. For an image patch of size √N × √N, the computational complexity of the proposed framework is O(N²), typically 2 to 3 orders of magnitude faster than estimation in an overcomplete dictionary. The orthogonal bases are adapted to the image of interest and are computed with a simple and fast procedure. State-of-the-art results are shown in image denoising, deblurring, and inpainting.
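A hedged toy of the model selection step (mine; in the paper the bases are learned and the selection is penalized): given a noisy patch and a small family of orthobases, compute the best k-term approximation in each basis and select the basis that fits the observation best.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(9)
n, k, sigma = 64, 5, 0.05

# A small family of orthonormal models (stand-ins for the learned bases)
bases = {"dirac": np.eye(n), "dct": dct(np.eye(n), norm="ortho", axis=0)}

# Clean patch sparse in the DCT model, observed in noise
c0 = np.zeros(n); c0[rng.choice(10, k, replace=False)] = rng.standard_normal(k)
x = bases["dct"] @ c0
y = x + sigma * rng.standard_normal(n)

def k_term_estimate(Psi, y, k):
    """Best k-term approximation of y in the orthobasis Psi."""
    c = Psi.T @ y
    keep = np.argsort(np.abs(c))[-k:]
    c_k = np.zeros_like(c); c_k[keep] = c[keep]
    return Psi @ c_k

# Model selection: pick the basis whose k-term approximation fits y best
estimates = {name: k_term_estimate(Psi, y, k) for name, Psi in bases.items()}
best = min(estimates, key=lambda name: np.linalg.norm(y - estimates[name]))
print("selected model:", best)
print("denoising error:", np.linalg.norm(estimates[best] - x))
```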
Sparsity-driven People Localization Algorithm: Evaluation in Crowded Scenes Environments by Alexandre Alahi, Laurent Jacques, Yannick Boursier and Pierre Vandergheynst. The abstract reads:
We propose to evaluate our sparsity-driven people localization framework on crowded complex scenes. The problem is recast as a linear inverse problem. It relies on deducing an occupancy vector, i.e. the discretized occupancy of people on the ground, from the noisy binary silhouettes observed as foreground pixels in each camera. This inverse problem is regularized by imposing a sparse occupancy vector, i.e. made of few non-zero elements, while a particular dictionary of silhouettes linearly maps these non-empty grid locations to the multiple silhouettes viewed by the camera network. The proposed approach is (i) generic to any scene of people, i.e. people are located in low and high density crowds, (ii) scalable to any number of cameras and already working with a single camera, (iii) unconstrained with respect to the scene surface to be monitored. Qualitative and quantitative results are presented on the PETS 2009 dataset. The proposed algorithm detects people in high density crowds, and counts and tracks them given severely degraded foreground silhouettes.
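A toy version of the inverse problem, as a sketch of mine with a 1-D ground grid and an idealized silhouette dictionary: each grid cell maps to a binary silhouette column, the camera observes a noisy superposition, and a greedy sparse solver deduces the occupancy vector.

```python
import numpy as np

rng = np.random.default_rng(10)
n_cells, n_pix, n_people = 40, 300, 3

# Dictionary: column g is the binary silhouette a person at cell g would cast
D = np.zeros((n_pix, n_cells))
for g in range(n_cells):
    start = int(g * n_pix / n_cells)
    D[start:start + 20, g] = 1.0             # idealized 20-pixel silhouette

# True occupancy: a few occupied cells; observed foreground is noisy
occ = np.zeros(n_cells)
occ[rng.choice(n_cells, n_people, replace=False)] = 1.0
fg = np.clip(D @ occ + 0.3 * rng.standard_normal(n_pix), 0, 1)

# Greedy recovery of a sparse occupancy vector (OMP-style deflation)
support, r = [], fg.copy()
for _ in range(n_people):
    support.append(int(np.argmax(D.T @ r)))
    coef, *_ = np.linalg.lstsq(D[:, support], fg, rcond=None)
    r = fg - D[:, support] @ coef

print("true cells:     ", sorted(np.flatnonzero(occ)))
print("recovered cells:", sorted(support))
```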