I lied yesterday; today's entry is the longest one. On MathOverflow, I found a question related to compressive sensing, Sparse approximate representation of a collection of vectors, and was impressed by the quality of the crowd responding to it. One of the respondents is Suresh Venkatasubramanian, the author, among other things, of the now famous Geomblog. Suresh, as it happens, was the person who introduced Emmanuel Candès at SODA in Austin last week. After much thought about the text of the introduction, he decided to go with a limerick.
I just found the following MS thesis, entitled Compressive Sampling for PPM and FSK Modulated Signals, by Shahzad Sarwar Gishkori. The abstract of the thesis reads:
The efficiency of Analog-to-Digital Converters (ADCs) has always been an issue of concern, especially when it comes to sampling wideband signals, which require extremely high sampling rates. As wideband systems move to the forefront of digital communications, the need to reduce the sampling rates of ADCs while still maintaining exact reconstruction is becoming ever more pressing. In this thesis we present the use of a newly developed technique to reduce sampling rates well below the Nyquist rate. We explore the combination of Compressive Sampling (CS) with Pulse Position Modulation (PPM) and Frequency Shift Keying (FSK) modulation schemes. CS has been suggested for 'sparse signals': sparsity allows a signal to be represented in far fewer dimensions. We evaluate the suitability of this technique for PPM and FSK modulated signals in multipath fading environments so as to reduce the complexity at the receiver side. We detect the signals without having to first estimate the channel. We present scenarios where we can achieve the most from the combination of CS and PPM/FSK. We extend this framework to Ultra Wide Band (UWB) applications. We also give theoretical expressions for the error probabilities of our signal models. The real challenge in using CS is to reconstruct the signal from its reduced dimensions. In this respect, we have opted for the Orthogonal Matching Pursuit (OMP) algorithm in most cases; nonetheless, we elaborate on other available reconstruction methods as well.
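Since OMP does the heavy lifting in the thesis, here is a minimal, generic sketch of orthogonal matching pursuit (my own illustration, not the thesis code; the matrix A, measurements y, and sparsity level k below are toy assumptions):

import numpy as np

def omp(A, y, k):
    # Greedy orthogonal matching pursuit: recover a k-sparse x from y ~= A x.
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit by least squares on the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

# Toy check: recover a 4-sparse vector from Gaussian random measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 80, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(omp(A, A @ x_true, k) - x_true))  # should be ~0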
I believe I did not link to these SAMPTA papers. Here they are now: Compressed sensing signal models - to infinity and beyond? by Thomas Blumensath, Mike Davies. The abstract reads:
Compressed sensing is an emerging signal acquisition technique that enables signals to be sampled well below the Nyquist rate, given a finite-dimensional signal with a sparse representation in some orthonormal basis. In fact, sparsity in an orthonormal basis is only one possible signal model that allows for sampling strategies below the Nyquist rate. We discuss some recent results for more general signal models based on unions of subspaces, which allow us to consider more general structured representations. These include classical sparse signal models and finite rate of innovation systems as special cases. We consider the dimensionality conditions for two aspects of the compressed sensing inverse problem: the existence of one-to-one maps to lower-dimensional observation spaces and the smoothness of the inverse map. On the surface, Lipschitz smoothness of the inverse map appears to limit the applicability of compressed sensing to infinite-dimensional signal models. We therefore discuss conditions under which smooth inverse maps are possible even in infinite dimensions. Finally, we conclude by mentioning some recent work which develops these ideas further, allowing the theory to be extended beyond exact representations to structured approximations.
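For readers new to the union-of-subspaces model, the setup can be summarized as follows (standard notation, my addition, not quoted from the paper): the signal lives in a union of subspaces, and the sampling operator $\Phi$ must be bi-Lipschitz on that union, a natural generalization of the restricted isometry property:

$$x \in \mathcal{U} = \bigcup_{j} \mathcal{S}_j, \qquad \alpha \,\|x_1 - x_2\|^2 \;\le\; \|\Phi(x_1 - x_2)\|^2 \;\le\; \beta \,\|x_1 - x_2\|^2 \quad \text{for all } x_1, x_2 \in \mathcal{U},$$

with $0 < \alpha \le \beta$; the lower bound gives a one-to-one map, and the ratio $\beta/\alpha$ controls the smoothness of the inverse.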
Sampling of Sparse Signals in Fractional Fourier Domain by Ayush Bhandari, Pina Marziliano. The abstract reads:
In this paper, we formulate the problem of sampling sparse signals in the fractional Fourier domain. The fractional Fourier transform (FrFT) can be seen as a generalization of the classical Fourier transform. Extending Shannon's sampling theorem to the class of fractional bandlimited signals shows its association with a Nyquist-like bound, which implies that signals with a non-bandlimited representation in the FrFT domain cannot be sampled this way. We prove that, under suitable conditions, it is possible to sample sparse (in time) signals by using the Finite Rate of Innovation (FRI) signal model. In particular, we propose a uniform sampling and reconstruction procedure for a periodic stream of Diracs, which has a non-bandlimited representation in the FrFT domain. This generalizes the FRI sampling and reconstruction scheme from the Fourier domain to the FrFT domain.
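For reference, here is one common convention for the FrFT of angle $\alpha$ (added for context; conventions vary, and the paper's normalization may differ):

$$F_\alpha\{f\}(u) = \int_{-\infty}^{\infty} f(t)\, K_\alpha(t,u)\, dt, \qquad K_\alpha(t,u) = \sqrt{\frac{1 - i\cot\alpha}{2\pi}}\; \exp\!\left( i\,\frac{t^2+u^2}{2}\cot\alpha \;-\; i\,t u \csc\alpha \right),$$

valid for $\alpha$ not a multiple of $\pi$; at $\alpha = \pi/2$ the kernel reduces to $e^{-itu}/\sqrt{2\pi}$ and one recovers the ordinary Fourier transform.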
and Estimating Signals With Finite Rate of Innovation From Noisy Samples: A Stochastic Algorithm by Vincent Y. F. Tan, Vivek K. Goyal. The abstract reads:
As an example of the concept of rate of innovation, signals that are linear combinations of a finite number of Diracs per unit time can be acquired by linear filtering followed by uniform sampling. However, in reality, samples are not noiseless. In a recent paper, we introduced a novel stochastic algorithm to reconstruct a signal with finite rate of innovation from its noisy samples. Even though variants of this problem have been approached previously, satisfactory solutions are only available for certain classes of sampling kernels, for example kernels which satisfy the Strang–Fix condition. In our paper, we considered the infinite-support Gaussian kernel, which does not satisfy the Strang–Fix condition. Other classes of kernels can be employed. Our algorithm is based on Gibbs sampling, a Markov chain Monte Carlo (MCMC) method. This paper summarizes the algorithm and provides numerical simulations that demonstrate its accuracy and robustness.
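The paper's sampler itself is model-specific; purely to illustrate the Gibbs-sampling mechanism it builds on, here is the textbook two-variable example (my own sketch, unrelated to the FRI model):

import numpy as np

def gibbs_bivariate_gaussian(rho, n_iter=5000, seed=0):
    # Gibbs sampling for (x, y) ~ N(0, [[1, rho], [rho, 1]]):
    # alternately draw each coordinate from its exact conditional,
    # x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x.
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    samples = np.empty((n_iter, 2))
    sd = np.sqrt(1.0 - rho ** 2)
    for i in range(n_iter):
        x = rng.normal(rho * y, sd)   # draw x given y
        y = rng.normal(rho * x, sd)   # draw y given x
        samples[i] = (x, y)
    return samples

s = gibbs_bivariate_gaussian(rho=0.8)
print(np.corrcoef(s[1000:].T)[0, 1])  # close to 0.8 after burn-in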
In a different direction, introducing us to the random coordinate descent method and its convergence properties, we have Efficiency of coordinate descent methods on huge-scale optimization problems by Yurii Nesterov. The abstract reads:
In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial updates of the decision variables. For these methods, we prove global estimates for the rate of convergence. Surprisingly enough, for certain classes of objective functions, our results are better than the standard worst-case bounds for deterministic algorithms. We present constrained and unconstrained versions of the method, as well as an accelerated variant. Our numerical tests confirm the high efficiency of this technique on problems of very large size.
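A minimal sketch of randomized coordinate descent on a least-squares objective (my generic illustration with a toy problem, not Nesterov's implementation):

import numpy as np

def random_coordinate_descent(A, b, n_iter=20000, seed=0):
    # Minimize 0.5 * ||A x - b||^2 one randomly chosen coordinate at a time.
    # Each step exactly minimizes over that coordinate and touches only one
    # column of A -- the cheap "partial update" the abstract refers to.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    residual = A @ x - b            # maintained incrementally below
    col_sq = (A ** 2).sum(axis=0)   # coordinate-wise curvature ||A_j||^2
    for _ in range(n_iter):
        j = rng.integers(n)
        step = (A[:, j] @ residual) / col_sq[j]  # partial gradient / curvature
        x[j] -= step
        residual -= step * A[:, j]  # keep residual = A x - b consistent
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
x = random_coordinate_descent(A, b)
print(np.linalg.norm(A.T @ (A @ x - b)))  # gradient norm, should be near 0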
In the same issue we also have A Fast Algorithm for Total Variation Image Reconstruction from Random Projections by Yunhai Xiao, Junfeng Yang. The abstract reads:

Total variation (TV) regularization is popular in image restoration and reconstruction due to its ability to preserve image edges. To date, most research on TV models has concentrated on image restoration from blurry and noisy observations, while discussions of image reconstruction from random projections are relatively few. In this paper, we propose, analyze, and test a fast alternating minimization algorithm for image reconstruction from random projections via solving a TV regularized least-squares problem. The per-iteration cost of the proposed algorithm involves a linear-time shrinkage operation, two matrix-vector multiplications, and two fast Fourier transforms. Convergence, certain finite convergence, and $q$-linear convergence results are established, which indicate that the asymptotic convergence speed of the proposed algorithm depends on the spectral radii of certain submatrices. Moreover, to speed up convergence and enhance robustness, we suggest an accelerated scheme based on an inexact alternating direction method. We present experimental results comparing the proposed algorithm with an existing one, which indicate that it is stable, efficient, and competitive with TwIST, a state-of-the-art algorithm for solving TV regularization problems.
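The "linear time shrinkage operation" in that per-iteration cost is, in solvers of this family, typically the soft-thresholding operator; a one-liner sketch (my illustration, not the paper's code):

import numpy as np

def soft_threshold(v, tau):
    # Soft-thresholding (shrinkage), the prox of tau * ||.||_1:
    # shrinks each entry toward zero by tau; entries below tau vanish.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

print(soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5))
# -> approximately [-1.5, 0.0, 0.0, 1.0]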
A Pure L1-norm Principal Component Analysis by J.P. Brooks, Jose H. Dula, E.L. Boone. The abstract reads:

The $L_1$ norm has been applied in numerous variations of principal component analysis (PCA). $L_1$-norm PCA is an attractive alternative to traditional $L_2$-based PCA because it can impart robustness in the presence of outliers and is indicated for models where standard Gaussian assumptions about the noise may not apply. Of all the previously proposed PCA schemes that recast PCA as an optimization problem involving the $L_1$ norm, none provide globally optimal solutions in polynomial time. This paper proposes an $L_1$-norm PCA procedure based on the efficient calculation of the optimal solution of the $L_1$-norm best-fit hyperplane problem. We present a procedure called $L_1$-PCA$^*$ based on the application of this idea that fits data to hyperplanes of successively smaller dimension, the orthogonal vectors of which are successive directions of minimum variation. The procedure is implemented and tested on a diverse problem suite. Our tests establish conditions for which $L_1$-PCA$^*$ is suitable.

I also found the following on arXiv today:
Regularized Modified-BPDN for compressive sensing with partially known support by Wei Lu, Namrata Vaswani. The abstract reads:
We study the problem of reconstructing a sparse signal from a limited number of noisy measurements when part of its support and the signal estimate on it are known. The support and signal estimate can be obtained from prior knowledge; e.g., in a real-time dynamic MRI application, they could be the support and signal estimate from the previous time instant. We propose regularized Modified Basis Pursuit Denoising (regularized modified-BPDN) and bound its reconstruction error. Since the bounds are computable, we are able to use Monte Carlo simulations to compare our average bounds with those for modified-BPDN and for LS-CS under various conditions, and show that (a) the regularized modified-BPDN bounds are always smaller than or equal to those of the others, and (b) they are much smaller when the available number of measurements is small.
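If I read the construction right, with known partial support $T$, its complement $T^c$, and a prior estimate $\hat{\mu}_T$ on $T$, the objective is of the following flavor (my paraphrase; the exact weights and notation in the paper may differ):

$$\min_{b}\ \tfrac{1}{2}\,\| y - A b \|_2^2 \;+\; \gamma\, \| b_{T^c} \|_1 \;+\; \tfrac{\lambda}{2}\, \| b_T - \hat{\mu}_T \|_2^2,$$

i.e., BPDN that penalizes sparsity only outside the known support, while regularizing the estimate on $T$ toward its prior value.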
While we were just talking about quantization the other day (see Q&A with Esther Rodriguez-Villegas on a Compressive Sensing EEG), here is some help on that front: Sobolev Duals for Random Frames and Sigma-Delta Quantization of Compressed Sensing Measurements by Sinan Güntürk, Alex Powell, Rayan Saab, Özgur Yilmaz. The abstract reads:

Quantization of compressed sensing measurements is typically justified by the robust recovery results of Candès, Romberg and Tao, and of Donoho. These results guarantee that if a uniform quantizer of step size $\delta$ is used to quantize $m$ measurements $y = \Phi x$ of a $k$-sparse signal $x \in \mathbb{R}^N$, where $\Phi$ satisfies the restricted isometry property, then the approximate recovery $x^\#$ via $\ell_1$-minimization is within $O(\delta)$ of $x$. The simplest and commonly assumed approach is to quantize each measurement independently. In this paper, we show that if instead an $r$th-order $\Sigma\Delta$ quantization scheme with the same output alphabet is used to quantize $y$, then there is an alternative recovery method via Sobolev dual frames which guarantees a reduction of the approximation error by a factor of $(m/k)^{(r-1/2)\alpha}$ for any $0 < \alpha < 1$, if $m \gtrsim_r k (\log N)^{1/(1-\alpha)}$. The result holds with high probability on the initial draw of the measurement matrix $\Phi$ from the Gaussian distribution, and uniformly for all $k$-sparse signals $x$ that satisfy a mild size condition on their supports.
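To make the $\Sigma\Delta$ idea concrete, here is a minimal first-order scheme (the paper treats $r$th-order schemes; this is the textbook $r = 1$ version, my own sketch):

import numpy as np

def sigma_delta_first_order(y, delta):
    # First-order Sigma-Delta quantization with a uniform alphabet of step delta.
    # Instead of rounding each y_i independently, the running error u is fed
    # back into the next step, pushing quantization noise to high frequencies.
    u = 0.0
    q = np.empty_like(y)
    for i, yi in enumerate(y):
        q[i] = delta * np.round((yi + u) / delta)  # quantize input plus state
        u = u + yi - q[i]                          # update internal state
    return q

rng = np.random.default_rng(0)
y = rng.standard_normal(100)
q = sigma_delta_first_order(y, delta=0.25)
print(np.max(np.abs(np.cumsum(y - q))))  # the state stays bounded by ~delta/2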
I also found a series of courses, a summary, and upcoming meetings. Ben Recht is teaching Convex Geometry in High-Dimensional Data Analysis, CS838 Topics In Optimization. His interesting introduction slides are here. They make a notable mention of recommender systems.
Here is a short summary: Compressed Sensing: Basic results and self contained proofs by Shai Shalev-Shwartz. The abstract reads:
Compressed sensing is a linear dimensionality reduction technique which utilizes a prior assumption that the original vector is (approximately) sparse in some basis. In this note we summarize some of the known results and provide self-contained, easy-to-follow proofs.
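For quick reference, the basic recovery program summarized in such notes is the basis pursuit linear program (standard form, my addition): given $m < N$ measurements $y = Ax$ of a sparse $x$, solve

$$\hat{x} = \arg\min_{z \in \mathbb{R}^N} \|z\|_1 \quad \text{subject to} \quad A z = y.$$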
The Banff International Research Station for Mathematical Innovation and Discovery is organizing a meeting on:
Sparse and Low Rank Approximation
Arriving Sunday, March 6 and departing Friday, March 11, 2011
Organizers: Gitta Kutyniok (University of Osnabrueck), Holger Rauhut (University of Bonn), Joel Tropp (California Institute of Technology), Ozgur Yilmaz (University of British Columbia).
Objectives
The workshop will focus on the following three subtopics.
Sparse Approximation
The field of sparse approximation is concerned with the problem of representing a target vector using a short linear combination of vectors drawn from a large, fixed collection called a dictionary. The earliest applications arose in approximation theory for functions and operators. A closely related classical problem is variable selection in regression. Modern applications include machine learning, signal and image processing, and numerical analysis.
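In symbols (a standard formulation, added here for concreteness): given a dictionary $D \in \mathbb{R}^{d \times n}$ with $n \gg d$ and a target $b$, one seeks

$$\min_{x \in \mathbb{R}^n} \|x\|_0 \quad \text{subject to} \quad \|b - D x\|_2 \le \epsilon,$$

where $\|x\|_0$ counts the nonzero entries of $x$.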
Although sparse approximation is computationally hard in the general case, contemporary research has identified situations where tractable algorithms can provably solve sparse approximation problems. Some of the most interesting recent advances concern dictionary learning and sparse approximation of random signals.
Compressed Sensing
Compressed sensing is a new field which has seen enormous interest and growth. Quite surprisingly, it predicts that sparse high-dimensional signals can be recovered from what was previously considered highly incomplete measurements (information) by using efficient algorithms: convex relaxation methods, such as $\ell_1$-minimization, and greedy algorithms. This discovery has led to a fundamentally new approach to certain signal and image recovery problems. Applications include signal acquisition, A/D conversion, radar, astronomy and more.
Current research focuses on efficient algorithms as well as on providing recovery guarantees for certain measurement matrices. So far, mainly random constructions of good measurement matrices are known in this context, and for this reason mathematical research in compressive sensing exploits many tools from probability theory and the geometry of Banach spaces.
The workshop shall initiate a fertile discussion among the different groups of mathematicians, statisticians, and computer scientists, and experts from application areas: on the one hand, to advance research in this highly topical area by combining different techniques, and on the other hand, to bring problems from application areas to the attention of more theoretically oriented researchers.
Matrix identification: rank minimization, algorithms, and recovery results
Rank minimization has its roots in image compression algorithms, learning theory, and collaborative filtering. In a nutshell, collaborative filtering is the task of making automatic predictions about the interests of a user by collecting taste information from many users. A simple mathematical model for this setup can be described by the matrix completion problem, where one has only a few observations of the entries of a low-rank matrix and tries to complete the missing entries. More generally, one may consider the problem of minimizing the rank of a matrix subject to some general linear constraint. A quite surprising recent result due to Candès and Tao in this direction shows that many $n \times n$ matrices of rank $r$ can be recovered from only $\mathcal{O}(n r \log(n))$ observations of their entries via nuclear norm minimization (the $\ell_1$-norm of the singular values). It turns out that many of the ideas and techniques from compressive sensing can be transferred to the rank-minimization problem or other matrix recovery problems.
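Written out (standard form, my addition): given observed entries $M_{ij}$ for $(i,j) \in \Omega$ of a low-rank matrix $M$, nuclear norm minimization solves

$$\min_{X \in \mathbb{R}^{n \times n}} \|X\|_* \quad \text{subject to} \quad X_{ij} = M_{ij} \ \text{for all } (i,j) \in \Omega, \qquad \|X\|_* = \sum_k \sigma_k(X).$$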
This direction is currently at an early stage, and far more developments can be expected in the near future. Such a workshop would therefore be a unique opportunity to discuss the most recent results, stimulate new research on the interplay between sparse approximation, compressed sensing, and low-rank matrix recovery, and, in particular, formulate open questions, thereby significantly influencing future research in this field.
Application Areas
We would like to put a focus on the following applications of sparse approximation.
- Astronomical image and signal processing
- Radar imaging
- Wireless communication
- Seismology
We will invite experts from these application fields in order to stimulate discussions between practitioners and mathematicians, and thereby initiate new research directions and new research collaborations.
There will also be a Workshop on Pseudo-randomness in Mathematical Structures at the IAS
June 14 - 18, 2010
Organizers: Jean Bourgain, Russell Impagliazzo, Peter Sarnak and Avi Wigderson
A common theme in many seemingly disjoint mathematical areas is to quantify the extent to which natural deterministic mathematical objects have properties similar to those of random objects. For example, is the variance of the distribution of primes approximately that expected for a random set of the same density? When are multiplication and addition over fields "uncorrelated"? When do the Cayley graphs for groups have the expansion of random graphs? How random are the orbits of natural dynamical systems?
On the other side, computer science applications often require deterministic mathematical structures with such pseudo-random properties. For example, graphs with random-like expansion are used for applications ranging from distributed protocols for leader selection to network routing to sorting. In fact, the central question of computational complexity can be viewed as deterministically constructing a function that is random-like with respect to the ability of small circuits to compute it.
This workshop will aim to highlight parallels and connections between approaches to pseudo-randomness in different areas of mathematics, and between the mathematical and computational approaches to pseudo-randomness. Invited speakers will be eminent researchers in such areas as additive number theory, algorithm design, analysis, combinatorics, computational complexity, cryptography, dynamical systems, ergodic theory, group theory, number theory, and statistical mechanics. The first two days of the workshop will be devoted to tutorials giving an overview of the role of pseudo-randomness in some of the areas mentioned. These tutorials will not require knowledge of any particular disciplines, and will be accessible to graduate students in all areas of mathematics and computer science. The subsequent three days will highlight exciting related research developments, but will also be accessible to a mixed audience of computer scientists and mathematicians. A list of confirmed speakers and a tentative schedule may be found at http://www.math.ias.edu/tentative program.
The workshop will take place at the Institute from June 14-18, 2010. Most events will be held in Wolfensohn Hall. Researchers in all related areas are welcome to participate, and we will have some funding to support graduate students.
The workshop is funded by the National Science Foundation grant DMS-0835373, "Pseudo-randomness in mathematics and computer science".
Inquiries should be addressed to: pseudo.workshop@gmail.com.
Credit photo: the original Marie Claire magazine cover as uncovered by Photoshopdisasters, and the same photo analyzed by the error level analysis software. The site is at: http://www.errorlevelanalysis.com/. I wonder if there would be a more obvious way to do this with random projections.