Monday, March 07, 2011

CS: The Long Post of the Week

In-Q-Tel is providing some venture funding to InView Technology. Here is InView's careers site. Some people are churning along the TRL scale.

There was a Summer School on the other side of the equator this past week: Sparsity and Model Selection, February 28 to March 4, 2011, Montevideo, Uruguay.
This summer school consists of a series of accessible lectures on model selection and sparse models. Model selection is an important issue in modern statistics. Besides the classical results using penalties based on the size of the model (Cp, AIC, BIC), a new kind of technique has appeared with L1 penalties and the LASSO (Tibshirani, 1996) and the work of Donoho and Candès on sparse models. We hope for the participation of both graduate students coming from regional universities (Uruguay, Argentina and others) and researchers interested in the topic.
Presentations can be accessed on the site.
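For a quick feel of the two families of techniques the school covers, here is a small Python sketch contrasting classical size-of-model criteria (AIC/BIC) with the L1/LASSO penalty; the use of scikit-learn and the synthetic problem are my own assumptions, not material from the school:

import numpy as np
from sklearn.linear_model import Lasso, LassoLarsIC

# Synthetic sparse regression: 10 informative variables out of 50.
rng = np.random.default_rng(0)
n, p, k = 100, 50, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = rng.uniform(1, 3, k)
y = X @ beta + 0.5 * rng.standard_normal(n)

# Classical route: penalties based on model size (AIC/BIC along a Lasso path).
for criterion in ("aic", "bic"):
    ic = LassoLarsIC(criterion=criterion).fit(X, y)
    print(criterion.upper(), "selects", int(np.sum(ic.coef_ != 0)), "variables")

# L1 route: the LASSO (Tibshirani, 1996) at a fixed regularization level.
lasso = Lasso(alpha=0.1).fit(X, y)
print("Lasso selects", int(np.sum(lasso.coef_ != 0)), "variables")

On an easy problem like this both routes land on supports of similar size; the differences the school is after show up with correlated designs and high dimensions.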

We have several papers today, starting with the following: Accelerated Iterative Hard Thresholding by Thomas Blumensath. The abstract reads:
The iterative hard thresholding algorithm (IHT) is a powerful and versatile algorithm for compressed sensing and other sparse inverse problems. The standard IHT implementation faces two challenges when applied to practical problems. The step size parameter has to be chosen appropriately and, as IHT is based on a gradient descent strategy, convergence is only linear. Whilst the choice of the step size can be done adaptively as suggested previously, this letter studies the use of acceleration methods to improve convergence speed. Based on recent suggestions in the literature, we show that a host of acceleration methods are also applicable to IHT. Importantly, we show that these modifications not only significantly increase the observed speed of the method, but also satisfy the same strong performance guarantees enjoyed by the original IHT method.
The code that can reproduce the figures of this paper can be found here. I'll add it to the reconstruction section of the big picture soon.
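For readers who want the flavor of the baseline algorithm before downloading that code, here is a minimal NumPy sketch of plain (unaccelerated) IHT; the conservative step size rule and the test problem are my own assumptions:

import numpy as np

def iht(A, y, k, iters=300, mu=None):
    # Plain iterative hard thresholding: x <- H_k(x + mu * A^T (y - A x)).
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step (an assumption)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + mu * A.T @ (y - A @ x)        # gradient step on ||y - Ax||^2 / 2
        x[np.argsort(np.abs(x))[:-k]] = 0.0   # H_k: keep the k largest entries
    return x

rng = np.random.default_rng(1)
m, n, k = 80, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)  # i.i.d. Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = iht(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

The acceleration studied in the paper keeps the hard-thresholding projection and replaces the plain gradient step with faster update schemes.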

Model Identification of a Network as Compressing Sensing by Donatello Materassi, G. Innocenti, Laura Giarré, Murti Salapaka. The abstract reads:
In many applications, it is important to derive information about the topology and the internal connections of dynamical systems interacting together. Examples can be found in fields as diverse as Economics, Neuroscience and Biochemistry. The paper deals with the problem of deriving a descriptive model of a network, collecting the node outputs as time series with no use of a priori insight on the topology, and unveiling an unknown structure as the estimate of a "sparse Wiener filter". A geometric interpretation of the problem in a pre-Hilbert space for wide-sense stochastic processes is provided. We cast the problem as the optimization of a cost function where a set of parameters are used to operate a trade-off between accuracy and complexity in the final model. The problem of reducing the complexity is addressed by fixing a certain degree of sparsity and finding the solution that "better" satisfies the constraints according to the criterion of approximation. Applications starting from real data and numerical simulations are provided.
Sparse Volterra and Polynomial Regression Models: Recoverability and Estimation by Vassilis Kekatos, Georgios B. Giannakis. The abstract reads:
Volterra and polynomial regression models play a major role in nonlinear system identification and inference tasks. Exciting applications ranging from neuroscience to genome-wide association analysis build on these models with the additional requirement of parsimony. This requirement has high interpretative value, but unfortunately cannot be met by least-squares based or kernel regression methods. To this end, compressed sampling (CS) approaches, already successful in linear regression settings, can offer a viable alternative. The viability of CS for sparse Volterra and polynomial models is the core theme of this work. A common sparse regression task is initially posed for the two models. Building on (weighted) Lasso-based schemes, an adaptive RLS-type algorithm is developed for sparse polynomial regressions. The identifiability of polynomial models is critically challenged by dimensionality. However, following the CS principle, when these models are sparse, they could be recovered by far fewer measurements. To quantify the sufficient number of measurements for a given level of sparsity, restricted isometry properties (RIP) are investigated in commonly met polynomial regression settings, generalizing known results for their linear counterparts. The merits of the novel (weighted) adaptive CS algorithms to sparse polynomial modeling are verified through synthetic as well as real data tests for genotype-phenotype analysis.
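Since the schemes in the paper build on the (weighted) Lasso, here is a hedged toy version of the common sparse regression task it poses, with scikit-learn doing the work; this is a plain Lasso on expanded monomial features, not the adaptive RLS-type algorithm of the paper:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, d = 200, 6
X = rng.standard_normal((n, d))

# Expand inputs into all monomials up to degree 2 (a small Volterra-like model).
poly = PolynomialFeatures(degree=2, include_bias=False)
Phi = poly.fit_transform(X)   # n x 27 here: 6 linear + 21 quadratic terms

# Sparse ground truth: the output depends on only three monomials.
w = np.zeros(Phi.shape[1])
w[[0, 3, 10]] = [1.5, -2.0, 1.0]
y = Phi @ w + 0.05 * rng.standard_normal(n)

# A plain Lasso on the expanded features recovers the sparse polynomial.
fit = Lasso(alpha=0.01, max_iter=10000).fit(Phi, y)
print("estimated support:", np.flatnonzero(np.abs(fit.coef_) > 1e-3))  # expect [0 3 10]

The regime the paper cares about is when the number of monomials dwarfs the number of measurements; sparsity is then what keeps the model identifiable.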
Geometry of log-concave Ensembles of random matrices and approximate reconstruction by Radosław Adamczak, Rafał Latała, Alexander E. Litvak, Alain Pajor, Nicole Tomczak-Jaegermann. The abstract reads:
We study the Restricted Isometry Property of a random matrix $\Gamma$ with independent isotropic log-concave rows. To this end, we introduce a parameter $\Gamma_{k,m}$ that controls uniformly the operator norm of sub-matrices with $k$ rows and $m$ columns. This parameter is estimated by means of new tail estimates of order statistics and deviation inequalities for norms of projections of an isotropic log-concave vector.

Loopy Belief Propagation, Bethe Free Energy and Graph Zeta Function by Yusuke Watanabe, Kenji Fukumizu. The abstract reads:
We propose a new approach to the theoretical analysis of Loopy Belief Propagation (LBP) and the Bethe free energy (BFE) by establishing a formula that connects LBP and BFE with a graph zeta function. The proposed approach is applicable to a wide class of models including multinomial and Gaussian types. The connection yields a number of new theoretical results on LBP and BFE. This paper focuses on two such topics. One is the analysis of the region where the Hessian of the Bethe free energy is positive definite, which establishes the non-convexity of BFE for graphs with multiple cycles, and a condition for convexity on a restricted set. This analysis also gives a new condition for the uniqueness of the LBP fixed point. The other result clarifies the relation between the local stability of a fixed point of LBP and local minima of the BFE, which implies, for example, that a locally stable fixed point of the Gaussian LBP is a local minimum of the Gaussian Bethe free energy.
Submodular meets Spectral: Greedy Algorithms for Subset Selection, Sparse Approximation and Dictionary Selection by Abhimanyu Das, David Kempe. The abstract reads:
We study the problem of selecting a subset of k random variables from a large set, in order to obtain the best linear prediction of another variable of interest. This problem can be viewed in the context of both feature selection and sparse approximation. We analyze the performance of widely used greedy heuristics, using insights from the maximization of submodular functions and spectral analysis. We introduce the submodularity ratio as a key quantity to help understand why greedy algorithms perform well even when the variables are highly correlated. Using our techniques, we obtain the strongest known approximation guarantees for this problem, both in terms of the submodularity ratio and the smallest k-sparse eigenvalue of the covariance matrix. We further demonstrate the wide applicability of our techniques by analyzing greedy algorithms for the dictionary selection problem, and significantly improve the previously known guarantees. Our theoretical analysis is complemented by experiments on real-world and synthetic data sets; the experiments show that the submodularity ratio is a stronger predictor of the performance of greedy algorithms than other spectral parameters.
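A minimal sketch of the greedy heuristic being analyzed, i.e., forward selection of the k variables that best linearly predict a target (the submodularity-ratio machinery itself is not reproduced here):

import numpy as np

def greedy_subset(X, z, k):
    # Forward selection: repeatedly add the variable that most improves
    # the R^2 of the least-squares prediction of z.
    chosen = []
    for _ in range(k):
        best_j, best_r2 = None, -np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            S = X[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(S, z, rcond=None)
            resid = z - S @ coef
            r2 = 1.0 - resid @ resid / (z @ z)
            if r2 > best_r2:
                best_j, best_r2 = j, r2
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 30))
z = 2 * X[:, 4] - X[:, 17] + 0.1 * rng.standard_normal(500)
print(greedy_subset(X, z, 2))   # expect variables 4 and 17

The paper's point is that this naive loop works far better than worst-case intuition suggests, and the submodularity ratio quantifies why.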

Compressive Sensing for Cluster Structured Sparse Signals: Variational Bayes Approach by Lei Yu, Jean-Pierre Barbot, Gang Zheng, Hong Sun. The abstract reads:
Compressive Sensing (CS) provides a new paradigm of sub-Nyquist sampling which can be considered an alternative to the Nyquist sampling theorem. In particular, provided that signals have sparse representations in some known space (or domain), information can be perfectly preserved even with the small number of measurements captured by random projections. Besides the sparsity prior on signals, the inherent structure underlying some specific signals is often exploited to enhance the reconstruction accuracy and promote the ability of recovery. In this paper, we aim to take into account the cluster structure of sparse signals, whose nonzero coefficients appear in clustered blocks. By simultaneously modeling both the sparsity and cluster priors within a hierarchical statistical Bayesian framework, a nonparametric algorithm can be obtained through a variational Bayes approach to recover the original sparse signals. The proposed algorithm can be considered a generalization of Bayesian CS that adds a consideration of the cluster property. Consequently, the performance of the proposed algorithm is at least as good as Bayesian CS, which is verified by the experimental results.

Model-Based Compressive Sensing for Multi-party Distant Speech Recognition by Afsaneh Asaei, Herve Bourlard and Volkan Cevher. The abstract reads:
We leverage the recent algorithmic advances in compressive sensing, and propose a novel source separation algorithm for efficient recovery of convolutive speech mixtures in spectro-temporal domain. Compared to the common sparse component analysis techniques, our approach fully exploits structured sparsity models to obtain substantial improvement over the existing state-of-the-art. We evaluate our method for separation and recognition of a target speaker in a multi-party scenario. Our results provide compelling evidence of the effectiveness of sparse recovery formulations in speech recognition.

In this paper, we propose a novel compressive sensing (CS) based approach for sparse target counting and positioning in wireless sensor networks. While this is not the first work on applying CS to count and localize targets, it is the first to rigorously justify the validity of the problem formulation. Moreover, we propose a novel greedy matching pursuit algorithm (GMP) that complements the well-known signal recovery algorithms in CS theory and prove that GMP can accurately recover a sparse signal with a high probability. We also propose a framework for counting and positioning targets from multiple categories, a novel problem that has never been addressed before. Finally, we perform a comprehensive set of simulations whose results demonstrate the superiority of our approach over the existing CS and non-CS based techniques.
For single-carrier systems with frequency domain equalization, decision feedback equalization (DFE) performs better than linear equalization and has much lower computational complexity than maximum likelihood sequence detection. The main challenge in DFE is the feedback symbol selection rule. In this paper, we give a theoretical framework for a simple, sparsity-based thresholding algorithm. We feed back multiple symbols in each iteration, so the algorithm converges fast and has a low computational cost. We show how the initial solution can be obtained via convex relaxation instead of linear equalization, and illustrate the impact that the choice of the initial solution has on the bit error rate performance of our algorithm. The algorithm is applicable in several existing wireless communication systems (SC-FDMA, MC-CDMA, MIMO-OFDM). Numerical results illustrate significant performance improvement in terms of bit error rate compared to the MMSE solution.
The human visual system (HVS) can perceive constant color under varying illumination conditions, while digital images record information about both the reflectance (physical color) of objects and the illumination. Retinex theory, formulated by Edwin H. Land, aimed to simulate and explain this feature of HVS. However, recovering the reflectance from a given image is in general an ill-posed problem. In this paper, we establish an L1-based variational model for Retinex theory that can be solved by a fast computational approach based on Bregman iteration. Compared with previous works, our L1-Retinex method is more accurate for recovering the reflectance, which is illustrated by examples and statistics. In medical images such as magnetic resonance imaging (MRI), intensity inhomogeneity is often encountered due to bias fields. This leads to a formulation similar to Retinex theory, although MRI has some specific properties. We then modify the L1-Retinex method and develop a new algorithm for MRI data. We demonstrate the performance of our method by comparison with previous work on simulated and real data.
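The computational engine in this line of work is Bregman iteration; as a toy illustration of that machinery only (not of the L1-Retinex model itself), here is split Bregman applied to 1-D total-variation denoising:

import numpy as np

def shrink(v, t):
    # Soft thresholding, the proximal map of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_bregman(f, lam=10.0, mu=5.0, iters=100):
    # Split Bregman for min_u ||Du||_1 + (lam/2)||u - f||^2, D = forward difference.
    n = len(f)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n difference matrix
    A = lam * np.eye(n) + mu * D.T @ D    # fixed normal matrix for the u-step
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, lam * f + mu * D.T @ (d - b))
        d = shrink(D @ u + b, 1.0 / mu)
        b = b + D @ u - d
    return u

rng = np.random.default_rng(4)
clean = np.concatenate([np.zeros(50), np.ones(50), 0.3 * np.ones(50)])
u = tv_denoise_bregman(clean + 0.1 * rng.standard_normal(150))
print("segment means:", u[:50].mean().round(2), u[50:100].mean().round(2), u[100:].mean().round(2))

The parameters lam and mu here are illustrative only.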
Phase Reconstruction Using Machine Learning For Wireless Tomography by S. J. Hou, Z. Hu, M. C. Wicks, and R. C. Qiu. The abstract reads:
This is a follow-on paper in a series on a new initiative in wireless tomography. The goal is to combine two areas: wireless communication and radio tomography. This paper primarily focuses on phase reconstruction using machine learning for wireless tomography. When only communication components, instead of sophisticated equipment, are used to perform wireless tomography, phase information of the received field is hard to obtain. Thus self-coherent tomography is proposed, which has two main steps. First, phase reconstruction is achieved using the received amplitude-only data. Second, standard radio tomographic imaging algorithms are used for data analysis. However, in real applications, the effect of noise cannot be ignored. In order to improve the performance of phase reconstruction in the presence of noise, a hybrid system for wireless tomography is proposed in this paper. Machine learning is also explored here to perform the noise reduction.

We consider a multi-static radar scenario with spatially dislocated receivers that can individually extract delay information only. Furthermore, we assume that the receivers are not phase-synchronized, so the measurements across receivers can only be combined noncoherently. We cast this scenario as a compressive sensing reconstruction problem, where the vector of unknowns consists of complex baseband coefficients of tentative targets at discrete positions within a region of interest. The difference from previous work is that each receiver has to have a separate set of variables to account for the noncoherent measurement model. This leads to multiple reconstruction problems that are individually ill-defined, but can be regularized by a shared sparsity pattern, as studied in jointly- or block-sparse reconstruction problems. We evaluate this approach in a simple scenario with three receivers and three closely spaced targets. Using the popular basis pursuit and orthogonal matching pursuit algorithms, we find that targets can be fully resolved and that the position estimation error is close to the Cramér-Rao lower bound based on an estimator that knows the number of targets.
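A minimal sketch of the shared-sparsity idea: each receiver keeps its own complex coefficients, but a simultaneous OMP enforces one common support across receivers (the paper evaluates basis pursuit and OMP variants; this toy implementation is my own):

import numpy as np

def somp(A, Y, k):
    # Simultaneous OMP: one shared support, separate coefficients per column of Y.
    n = A.shape[1]
    support, R = [], Y.copy()
    for _ in range(k):
        scores = np.sum(np.abs(A.conj().T @ R), axis=1)  # correlations pooled over receivers
        scores[support] = -np.inf
        support.append(int(np.argmax(scores)))
        X_S, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_S
    X = np.zeros((n, Y.shape[1]), dtype=complex)
    X[support] = X_S
    return X, sorted(support)

rng = np.random.default_rng(5)
m, n, k, receivers = 40, 120, 3, 3
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
supp = rng.choice(n, k, replace=False)
X_true = np.zeros((n, receivers), dtype=complex)
# Same target positions (support) for all receivers, different complex gains.
X_true[supp] = rng.standard_normal((k, receivers)) + 1j * rng.standard_normal((k, receivers))
X_hat, est = somp(A, A @ X_true, k)
print("true support:", sorted(supp.tolist()), "estimated:", est)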

Wideband Spectrum Sensing based on Sparse Channel State Recovery in Cognitive Radio Networks by Lei Shi, Zheng Zhou, Liang Tang. The abstract reads:
Motivated by the compressed sensing approach to sparse channel estimation, we note that the complete channel state is sparse under conditions of low spectral efficiency. Rather than the traditional approach of sensing spectrum holes, this paper focuses on the sparsity of the occupied sub-channels. Based on compressed sensing, a novel cooperative wideband spectrum sensing method is proposed for cognitive radio networks. The speed and efficiency of the single CR node's sensing process are improved through a sparse measurement matrix. The fusion center reconstructs the observed data with the turbo-decoding message-passing (TDMP) algorithm, dramatically reducing the computational complexity.

Also, behind a paywall or with just the abstracts available, we have:

Spatial Compressive Sensing for Direction-of-Arrival Estimation of Multiple Sources Using Dynamic Sensor Arrays by I. Bilik. The abstract reads:
This work addresses the problem of direction-of-arrival (DoA) estimation of multiple sources using short and dynamic sensor arrays. We propose to utilize compressive sensing (CS) theory to reconstruct the high-resolution spatial spectrum from a small number of spatial measurements. Motivated by the physical structure of the spatial spectrum we model it as a sparse signal in the wavenumber-frequency domain, where the array manifold is proposed to serve as a deterministic sensing matrix. The proposed spatial CS (SCS) approach allows exploitation of the array orientation diversity (achievable via array dynamics) in the CS framework to address challenging array signal processing problems such as left-right ambiguity and poor estimation performance at endfire. The SCS is conceptually different from well-known classical and subspace-based methods because it provides high azimuth resolution using a short dynamic linear array without restricting requirements on the spatial and temporal stationarity and correlation properties of the sources and the noise. The SCS approach was shown to outperform current super-resolution and orientation diversity based methods in single-snapshot simulations with multiple sources.
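A toy version of the grid-based formulation: the manifold of a short uniform linear array serves as the deterministic sensing matrix, and a greedy solver recovers two on-grid sources from a single snapshot (all parameters are illustrative assumptions, not Bilik's setup):

import numpy as np

rng = np.random.default_rng(6)
M, d = 8, 0.5                              # 8 sensors, half-wavelength spacing
grid = np.deg2rad(np.arange(-90, 91, 2))   # grid of candidate DoAs
# Array manifold as sensing matrix: A[m, k] = exp(j*2*pi*d*m*sin(theta_k)).
A = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(grid)))

# Single noisy snapshot from two sources at -20 and +34 degrees.
y = sum(np.exp(2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(t))) for t in (-20, 34))
y = y + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Greedy (OMP-style) recovery of the sparse spatial spectrum.
support, r = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(A.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef
print("estimated DoAs (deg):", sorted(np.rad2deg(grid[support]).round(0)))  # about -20 and 34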

Mission Design for Compressive Sensing with Mobile Robots by Robert Hummel, Sameera Poduri, Franz Hover, Urbashi Mitra and Gaurav Sukhatme. The abstract reads:
This paper considers mission design strategies for mobile robots whose task is to perform spatial sampling of a static environmental field, in the framework of compressive sensing. According to this theory, we can reconstruct compressible fields using O(log n) nonadaptive measurements (where n is the number of sites of the spatial domain), in a basis that is “incoherent” to the representation basis; random uncorrelated measurements satisfy this incoherence requirement. Because an autonomous vehicle is kinematically constrained and has finite energy and communication resources, it is an open question how to best design missions for CS reconstruction. We compare a two-dimensional random walk, a TSP approximation to pass through random points, and a randomized boustrophedon (lawnmower) strategy. Not unexpectedly, all three approaches can yield comparable reconstruction performance if the planning horizons are long enough; if planning occurs only over short time scales, the random walk will have an advantage.



The following information is posted here mostly as a heads-up. Some of the dates may have passed.

Speaker: Michael Trakimas, PhD Candidate. Host: Prof. Sameer Sonkusale. The abstract reads:
This dissertation builds on the recent theoretical and experimental work on asynchronous sampling and compressed sensing. We discuss how advances in these fields can be exploited to design practical data acquisition systems capable of directly acquiring sparse signals at sub-Nyquist rates. To provide a connection to how this can benefit certain emerging technologies, we focus specifically on increasing the power efficiency and decreasing the complexity of the signal acquisition process compared to existing conventional Nyquist rate solutions for biomedical sensor and wideband spectrum sensing applications. The first half of this work presents the design and implementation of an adaptive resolution asynchronous analog-to-digital converter (ADC) targeted at biomedical signal acquisition applications such as ECG and EEG monitoring. The main contribution of this work was the implementation of an adaptive resolution algorithm which varied the quantizer resolution of the ADC with the slope of the input signal, in order to overcome the tradeoff between dynamic range and input bandwidth typically seen in asynchronous ADCs. The second half of this work presents the design and implementation of a compressed sensing based analog-to-information converter (AIC) for wideband spectrum sensing applications. The core of the design is an ultra low power moderate rate ADC that randomly samples the received signal at sub-Nyquist rates. Measurement results of the compressed sensing AIC for wideband spectrum sensing demonstrate improved power efficiency and reduced complexity compared to the conventional direct sampling architecture, while still maintaining high performance for signals with sparse frequency support.
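As a toy illustration of the asynchronous-sampling idea behind the first half of the dissertation (this is a generic level-crossing sampler, not Trakimas's adaptive-resolution design):

import numpy as np

def level_crossing_sample(t, x, delta=0.1):
    # Asynchronous "ADC": emit a sample only when the signal has moved
    # by at least delta since the last emitted sample.
    times, values = [t[0]], [x[0]]
    for ti, xi in zip(t[1:], x[1:]):
        if abs(xi - values[-1]) >= delta:
            times.append(ti)
            values.append(xi)
    return np.array(times), np.array(values)

t = np.linspace(0.0, 1.0, 10000)
# Quiet first half, bursty second half, as in many biomedical signals.
x = np.where(t < 0.5, 0.1 * np.sin(2 * np.pi * t), np.sin(2 * np.pi * 20 * t))
ts, xs = level_crossing_sample(t, x)
print(len(ts), "level-crossing samples vs", len(t), "uniform samples")

Quiet stretches generate almost no samples, which is where the power savings of such converters come from.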

A Master's thesis opening at Chalmers in Göteborg, Sweden:
(110302) Large-deviation estimates and duality theory in compressive sensing


Abstract: Compressive sensing (CS) is a recently introduced method that enables sampling of sparse signals well below the Nyquist rate, while still guaranteeing near-perfect reconstruction. A fundamental problem in compressive sensing is to determine the number of samples (measurements) needed to guarantee near-perfect reconstruction, as a function of the sparsity level of the signal. Various results are available in the CS literature for different sensing mechanisms. Recently, a new, fairly general framework has been proposed, within which all previously known results in CS can be re-derived. This framework makes use of relatively simple results from large-deviation theory and duality in optimisation theory. The goal of this master thesis is to isolate and understand the fundamental tools underpinning this novel framework. Furthermore, these tools should be used to simplify recent results in the fields of matrix completion and robust principal component analysis.

Requirements: The ideal candidate should have a strong background in probability theory and linear algebra, and should enjoy theoretical work. The master's student will acquire in-depth knowledge of compressive sensing, a hot topic at the intersection of signal processing, information theory and machine learning.

Contact: Giuseppe Durisi (durisi at chalmers dot se), phone: +46 31 772 18 02, Department of Signals and Systems

Recommended reading:

[1] E. J. Candès and Y. Plan, “A probabilistic and RIPless theory of compressed sensing,” Nov. 2010. [Online]. Available: http://arxiv.org/abs/1011.3854

[2] D. Gross, “Recovering low-rank matrices from few coefficients in any basis,” Sep. 2010. [Online]. Available: http://arxiv.org/pdf/0910.1879v1

[3] E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” Dec. 2009. [Online]. Available: http://arxiv.org/abs/0912.3599
Major subjects: Information Technology
Department: Signals and Systems


Here is a presentation:

Low dimensional embedding with compressed, incomplete and inaccurate measurements
When
Start: 03/02/2011 - 4:15pm
End: 03/02/2011 - 5:00pm

Category: Colloquium

Speaker
Blake Hunter (University of California, Davis)

Abstract

As the size and complexity of data continue to grow, extracting knowledge becomes exponentially more challenging. Active areas of research for mining this high dimensional data can be found across a broad range of scientific fields including pure and applied mathematics, statistics, computer science and engineering. Spectral embedding is one of the most powerful and widely used techniques for extracting the underlying global structure of a data set. Compressed sensing and matrix completion have emerged as prevailing methods for efficiently recovering sparse and partially observed signals. In this talk, we combine the distance preserving measurements of compressed sensing and matrix completion with the robust power of spectral embedding. Our analysis provides rigorous bounds on how small perturbations from using compressed sensing and matrix completion affect the affinity matrix and, in turn, the spectral coordinates. Theoretical guarantees are complemented with numerical results. A number of examples of the unsupervised organization and clustering of synthetic and real world image data are also shown.
Where
Roberts North 15, Claremont McKenna College
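A rough sketch of the pipeline the talk abstract describes: compress the data with a distance-preserving random projection, then compute the spectral embedding from the affinity matrix of the compressed points (the talk's perturbation bounds are of course not reproduced here):

import numpy as np

rng = np.random.default_rng(7)
# Two well-separated Gaussian clusters in high dimension.
n, dim = 200, 500
X = np.vstack([rng.standard_normal((n // 2, dim)),
               rng.standard_normal((n // 2, dim)) + 4.0])

# "Compressed" view: a random projection roughly preserves pairwise distances.
m = 20
Y = X @ rng.standard_normal((dim, m)) / np.sqrt(m)

# Gaussian affinity and normalized graph Laplacian of the compressed data.
D2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
W = np.exp(-D2 / np.median(D2))
deg = W.sum(axis=1)
L = np.eye(n) - W / np.sqrt(np.outer(deg, deg))

# Spectral coordinates: eigenvectors for the smallest Laplacian eigenvalues.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)         # sign of the Fiedler-type vector
print("cluster sizes:", np.bincount(labels))  # expect two groups of 100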

and a seminar:
ECE Seminar This Afternoon
"Application of Compressive Sensing to Cognitive Radio and Digital Holography," by Sang (Peter) Chin, Cyber Operations Branch, JHU Applied Physics Lab. 3 p.m. on Thursday, March 3, in Hodson 316. Hosted by Electrical and Computer Engineering.

Refreshments will be served at 2:45 p.m.

FOR DISABILITY INFORMATION CONTACT:
Candace Abel (410) 516-7031 cabel@jhu.edu

Abstract:
One of the key aspects of cognitive radio is the concept of dynamic spectrum access, where a radio searches for a (temporarily) unused white space in order to transmit and receive its data. To enable such dynamic spectrum utilization, it is critical to detect the existence/absence of primary users, and furthermore understand the spectrum usage pattern of primary users. Currently, this is done by periodically switching the radio from the operation mode (transmit/receive) to sensing mode, during which the radio tries to sense for the existence of other (incumbent) signals. It is thus of utmost importance to make the sensing period as small as possible, which can not only increase the operational utilization, but also enable detection of more white spaces (including finer ones that can't be otherwise detected). We show that the nascent theory of compressive sensing, a revolutionary sampling paradigm which enables sampling at a much lower rate than the Nyquist rate by exploiting the sparsity of a given signal, can offer new opportunities for DSA. In particular, we show that compressive sensing theory is able to significantly reduce the amount of time and size of data that a cognitive radio needs in order to sense the existence of incumbent signals. Furthermore, we show that the recent progress on compressive sensing is also applicable and useful in a digital holographic system, especially in that it greatly reduces the number of pixels (up to 80%) in the hologram that are necessary to reconstruct the image without losing essential features of the image. This offers a path to sparsely sample the hologram and produce images with resolution comparable to the fully populated array, which in turn allows a holographic system to capture images at longer ranges using an array of sparsely populated smaller CCD arrays.

Bio:
Dr. Sang (Peter) Chin is currently branch chief scientist of the Cyber Operations Branch at the Johns Hopkins Applied Physics Lab, where he is conducting research in the areas of compressive sensing, data fusion, game theory, MHT tracking, quantum-game-inspired cyber-security, and cognitive radio. Prior to joining JHU/APL, he was a Division CTO at SAIC, and before that, he was a technology manager for 90 nm technology at LSI Logic Corp., where he also helped develop its first embedded DRAM technology jointly with Hitachi Semiconductor in the late '90s. He received his PhD in mathematics from MIT and is a Phi Beta Kappa graduate of Duke University, where he was a faculty scholar with a triple major in EE, computer science and mathematics.
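For the cognitive-radio half of the abstract above, here is a minimal sketch of sub-Nyquist spectrum sensing: take a few random time samples of a spectrally sparse signal and identify the occupied bins greedily (an illustration of the general principle, not of the speaker's method):

import numpy as np

rng = np.random.default_rng(8)
N = 512                                  # Nyquist-rate grid length
occupied = [37, 200, 455]                # three active frequency bins
t = np.arange(N)
x = sum(np.exp(2j * np.pi * f * t / N) for f in occupied)

# Sense far below the Nyquist rate: keep only 64 random time samples.
M = 64
rows = np.sort(rng.choice(N, M, replace=False))
F = np.fft.ifft(np.eye(N), axis=0) * np.sqrt(N)   # unitary inverse DFT: x = F s, s sparse
A, y = F[rows], x[rows]

# OMP identifies the occupied bins from the sub-Nyquist samples.
support, r = [], y.copy()
for _ in range(len(occupied)):
    support.append(int(np.argmax(np.abs(A.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef
print("occupied bins found:", sorted(support))   # expect [37, 200, 455]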


Credit photo: NASA/JPL/Space Science Institute, Beyond Southern Rhea, February 28, 2011
