
Wednesday, June 17, 2009

CS: Cramér–Rao bound, 3-D Upper Airway MRI Using CS, unveiling patterns in networks, Audio Source Separation, Concatenate and Boost


I hope you have all recovered from the mesmerizing paper featured the day before yesterday. You may also recall that part of that paper was featured in this video presentation at SMLS 09. At the same workshop, Mark Plumbley and Marco Bevilacqua presented "Stagewise Polytope Faces Pursuit for Recovery of Sparse Representations"; it is in this video at the 27-minute mark. I also liked the pitch for the next poster (Subspectral Algorithms for Sparse Learning by Baback Moghaddam, Yair Weiss, and Shai Avidan). All the videos of the SMLS 09 workshop are here.

Today, we also have three papers from the Rice Compressive Sensing resource that I have not covered yet, one paper from arXiv, and another from the interwebs:


The Cramér–Rao bound for sparse estimation by Zvika Ben-Haim, Yonina Eldar. The abstract reads:

The goal of this paper is to characterize the best achievable performance for the problem of estimating an unknown parameter having a sparse representation. Specifically, we consider the setting in which a sparsely representable deterministic parameter vector is to be estimated from measurements corrupted by Gaussian noise, and derive a lower bound on the mean-squared error (MSE) achievable in this setting. To this end, an appropriate definition of bias in the sparse setting is developed, and the constrained Cramer-Rao bound (CRB) is obtained. This bound is shown to equal the CRB of an estimator with knowledge of the support set, for almost all feasible parameter values. Consequently, in the unbiased case, our bound is identical to the MSE of the oracle estimator. Combined with the fact that the CRB is achieved at high signal-to-noise ratios by the maximum likelihood technique, our result provides a new interpretation for the common practice of using the oracle estimator as a gold standard against which practical approaches are compared.
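
Since the oracle estimator is the gold standard here, a quick numerical sketch may help (my own toy setup, not the authors' code; dimensions and names are made up): the empirical MSE of the oracle least-squares estimator, which knows the true support, lines up with the constrained CRB sigma^2 * tr((A_S^T A_S)^(-1)).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: y = A x + w, x is s-sparse, w ~ N(0, sigma^2 I).
n, p, s, sigma = 100, 200, 5, 0.1
A = rng.standard_normal((n, p)) / np.sqrt(n)
support = rng.choice(p, s, replace=False)
x = np.zeros(p)
x[support] = rng.standard_normal(s)

# Oracle estimator: least squares restricted to the true support.
# Its MSE, sigma^2 * tr((A_S^T A_S)^{-1}), coincides with the constrained
# CRB for unbiased estimators (for almost all feasible x).
A_S = A[:, support]
crb = sigma**2 * np.trace(np.linalg.inv(A_S.T @ A_S))

trials = 2000
mse = 0.0
for _ in range(trials):
    y = A @ x + sigma * rng.standard_normal(n)
    x_hat = np.zeros(p)
    x_hat[support], *_ = np.linalg.lstsq(A_S, y, rcond=None)
    mse += np.sum((x_hat - x) ** 2) / trials

print(f"oracle MSE ~ {mse:.4e}, constrained CRB = {crb:.4e}")
```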


Accelerated Three-Dimensional Upper Airway MRI Using Compressed Sensing by Yoon-Chul Kim, Shrikanth S. Narayanan, Krishna Nayak. The abstract reads:

In speech-production research, three-dimensional (3D) MRI of the upper airway has provided insights into vocal tract shaping and data for its modeling. Small movements of articulators can lead to large changes in the produced sound, therefore improving the resolution of these data sets, within the constraints of a sustained speech sound (6–12 s), is an important area for investigation. The purpose of the study is to provide a first application of compressed sensing (CS) to high-resolution 3D upper airway MRI using spatial finite difference as the sparsifying transform, and to experimentally determine the benefit of applying constraints on image phase. Estimates of image phase are incorporated into the CS reconstruction to improve the sparsity of the finite difference of the solution. In a retrospective subsampling experiment with no sound production, 5 and 4 were the highest acceleration factors that produced acceptable image quality when using a phase constraint and when not using a phase constraint, respectively. The prospective use of a 5× undersampled acquisition and phase constrained CS reconstruction enabled 3D vocal tract MRI during sustained sound production of English consonants /s/, /ʃ/, /l/, and /r/ with 1.5 × 1.5 × 2.0 mm³ spatial resolution and 7 s of scan time.
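
For readers who want to play with the retrospective subsampling idea, here is a minimal sketch (my own, not the authors' reconstruction code; the phantom, mask density, and variable names are all made up) of the measurement model: a piecewise-constant image whose spatial finite differences are sparse, observed through a 5× random k-space undersampling mask that retains the low frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Piecewise-constant phantom: its spatial finite differences are sparse,
# which is the sparsifying transform used in the paper.
N = 128
img = np.zeros((N, N))
img[32:96, 40:88] = 1.0
img[50:70, 55:75] = 0.5

dx = np.diff(img, axis=0)
dy = np.diff(img, axis=1)
sparsity = (np.count_nonzero(dx) + np.count_nonzero(dy)) / (2 * N * N)
print(f"fraction of nonzero finite differences: {sparsity:.3f}")

# Retrospective ~5x undersampling: keep roughly 1/5 of the phase-encode
# lines at random, always retaining the k-space centre (low frequencies).
R = 5
keep = rng.random(N) < 1.0 / R
keep[N // 2 - 8: N // 2 + 8] = True
mask = np.tile(keep[:, None], (1, N))

kspace = np.fft.fft2(img) * np.fft.fftshift(mask)
zero_filled = np.fft.ifft2(kspace)          # starting point for a CS solver
print(f"actual acceleration: {N / keep.sum():.2f}x")
```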


A fast and compact method for unveiling significant patterns in high speed networks by T Bu, J Cao, A Chen, PPC Lee. The summary reads:

Identification of significant patterns in network traffic, such as IPs or flows that contribute large volume (heavy hitters) or introduce large changes (heavy changers), has many applications in accounting and network anomaly detection. As network speed and the number of flows grow rapidly, tracking per-IP or per-flow statistics becomes infeasible due to both the computational overhead and memory requirements. In this paper, we propose a novel sequential hashing scheme that requires only O(H log N) memory and computational overhead, which is close to optimal, where N is the number of all possible keys (e.g., flows, IPs) and H is the maximum number of heavy keys. Moreover, the generalized sequential hashing scheme makes it possible to trade off among memory, update cost, and detection cost over a large range, which can be exploited by different computer architectures to optimize the overall performance. In addition, we also propose statistically efficient algorithms for estimating the values of heavy hitters and heavy changers. Using both theoretical analysis and experimental studies of Internet traces, we demonstrate that our approach achieves the same accuracy as existing methods while using much less memory and computational overhead.
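
The paper's sequential hashing scheme is more elaborate than this, but for readers unfamiliar with hashing-based traffic summaries, here is a toy count-min sketch (a different, simpler structure, shown only to illustrate estimating heavy-hitter volumes without keeping per-key state; the sizes and key names are made up).

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# Count-min sketch: a small array of hashed counters instead of per-flow state.
H_ROWS, WIDTH = 4, 512
table = np.zeros((H_ROWS, WIDTH), dtype=np.int64)
seeds = rng.integers(1, 2**31, size=H_ROWS)

def update(key, count=1):
    # Add the count to one counter per row, chosen by a seeded hash.
    for r, seed in enumerate(seeds):
        table[r, hash((int(seed), key)) % WIDTH] += count

def estimate(key):
    # The minimum over rows upper-bounds the true count with high probability.
    return min(table[r, hash((int(seed), key)) % WIDTH] for r, seed in enumerate(seeds))

# Synthetic traffic: a few heavy flows buried in many small ones.
truth = Counter()
for flow in range(5):
    truth[f"heavy-{flow}"] = 10_000
for flow in range(50_000):
    truth[f"mouse-{flow}"] = int(rng.integers(1, 5))

for key, count in truth.items():
    update(key, count)

for flow in range(5):
    key = f"heavy-{flow}"
    print(key, truth[key], estimate(key))
```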



Matrix Completion from Noisy Entries by Raghunandan Keshavan, Andrea Montanari, and Sewoong Oh. The abstract reads:

Given a matrix M of low-rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the `Netflix problem') to structure-from-motion and positioning. We study a low complexity algorithm introduced by Keshavan et al. (2009), based on a combination of spectral techniques and manifold optimization, that we call here OptSpace. We prove performance guarantees that are order-optimal in a number of circumstances.
As mentioned earlier, the OptSpace algorithm implementation is here.
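
As a reminder of what the spectral part of OptSpace-like methods does, here is a toy sketch (mine, not the authors' implementation; it skips the trimming and manifold-refinement steps, and all sizes are made up): the rank-r SVD of the rescaled, partially observed matrix already gives a reasonable estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

# Low-rank ground truth observed on a random subset of entries, with noise.
m, n, r = 200, 150, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

p_obs = 0.2                                   # fraction of revealed entries
mask = rng.random((m, n)) < p_obs
M_obs = np.where(mask, M + 0.01 * rng.standard_normal((m, n)), 0.0)

# Rescale by 1/p_obs so the observed matrix is an unbiased estimate of M,
# then project onto the best rank-r approximation (the spectral step).
U, s, Vt = np.linalg.svd(M_obs / p_obs, full_matrices=False)
M_hat = (U[:, :r] * s[:r]) @ Vt[:r]

rel_err = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
print(f"relative error of the spectral estimate: {rel_err:.3f}")
```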


Benchmarking Flexible Adaptive Time-Frequency Transforms for Underdetermined Audio Source Separation by Andrew Nesbit, Emmanuel Vincent and Mark Plumbley. The abstract reads:

We have implemented several fast and flexible adaptive lapped orthogonal transform (LOT) schemes for underdetermined audio source separation. This is generally addressed by time-frequency masking, requiring the sources to be disjoint in the time-frequency domain.
We have already shown that disjointness can be increased via adaptive dyadic LOTs. By taking inspiration from the windowing schemes used in many audio coding frameworks, we improve on earlier results in two ways. Firstly, we consider non-dyadic LOTs which match the time-varying signal structures better. Secondly, we allow for a greater range of overlapping window profiles to decrease window boundary artifacts. This new scheme is benchmarked through oracle evaluations, and is shown to decrease computation time by over an order of magnitude compared to using very general schemes, whilst maintaining high separation performance and flexible signal adaptivity. As the results demonstrate, this work may find practical applications in high fidelity audio source separation.
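
The adaptive LOTs are the paper's contribution, but the underlying separation principle is time-frequency masking evaluated with oracle masks; here is a toy oracle-masking sketch using a plain STFT in place of the adaptive transforms (my own code; signals, sampling rate, and window length are made up).

```python
import numpy as np
from scipy.signal import stft, istft

# Two synthetic sources mixed into a single channel (underdetermined case).
fs, dur = 8000, 2.0
t = np.arange(int(fs * dur)) / fs
s1 = np.sin(2 * np.pi * 440 * t)                                  # steady tone
s2 = np.sign(np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 1200 * t)
mix = s1 + s2

_, _, S1 = stft(s1, fs, nperseg=512)
_, _, S2 = stft(s2, fs, nperseg=512)
_, _, MIX = stft(mix, fs, nperseg=512)

# Oracle binary mask: assign each time-frequency cell to the dominant source.
mask1 = np.abs(S1) >= np.abs(S2)
_, est1 = istft(MIX * mask1, fs)
_, est2 = istft(MIX * ~mask1, fs)

# Rough SDR of the estimates (the oracle-evaluation idea used for benchmarking).
for name, src, est in (("s1", s1, est1), ("s2", s2, est2)):
    n_cmp = min(len(src), len(est))
    src, est = src[:n_cmp], est[:n_cmp]
    sdr = 10 * np.log10(np.sum(src**2) / np.sum((src - est) ** 2))
    print(f"{name}: oracle-mask SDR ~ {sdr:.1f} dB")
```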




Concatenate and Boost for Multiple Measurement Vector Problems by Ok Kyun Lee, Jong Chul Ye. The abstract reads:

The multiple measurement vector (MMV) problem addresses the recovery of a set of sparse signal vectors that share a common non-zero support, and has emerged as an important topic in compressed sensing. Even though the fundamental performance limit on the recoverable sparsity level has been formally derived, conventional algorithms still exhibit significant performance gaps from the theoretical bound. The main contribution of this paper is a novel concatenate MMV and boost (CoMBo) algorithm that achieves the theoretical bound. More specifically, the algorithm concatenates the MMV problem into a larger-dimensional single measurement vector (SMV) problem and boosts it by multiplying random orthonormal matrices. Extensive simulation results demonstrate that CoMBo outperforms all existing methods and achieves the theoretical bound as the number of measurement vectors increases.
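
The "concatenate" step can be written down in a few lines; the sketch below (my own toy code, not CoMBo itself; sizes and names are made up) shows how an MMV system Y = AX with jointly sparse columns becomes one larger SMV system vec(Y) = (I ⊗ A) vec(X). The boosting with random orthonormal matrices would come on top of this reformulation.

```python
import numpy as np

rng = np.random.default_rng(5)

# MMV model: Y = A X, where the columns of X share the same support.
n, p, L, s = 30, 60, 4, 5
A = rng.standard_normal((n, p))
support = rng.choice(p, s, replace=False)
X = np.zeros((p, L))
X[support] = rng.standard_normal((s, L))       # shared support across columns
Y = A @ X

# Column-major vectorization turns the matrix equation into a single SMV system:
# vec(Y) = (I_L kron A) vec(X).
A_big = np.kron(np.eye(L), A)                  # (n*L) x (p*L)
y_vec = Y.flatten(order="F")
x_vec = X.flatten(order="F")
print("vec(Y) = (I kron A) vec(X) holds:", np.allclose(A_big @ x_vec, y_vec))
```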
