
Monday, March 02, 2015

The generalized Lasso with non-linear observations


The generalized Lasso with non-linear observations by Yaniv Plan, Roman Vershynin

We study the problem of signal estimation from non-linear observations when the signal belongs to a low-dimensional set buried in a high-dimensional space. A rough heuristic often used in practice postulates that non-linear observations may be treated as noisy linear observations, and thus the signal may be estimated using the generalized Lasso. This is appealing because of the abundance of efficient, specialized solvers for this program. Just as noise may be diminished by projecting onto the lower dimensional space, the error from modeling non-linear observations with linear observations will be greatly reduced when using the signal structure in the reconstruction. We allow general signal structure, only assuming that the signal belongs to some set K in R^n. Our theory tolerates general non-linearity, which may be discontinuous, not one-to-one and even unknown. We assume a random Gaussian model for the measurement matrix, but allow the rows to have an unknown covariance matrix. As special cases of our results, we recover near-optimal theory for noisy linear observations, and also give the first theoretical accuracy guarantee for 1-bit compressed sensing with unknown covariance matrix of the measurement vectors.  
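
To make the heuristic concrete, here is a minimal numerical sketch (in Python; not the authors' code) of treating 1-bit observations y = sign(Ax) as if they were linear. For illustration only, K is taken to be the set of s-sparse vectors, so projection onto K is hard thresholding and the generalized Lasso can be attacked with projected gradient descent; all dimensions and the solver choice are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, m, s = 200, 500, 5                        # illustrative sizes: ambient dim, measurements, sparsity

x_star = np.zeros(n)
x_star[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_star /= np.linalg.norm(x_star)             # 1-bit observations only carry the direction

A = rng.standard_normal((m, n))              # Gaussian measurement matrix
y = np.sign(A @ x_star)                      # non-linear (1-bit) observations

def project_sparse(v, s):
    # projection onto K = {s-sparse vectors}: keep the s largest entries
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

# treat y as if it were linear and minimize ||y - Ax||_2 over K
x = np.zeros(n)
for _ in range(100):
    x = project_sparse(x + (1.0 / m) * A.T @ (y - A @ x), s)

x /= np.linalg.norm(x) + 1e-12
print("correlation with the true signal:", abs(x @ x_star))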
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

On the Equivalence between Quadrature Rules and Random Features


On the Equivalence between Quadrature Rules and Random Features by Francis Bach

We show that kernel-based quadrature rules for computing integrals are a special case of random feature expansions for positive definite kernels for a particular decomposition that always exists for such kernels. We provide a theoretical analysis of the number of required samples for a given approximation error, leading to both upper and lower bounds that are based solely on the eigenvalues of the associated integral operator and match up to logarithmic terms. In particular, we show that the upper bound may be obtained from independent and identically distributed samples from a known non-uniform distribution, while the lower bound is valid for any set of points. Applying our results to kernel-based quadrature, while our results are fairly general, we recover known upper and lower bounds for the special cases of Sobolev spaces. Moreover, our results extend to the more general problem of full function approximations (beyond simply computing an integral), with results in L2- and L∞-norm that match known results for special cases. Applying our results to random features, we show an improvement of the number of random features needed to preserve the generalization guarantees for learning with Lipschitz-continuous losses.
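
As a reference point for the random-feature side of the equivalence, here is a toy sketch (not from the paper) of the classical random Fourier feature expansion of the Gaussian kernel; the kernel choice and sample sizes are illustrative.

import numpy as np

rng = np.random.default_rng(0)
d, num_features = 3, 2000                       # illustrative sizes

x, y = rng.standard_normal(d), rng.standard_normal(d)

# the Gaussian kernel k(x, y) = exp(-||x - y||^2 / 2) has the decomposition
# k(x, y) = E_w[cos(w . (x - y))] with w ~ N(0, I); averaging random cosine
# features over i.i.d. draws of w approximates the kernel
W = rng.standard_normal((num_features, d))      # frequencies w_i ~ N(0, I_d)
b = rng.uniform(0, 2 * np.pi, num_features)     # random phases

def features(z):
    return np.sqrt(2.0 / num_features) * np.cos(W @ z + b)

approx = features(x) @ features(y)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)
print(f"exact {exact:.4f}  random-feature approx {approx:.4f}")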
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, February 27, 2015

Approximate Message Passing with Restricted Boltzmann Machine Priors

The great convergence is upon us. From the following paper:

The present paper demonstrates the first steps in incorporating deep learned priors into generalized linear problems such as compressed sensing.


 
Approximate Message Passing with Restricted Boltzmann Machine Priors by Eric W. Tramel, Angélique Drémeau, Florent Krzakala
Approximate Message Passing (AMP) has been shown to be an excellent statistical approach to signal inference and compressed sensing problems. The AMP framework provides modularity in the choice of signal prior; here we propose a hierarchical form of the Gauss-Bernoulli prior which utilizes a Restricted Boltzmann Machine (RBM) trained on the signal support to push reconstruction performance beyond that of simple i.i.d. priors for signals whose support can be well represented by a trained binary RBM. We present and analyze two methods of RBM factorization and demonstrate how these affect signal reconstruction performance within our proposed algorithm. Finally, using the MNIST handwritten digit dataset, we show experimentally that using an RBM allows AMP to approach oracle-support performance.
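
For readers who want the shape of the algorithm, here is a minimal generic AMP sketch (illustrative, not the paper's code), using a plain soft-thresholding denoiser in place of the RBM-based denoiser the paper plugs into the same iteration; all sizes and tuning constants are assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, m, s = 400, 200, 20                       # illustrative sizes
x_star = np.zeros(n)
x_star[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m) # normalized Gaussian matrix
y = A @ x_star                               # noiseless measurements

def eta(v, t):
    # soft-thresholding denoiser (simple i.i.d. sparse prior); the paper's
    # idea is to replace this step with a denoiser built from a trained RBM
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    t = 1.4 * np.linalg.norm(z) / np.sqrt(m)          # threshold ~ estimated noise level
    x_new = eta(x + A.T @ z, t)
    # residual update with the Onsager correction (||x_new||_0 / m) * z
    z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z
    x = x_new

print("relative error:", np.linalg.norm(x - x_star) / np.linalg.norm(x_star))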
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner Formula

 
 
Stéphane Chrétien just sent me the following:
 
Dear Igor,
 
I hope you are doing fine !
 
I recently read the following paper:

http://arxiv.org/abs/1411.6265

which uses techniques such as Stein's method for proving a CLT that explains precisely the asymptotic phase transition phenomenon in compressed sensing with Gaussian measurements. The readers of Nuit Blanche might also be interested in this research.

Best regards, 
Thank you Stéphane !


Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner Formula by Larry Goldstein, Ivan Nourdin, Giovanni Peccati

Intrinsic volumes of convex sets are natural geometric quantities that also play important roles in applications, such as linear inverse problems with convex constraints, and constrained statistical inference. It is a well-known fact that, given a closed convex cone C⊂Rd, its conic intrinsic volumes determine a probability measure on the finite set {0,1,...,d}, customarily denoted by L(VC). The aim of the present paper is to provide a Berry-Esseen bound for the normal approximation of L(VC), implying a general quantitative central limit theorem (CLT) for sequences of (correctly normalised) discrete probability measures of the type L(VCn), n≥1. This bound shows that, in the high-dimensional limit, most conic intrinsic volumes encountered in applications can be approximated by a suitable Gaussian distribution. Our approach is based on a variety of techniques, namely: (1) Steiner formulae for closed convex cones, (2) Stein's method and second order Poincaré inequality, (3) concentration estimates, and (4) Fourier analysis. Our results explicitly connect the sharp phase transitions, observed in many regularised linear inverse problems with convex constraints, with the asymptotic Gaussian fluctuations of the intrinsic volumes of the associated descent cones. In particular, our findings complete and further illuminate the recent breakthrough discoveries by Amelunxen, Lotz, McCoy and Tropp (2014) and McCoy and Tropp (2014) about the concentration of conic intrinsic volumes and its connection with threshold phenomena. As an additional outgrowth of our work we develop total variation bounds for normal approximations of the lengths of projections of Gaussian vectors on closed convex sets.
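
For context, the two quantities the abstract ties together can be stated compactly (a sketch of background from the cited Amelunxen, Lotz, McCoy and Tropp (2014), not a result of the present paper). The conic intrinsic volumes $v_0(C),\dots,v_d(C)$ of a closed convex cone $C \subset \mathbb{R}^d$ form a probability distribution on $\{0,1,\dots,d\}$, and their mean is the statistical dimension

\[
\delta(C) \;=\; \sum_{k=0}^{d} k\, v_k(C) \;=\; \mathbb{E}\,\big\|\Pi_C(g)\big\|^2, \qquad g \sim \mathcal{N}(0, I_d),
\]

where $\Pi_C$ denotes Euclidean projection onto $C$. The ALMT phase transition says that a regularised linear inverse problem whose descent cone is $C$ succeeds with high probability once the number of Gaussian measurements $m$ exceeds $\delta(C)$ by a few multiples of $\sqrt{d}$, and fails when $m$ falls short of $\delta(C)$ by the same margin; the CLT of the present paper quantifies the Gaussian fluctuations of $\mathcal{L}(V_C)$ around this mean.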
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, February 26, 2015

Tutorial: Submodularity in Machine Learning Applications

Here is a tutorial on Submodularity in Machine Learning Applications by Jeff Bilmes.
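
As a small taste of what the tutorial covers: for monotone submodular objectives (diminishing returns), the simple greedy algorithm carries a (1 - 1/e) approximation guarantee. Here is a toy max-coverage sketch (the instance data is made up for illustration, not taken from the tutorial):

import itertools

# toy max-coverage instance (illustrative data)
universe_sets = {
    "a": {0, 1, 2, 3},
    "b": {3, 4, 5},
    "c": {5, 6, 7, 8},
    "d": {0, 9},
}

def coverage(selection):
    covered = set()
    for name in selection:
        covered |= universe_sets[name]
    return len(covered)            # monotone and submodular in the selection

# greedy: repeatedly pick the set with the largest marginal gain
chosen = []
for _ in range(2):
    gains = {name: coverage(chosen + [name]) - coverage(chosen) for name in universe_sets}
    chosen.append(max(gains, key=gains.get))

best = max(itertools.combinations(universe_sets, 2), key=coverage)
print("greedy:", chosen, coverage(chosen), "| brute force:", list(best), coverage(list(best)))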
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

RSVDPACK: Subroutines for computing partial singular value decompositions via randomized sampling on single core, multi core, and GPU architectures - implementation -


RSVDPACK: Subroutines for computing partial singular value decompositions via randomized sampling on single core, multi core, and GPU architectures by Sergey Voronin, Per-Gunnar Martinsson

This document describes an implementation in C of a set of randomized algorithms for computing partial Singular Value Decompositions (SVDs). The techniques largely follow the prescriptions in the article "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," N. Halko, P.G. Martinsson, J. Tropp, SIAM Review, 53(2), 2011, pp. 217-288, but with some modifications to improve performance. The codes implement a number of low rank SVD computing routines for three different sets of hardware: (1) single core CPU, (2) multi core CPU, and (3) massively multicore GPU.  
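
For readers who want the gist of the algorithm, here is the basic randomized low-rank SVD from the Halko, Martinsson and Tropp survey the abstract cites, sketched in Python/NumPy for readability (RSVDPACK itself is written in C; the parameter defaults below are illustrative):

import numpy as np

def rsvd(A, rank, oversample=10, power_iters=1, rng=np.random.default_rng(0)):
    m, n = A.shape
    k = rank + oversample
    Omega = rng.standard_normal((n, k))       # Gaussian test matrix
    Y = A @ Omega                             # sample the range of A
    for _ in range(power_iters):              # power iterations sharpen the spectrum
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis with A ~ Q Q^T A
    B = Q.T @ A                               # small k x n matrix
    Uh, S, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uh)[:, :rank], S[:rank], Vt[:rank]

# quick check on a synthetic low-rank matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 300))
U, S, Vt = rsvd(A, rank=20)
print("relative reconstruction error:", np.linalg.norm(A - (U * S) @ Vt) / np.linalg.norm(A))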
The implementations are on Sergey Voronin's GitHub.
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, February 25, 2015

Unsupervised Learning of Acoustic Features Via Deep Canonical Correlation Analysis


Unsupervised Learning of Acoustic Features Via Deep Canonical Correlation Analysis by Weiran Wang, Raman Arora, Karen Livescu, Jeff Bilmes

It has been previously shown that, when both acoustic and articulatory training data are available, it is possible to improve phonetic recognition accuracy by learning acoustic features from this multi-view data with canonical correlation analysis (CCA). In contrast with previous work based on linear or kernel CCA, we use the recently proposed deep CCA, where the functional form of the feature mapping is a deep neural network. We apply the approach on a speaker-independent phonetic recognition task using data from the University of Wisconsin X-ray Microbeam Database. Using a tandem-style recognizer on this task, deep CCA features improve over earlier multi-view approaches as well as over articulatory inversion and typical neural network-based tandem features. We also present a new stochastic training approach for deep CCA, which produces both faster training and better-performing features.
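
As a baseline for the abstract, here is plain linear CCA in NumPy (a sketch, not the paper's code); deep CCA replaces the two linear maps below with deep networks trained to maximize the same canonical correlations. The synthetic data and the regularization constant are illustrative.

import numpy as np

def linear_cca(X, Y, k, reg=1e-4):
    # center both views and form (regularized) covariance blocks
    X = X - X.mean(0); Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # whiten both views, then SVD of the whitened cross-covariance
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, S, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt[:k].T, S[:k]   # projections and canonical correlations

rng = np.random.default_rng(0)
z = rng.standard_normal((1000, 2))                # shared latent signal across views
X = z @ rng.standard_normal((2, 10)) + 0.5 * rng.standard_normal((1000, 10))
Y = z @ rng.standard_normal((2, 8)) + 0.5 * rng.standard_normal((1000, 8))
_, _, corrs = linear_cca(X, Y, k=2)
print("top canonical correlations:", corrs)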
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Scalable audio separation with light Kernel Additive Modelling - implementation -

Using parameter compression to scale audio source separation down to low-power systems, this is what we have today: Scalable audio separation with light Kernel Additive Modelling by Antoine Liutkus, Derry Fitzgerald, Zafar Rafii
Recently, Kernel Additive Modelling (KAM) was proposed as a unified framework to achieve multichannel audio source separation. Its main feature is to use kernel models for locally describing the spectrograms of the sources. Such kernels can capture source features such as repetitivity, stability over time and/or frequency, self-similarity, etc. KAM notably subsumes many popular and effective methods from the state of the art, including REPET and harmonic/percussive separation with median filters. However, it also comes with an important drawback in its initial form: its memory usage badly scales with the number of sources. Indeed, KAM requires the storage of the full-resolution spectrogram for each source, which may become prohibitive for full-length tracks or many sources. In this paper, we show how it can be combined with a fast compression algorithm of its parameters to address the scalability issue, thus enabling its use on small platforms or mobile devices.  
The implementation is on Antoine Liutkus' KAML page.
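
To see the kind of kernel model KAM generalizes, here is a sketch of one special case the abstract names, harmonic/percussive separation with median filters over a magnitude spectrogram (the kernel sizes and the soft-mask rule are assumptions, not the paper's settings):

import numpy as np
from scipy.ndimage import median_filter

def hp_separate(S, kernel=17):
    # S is a magnitude spectrogram (frequency x time); harmonic content is
    # smooth along time, percussive content along frequency, so median-filter
    # kernels in those directions give an estimate of each source
    harmonic = median_filter(S, size=(1, kernel))    # smooth across time
    percussive = median_filter(S, size=(kernel, 1))  # smooth across frequency
    total = harmonic + percussive + 1e-12
    return S * harmonic / total, S * percussive / total  # soft masks

# toy spectrogram: one steady tone (harmonic) + one broadband click (percussive)
S = np.zeros((128, 256))
S[40, :] = 1.0      # horizontal line: sustained tone
S[:, 100] = 1.0     # vertical line: transient
H, P = hp_separate(S)
print("tone energy in harmonic part:    %.2f" % (H[40, :].sum() / S[40, :].sum()))
print("click energy in percussive part: %.2f" % (P[:, 100].sum() / S[:, 100].sum()))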
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, February 24, 2015

Crossing the P-river (follow-up)

A follow-up to yesterday's blog entry: how much time did it take to gather all the information from an entire bacterial genome using the USB-connected instrument, the one that looks like a large memory stick, featured in Crossing the P-river: A complete bacterial genome assembled de novo using only nanopore sequencing data?

 


Wow ! You're seeing history in the making.
 
Credit photo: this forum thread
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Compressive spectrum sensing of radar pulses based on photonic techniques

 
 
Here is another instance of Compressive Sensing hardware: Compressive spectrum sensing of radar pulses based on photonic techniques by Qiang Guo, Yunhua Liang, Minghua Chen, Hongwei Chen, and Shizhong Xie

We present a photonic-assisted compressive sampling (CS) system which can acquire about 10^6 radar pulses per second spanning from 500 MHz to 5 GHz with a 520-MHz analog-to-digital converter (ADC). A rectangular pulse, a linear frequency modulated (LFM) pulse and a pulse stream are each reconstructed faithfully through this system with a sliding window-based recovery algorithm, demonstrating the feasibility of the proposed photonic-assisted CS system in spectral estimation for radar pulses.
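
On the recovery side, here is a generic software sketch of sub-Nyquist spectral estimation by sparse recovery (orthogonal matching pursuit over a DFT dictionary); it is illustrative only and does not reproduce the paper's photonic front end or its sliding-window algorithm:

import numpy as np

rng = np.random.default_rng(0)
N, M, K = 512, 96, 3                      # Nyquist grid, sub-Nyquist samples, tones (illustrative)

freqs = rng.choice(N // 2, K, replace=False)
t = np.arange(N)
x = sum(np.cos(2 * np.pi * f * t / N) for f in freqs)    # sparse-in-frequency signal

keep = np.sort(rng.choice(N, M, replace=False))           # random sub-Nyquist sample times
y = x[keep].astype(complex)
F = np.fft.fft(np.eye(N), axis=0) / np.sqrt(N)
A = np.conj(F[keep, :])                                   # inverse-DFT dictionary, sampled rows

# orthogonal matching pursuit: greedily pick bins, refit by least squares
support, r = [], y.copy()
for _ in range(2 * K):                                    # each real tone occupies 2 bins
    support.append(int(np.argmax(np.abs(A.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

print("true bins: ", sorted(set(freqs) | {N - f for f in freqs}))
print("found bins:", sorted(set(support)))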
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
