
Friday, May 22, 2015

PCANet: A Simple Deep Learning Baseline for Image Classification? - implementation -

Iteration of matrix factorizations as a way to build deep architectures. Interesting !



PCANet: A Simple Deep Learning Baseline for Image Classification? by Tsung-Han Chan, Kui Jia, Shenghua Gao, Jiwen Lu, Zinan Zeng, Yi Ma

In this work, we propose a very simple deep learning network for image classification which comprises only the very basic data processing components: cascaded principal component analysis (PCA), binary hashing, and block-wise histograms. In the proposed architecture, PCA is employed to learn multistage filter banks. It is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus named as a PCA network (PCANet) and can be designed and learned extremely easily and efficiently. For comparison and better understanding, we also introduce and study two simple variations to the PCANet, namely the RandNet and LDANet. They share the same topology of PCANet but their cascaded filters are either selected randomly or learned from LDA. We have tested these basic networks extensively on many benchmark visual datasets for different tasks, such as LFW for face verification, MultiPIE, Extended Yale B, AR, FERET datasets for face recognition, as well as MNIST for hand-written digits recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state of the art features, either prefixed, highly hand-crafted or carefully learned (by DNNs). Even more surprisingly, it sets new records for many classification tasks in Extended Yale B, AR, FERET datasets, and MNIST variations. Additional experiments on other public datasets also demonstrate the potential of the PCANet serving as a simple but highly competitive baseline for texture classification and object recognition.
An implementation of PCANet is on Tsung-Han's source code page.
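For intuition, here is a rough numpy sketch of how one PCANet stage could learn its filters from image patches; the patch size, number of filters, and function name are arbitrary choices for illustration, not the authors' code:

```python
import numpy as np

def pca_filters(images, k=7, n_filters=8):
    """Learn one stage of PCANet-style filters: the leading principal components
    of mean-removed k x k patches. A rough sketch of the idea, not the authors' code."""
    patches = []
    for img in images:                         # images: list of 2-D grayscale arrays
        H, W = img.shape
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())   # remove the patch mean, as in the paper
    X = np.asarray(patches)                    # (num_patches, k*k)
    # Leading right singular vectors of the patch matrix give the PCA filter bank.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, k, k)

# Each filter is then convolved with the (zero-padded) input maps; the outputs of
# the last stage are binarized, hashed into integers, and pooled with block-wise
# histograms to form the final feature vector.
```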
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Four million page views: a million here, a million there and soon enough we're talking real readership...

 
 
I know it's just a number, but there is some Long Distance Blogging behind it, with about a million page views per year. Here are the historical figures:
A page view is not the same as a "unique visit"; here is that figure:

 which amounts to about 650 unique visits per day on average, a number that is consistent with Google's session numbers.
Here are some of the tags developed over the years:
  • CS (2161) for Compressive Sensing
  • MF (514) for Matrix Factorization
  • implementation (355) features work that has a code implementation associated with it.
  • ML (208)  for Machine Learning
There is also the social network "extension" of the blog, and finally, the Paris Machine Learning Meetup.
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, May 21, 2015

The Great Convergence: FlowNet: Learning Optical Flow with Convolutional Networks

The great convergence is upon us; here is clue #734: Andrew Davison mentioning recent work on optical flow using CNNs.

Whoa, this is a wake up call... CNN based learned optical flow (trained on synthetic flying chairs!) running at 10fps on a laptop which claims state of the art accuracy among real-time optical flow methods. So time for those of us working on non learning-based vision to pack up and go home?
This is a pretty powerful statement from one of the specialists of SLAM. Here is the paper:
 
 
 
 

FlowNet: Learning Optical Flow with Convolutional Networks by Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox

Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations.
Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.
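The second architecture's correlation layer is the interesting bit. Below is a toy numpy illustration of what such a layer computes; the function name, shapes, and normalization are assumptions made for the sketch, not the authors' implementation:

```python
import numpy as np

def correlation_layer(f1, f2, max_disp=4):
    """Toy version of a correlation layer: for every location in feature map f1,
    take dot products with f2 feature vectors at displacements within +/- max_disp.
    Inputs have shape (C, H, W); the output has shape (D, H, W) with
    D = (2*max_disp + 1)**2. Illustration only."""
    C, H, W = f1.shape
    d = 2 * max_disp + 1
    out = np.zeros((d * d, H, W))
    f2_pad = np.pad(f2, ((0, 0), (max_disp, max_disp), (max_disp, max_disp)))
    idx = 0
    for dy in range(d):
        for dx in range(d):
            shifted = f2_pad[:, dy:dy + H, dx:dx + W]
            out[idx] = (f1 * shifted).sum(axis=0) / C   # normalized dot product
            idx += 1
    return out
```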
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

CSjob: Post-Doc on Structured Low-Rank Approximations, Grenoble, France

Julien Mairal just sent me the following announcement:

Hi Igor,


here is a call for a post-doc for an ANR project.
http://lear.inrialpes.fr/people/mairal/resources/pdf/postdoc_macaron.pdf
When you have time, could you advertise it on your blog ? This is about local low-rank approximations for applications in bioinformatics and image processing. Thus, this would be a good match for nuit blanche !


Best regards.

From the announcement:


Research Topic and Objectives:
The goal of the MACARON project is to use data for solving scientific problems and automatically converting data into scientific knowledge by using machine learning techniques. We propose a research direction motivated by applications in bioinformatics and image processing. Low-rank matrix approximation is a popular tool for building web recommender systems [1] and plays an important role in large-scale classification problems in computer vision [2]. In many applications, we need however a different point of view. Data matrices are not exactly low-rank, but admit local low-rank structures [3]. This shift of paradigm is expected to achieve groundbreaking improvements over the classical low-rank paradigm, but it raises significant challenges that should be solved during the post-doc. The first objective is to develop new methodological tools to efficiently learn local low-rank structures in data. This will require both modeling skills (designing the right model) and good knowledge of optimization techniques (for efficient learning). The second objective is to adapt these tools to genomic imputation problems and inverse problems in image processing.

 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Low-rank Modeling and its Applications in Image Analysis




Xiaowei Zhou sent me the following the other day:


Dear Dr Carron,

We had the following survey paper published several months ago:

X. Zhou, C. Yang, H. Zhao, W. Yu. Low-Rank Modeling and its Applications in Image Analysis. ACM Computing Surveys, 47(2): 36, 2014. (http://arxiv.org/abs/1401.3409)

Could you kindly post it on your matrix factorization jungle website? I hope it will be helpful to some new comers.

Thanks,

Xiaowei

Thanks Xiaowei ! Here is the review that I will shortly add to the Advanced Matrix Factorization Jungle page. 

Low-rank Modeling and its Applications in Image Analysis by Xiaowei Zhou, Can Yang, Hongyu Zhao, Weichuan Yu. ACM Computing Surveys, 47(2): 36, 2014.
Low-rank modeling generally refers to a class of methods that solves problems by representing variables of interest as low-rank matrices. It has achieved great success in various fields including computer vision, data mining, signal processing, and bioinformatics. Recently, much progress has been made in theories, algorithms, and applications of low-rank modeling, such as exact low-rank matrix recovery via convex programming and matrix completion applied to collaborative filtering. These advances have brought more and more attention to this topic. In this article, we review the recent advances of low-rank modeling, the state-of-the-art algorithms, and the related applications in image analysis. We first give an overview of the concept of low-rank modeling and the challenging problems in this area. Then, we summarize the models and algorithms for low-rank matrix recovery and illustrate their advantages and limitations with numerical experiments. Next, we introduce a few applications of low-rank modeling in the context of image analysis. Finally, we conclude this article with some discussions.

From the paper:

In this paper, we have introduced the concept of low-rank modeling and reviewed some representative low-rank models, algorithms and applications in image analysis. For additional reading on theories, algorithms and applications, the readers are referred to online documents such as the Matrix Factorization Jungle and the Sparse and Low-rank Approximation Wiki, which are updated on a regular basis.
Yes !
I also note that in the Robust PCA comparison, GoDec consistently does better than the other solvers. GoDec also happens to be the solver Cable and I used in "It's CAI, Cable And Igor's Adventures in Matrix Factorization". Here is an example: CAI: A Glimpse of Lana and Robust PCA
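For readers who want to play with the idea, here is a minimal GoDec-style sketch. It assumes a plain truncated SVD in place of the bilateral random projections of the original algorithm, and the rank, cardinality, and iteration count are illustrative defaults:

```python
import numpy as np

def godec(X, rank=2, card=None, n_iter=20):
    """Minimal GoDec-style decomposition X ~ L + S with rank(L) <= rank and
    S having at most `card` nonzeros. Sketch only."""
    if card is None:
        card = int(0.1 * X.size)            # assume ~10% of entries are outliers
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # Low-rank step: best rank-r approximation of X - S.
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: keep the `card` largest-magnitude entries of X - L.
        R = X - L
        S = np.zeros_like(X)
        keep = np.argpartition(np.abs(R).ravel(), -card)[-card:]
        S.ravel()[keep] = R.ravel()[keep]
    return L, S
```

In a background-subtraction setting like the Lana example, each column of X is a vectorized frame, L captures the static background, and S captures the moving foreground.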

 
 
More can be found here.
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, May 20, 2015

Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems - implementation -

So phase retrieval can actually be fast and with near-optimal sample complexity ! Wow !
 

Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems by Yuxin Chen, Emmanuel J. Candès

We consider the fundamental problem of solving quadratic systems of equations in $n$ variables, where $y_i = |\langle \mathbf{a}_i, \mathbf{x} \rangle|^2$, $i = 1, \ldots, m$, and $\mathbf{x} \in \mathbb{R}^n$ is unknown. We propose a novel method, which starting with an initial guess computed by means of a spectral method, proceeds by minimizing a nonconvex functional as in the Wirtinger flow approach. There are several key distinguishing features, most notably, a distinct objective functional and novel update rules, which operate in an adaptive fashion and drop terms bearing too much influence on the search direction. These careful selection rules provide a tighter initial guess, better descent directions, and thus enhanced practical performance. On the theoretical side, we prove that for certain unstructured models of quadratic systems, our algorithms return the correct solution in linear time, i.e. in time proportional to reading the data $\{\mathbf{a}_i\}$ and $\{y_i\}$, as soon as the ratio $m/n$ between the number of equations and unknowns exceeds a fixed numerical constant. We extend the theory to deal with noisy systems in which we only have $y_i \approx |\langle \mathbf{a}_i, \mathbf{x} \rangle|^2$ and prove that our algorithms achieve a statistical accuracy, which is nearly un-improvable. We complement our theoretical study with numerical examples showing that solving random quadratic systems is both computationally and statistically not much harder than solving linear systems of the same size — hence the title of this paper. For instance, we demonstrate empirically that the computational cost of our algorithm is about four times that of solving a least-squares problem of the same size.
The attendant code is here: http://web.stanford.edu/~yxchen/TWF/
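To give a flavor of the approach, here is a rough sketch of a truncated-Wirtinger-flow-style solver for the real Gaussian model. The spectral initialization and the idea of dropping measurements with outsized influence follow the paper, but the truncation rule and step size below are simplified placeholders; for the real thing, use the attendant code above.

```python
import numpy as np

def truncated_wf(A, y, n_iter=500, mu=0.2, trunc=3.0):
    """Sketch of a truncated-Wirtinger-flow-style solver for y_i ~ <a_i, x>^2
    with real Gaussian measurement vectors a_i (rows of A)."""
    m, n = A.shape
    # Spectral initialization: leading eigenvector of (1/m) sum_i y_i a_i a_i^T,
    # scaled so that ||z||^2 matches the average measurement.
    Y = (A.T * y) @ A / m
    _, V = np.linalg.eigh(Y)
    z = V[:, -1] * np.sqrt(y.mean())
    for _ in range(n_iter):
        Az = A @ z
        r = Az**2 - y
        # Crude truncation: ignore equations whose residual is abnormally large.
        mask = np.abs(r) <= trunc * np.mean(np.abs(r))
        grad = (A[mask].T @ (r[mask] * Az[mask])) / m
        z = z - (mu / y.mean()) * grad
    return z
```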
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, May 19, 2015

Identifiability in Blind Deconvolution with Subspace or Sparsity Constraints

Here are some new sample complexity results for blind deconvolution, a certain kind of matrix factorization problem.


Identifiability in Blind Deconvolution with Subspace or Sparsity Constraints by Yanjun Li, Kiryung Lee, Yoram Bresler

Blind deconvolution (BD), the resolution of a signal and a filter given their convolution, arises in many applications. Without further constraints, BD is ill-posed. In practice, subspace or sparsity constraints have been imposed to reduce the search space, and have shown some empirical success. However, existing theoretical analysis on uniqueness in BD is rather limited. As an effort to address the still mysterious question, we derive sufficient conditions under which two vectors can be uniquely identified from their circular convolution, subject to subspace or sparsity constraints. These sufficient conditions provide the first algebraic sample complexities for BD. We first derive a sufficient condition that applies to almost all bases or frames. For blind deconvolution of vectors in $\mathbb{C}^n$, with two subspace constraints of dimensions $m_1$ and $m_2$, the required sample complexity is $n\geq m_1m_2$. Then we impose a sub-band structure on one basis, and derive a sufficient condition that involves a relaxed sample complexity $n\geq m_1+m_2-1$, which we show to be optimal. We present the extensions of these results to BD with sparsity constraints or mixed constraints, with the sparsity level replacing the subspace dimension. The cost for the unknown support in this case is an extra factor of 2 in the sample complexity.
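As a quick reminder of the measurement model, here is a tiny numpy illustration of circular convolution under two subspace constraints; the dimensions are arbitrary, chosen so that $n \geq m_1 m_2$ as in the first sufficient condition:

```python
import numpy as np

# The two signals live in known subspaces: w = B u and x = C v. Identifiability
# asks when (u, v) can be recovered, up to scaling, from y = w (*) x alone.
n, m1, m2 = 64, 5, 7                      # n >= m1 * m2 here
rng = np.random.default_rng(0)
B = rng.standard_normal((n, m1)) + 1j * rng.standard_normal((n, m1))
C = rng.standard_normal((n, m2)) + 1j * rng.standard_normal((n, m2))
u = rng.standard_normal(m1) + 1j * rng.standard_normal(m1)
v = rng.standard_normal(m2) + 1j * rng.standard_normal(m2)
w, x = B @ u, C @ v
# Circular convolution = pointwise product in the DFT domain.
y = np.fft.ifft(np.fft.fft(w) * np.fft.fft(x))
# The scaling ambiguity is unavoidable: (alpha*w, x/alpha) gives the same y.
assert np.allclose(y, np.fft.ifft(np.fft.fft(2 * w) * np.fft.fft(x / 2)))
```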
 

Image Credit: NASA/JPL-Caltech
This image was taken by Navcam: Left B (NAV_LEFT_B) onboard NASA's Mars rover Curiosity on Sol 987 (2015-05-17 08:39:24 UTC).
Full Resolution 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tensor time: Adaptive Higher-order Spectral Estimators / Bayesian Sparse Tucker Models for Dimension Reduction and Tensor Completion

 
Adaptive Higher-order Spectral Estimators by David Gerard, Peter Hoff

Many applications involve estimation of a signal matrix from a noisy data matrix. In such cases, it has been observed that estimators that shrink or truncate the singular values of the data matrix perform well when the signal matrix has approximately low rank. In this article, we generalize this approach to the estimation of a tensor of parameters from noisy tensor data. We develop new classes of estimators that shrink or threshold the mode-specific singular values from the higher-order singular value decomposition. These classes of estimators are indexed by tuning parameters, which we adaptively choose from the data by minimizing Stein's unbiased risk estimate. In particular, this procedure provides a way to estimate the multilinear rank of the underlying signal tensor. Using simulation studies under a variety of conditions, we show that our estimators perform well when the mean tensor has approximately low multilinear rank, and perform competitively when the signal tensor does not have approximately low multilinear rank. We illustrate the use of these methods in an application to multivariate relational data.
Bayesian Sparse Tucker Models for Dimension Reduction and Tensor Completion by Qibin Zhao, Liqing Zhang, Andrzej Cichocki

Tucker decomposition is the cornerstone of modern machine learning on tensorial data analysis, which has attracted considerable attention for multiway feature extraction, compressive sensing, and tensor completion. The most challenging problem is related to determination of model complexity (i.e., multilinear rank), especially when noise and missing data are present. In addition, existing methods cannot take into account uncertainty information of latent factors, resulting in low generalization performance. To address these issues, we present a class of probabilistic generative Tucker models for tensor decomposition and completion with structural sparsity over multilinear latent space. To exploit structural sparse modeling, we introduce two group sparsity inducing priors by hierarchical representation of Laplace and Student-t distributions, which facilitates fully posterior inference. For model learning, we derived variational Bayesian inferences over all model (hyper)parameters, and developed efficient and scalable algorithms based on multilinear operations. Our methods can automatically adapt model complexity and infer an optimal multilinear rank by the principle of maximum lower bound of model evidence. Experimental results and comparisons on synthetic, chemometrics and neuroimaging data demonstrate remarkable performance of our models for recovering ground-truth of multilinear rank and missing entries.
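As a point of reference for both abstracts, here is a small numpy sketch of a hard multilinear-rank truncation via the higher-order SVD. It is only a stand-in for the adaptive shrinkage of the first paper and the Bayesian rank determination of the second; the example tensor and ranks are arbitrary.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding of a tensor: mode-k fibers become the rows' columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Truncate a tensor to a given multilinear rank via the higher-order SVD:
    keep the leading left singular vectors of each mode unfolding and project."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T.copy()
    for mode, U in enumerate(factors):          # core = T x_1 U1^T x_2 U2^T ...
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    approx = core
    for mode, U in enumerate(factors):          # map the core back to the full space
        approx = np.moveaxis(np.tensordot(U, approx, axes=(1, mode)), 0, mode)
    return approx

# Example: a noisy tensor with multilinear rank (2, 3, 2).
rng = np.random.default_rng(1)
G = rng.standard_normal((2, 3, 2))
A1, A2, A3 = (rng.standard_normal((10, 2)), rng.standard_normal((12, 3)),
              rng.standard_normal((8, 2)))
T = np.einsum('abc,ia,jb,kc->ijk', G, A1, A2, A3)
T = T + 0.1 * rng.standard_normal(T.shape)
T_hat = truncated_hosvd(T, ranks=(2, 3, 2))
```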
 


 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, May 18, 2015

Blue Skies: Foundational principles for large scale inference, Stochastic Simulation and Optimization Methods in Signal Processing, Streaming and Online Data Mining , Kernel Models and more.

The following papers and presentations provide a bird's eye view as to where we are on specific topics related to some of the issues discussed here on Nuit Blanche. Enjoy ! 


Foundational principles for large scale inference: Illustrations through correlation mining by Alfred O. Hero, Bala Rajaratnam

When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number $n$ of acquired samples (statistical replicates) is far fewer than the number $p$ of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity however has received relatively less attention, especially in the setting when the sample size $n$ is fixed, and the dimension $p$ grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

Stochastic Simulation and Optimization Methods in Signal Processing

Modern signal processing (SP) methods rely very heavily on probability and statistics to solve challenging SP problems. Expectations and demands are constantly rising, and SP methods are now expected to deal with ever more complex models, requiring ever more sophisticated computational inference techniques. This has driven the development of statistical SP methods based on stochastic simulation and optimization. Stochastic simulation and optimization algorithms are computationally intensive tools for performing statistical inference in models that are analytically intractable and beyond the scope of deterministic inference methods. They have been recently successfully applied to many difficult problems involving complex statistical models and sophisticated (often Bayesian) statistical inference techniques. This paper presents a tutorial on stochastic simulation and optimization methods in signal and image processing and points to some interesting research problems. The paper addresses a variety of high-dimensional Markov chain Monte Carlo (MCMC) methods as well as deterministic surrogate methods, such as variational Bayes, the Bethe approach, belief and expectation propagation and approximate message passing algorithms. It also discusses a range of optimization methods that have been adopted to solve stochastic problems, as well as stochastic methods for deterministic optimization. Subsequently, areas of overlap between simulation and optimization, in particular optimization-within-MCMC and MCMC-driven optimization are discussed.

Streaming and Online Data Mining by Edo Liberty
The talk provides a quick introduction to streaming and online data mining algorithms. These algorithms are required to summarize, process, or act upon an arbitrary sequence of events (data records). At every point in time, future events/data are unknown and past events are too numerous to store. While this computational model is severely restricting, it is, de facto, the working model in many large scale data systems. This talk introduces some classic and some new results in the field and shows how they apply to email threading, news story categorization, clustering, regression, and factor or principal component analysis.

Also from Edo Liberty, here is this presentation on Low Rank Approximation of Matrices.
And finally, from Johan Suykens' main page:

  • "Kernel methods for complex networks and big data": invited lecture at Statlearn 2015, Grenoble 2015: [pdf]
  • "Fixed-size Kernel Models for Big Data": invited lectures at BigDat 2015, International Winter School on Big Data, Tarragona, Spain 2015:
  • - Part I: Support vector machines and kernel methods: an introduction [pdf]
  • - Part II: Fixed-size kernel models for mining big data [pdf] [video]
  • - Part III: Kernel spectral clustering for community detection in big data networks [pdf]
  • "Fixed-size kernel methods for data-driven modelling": plenary talk at ICLA 2014, International Conference on Learning and Approximation, Shanghai China 2014 [pdf]
  • "Kernel-based modelling for complex networks": plenary talk at NOLTA 2014, International Symposium on Nonlinear Theory and its Applications, Luzern Switzerland 2014 [pdf]
  • "Learning with matrix and tensor based models using low-rank penalties": invited talk at Workshop on Nonsmooth optimization in machine learning, Liege Belgium 2013 [pdf]
  • Invited lecture series - Leerstoel VUB 2012 [pdf]
  • Advanced data-driven black-box modelling - inaugural lecture [pdf]
  • Support vector machines and kernel methods in systems, modelling and control [pdf]
  • Data-driven modelling for biomedicine and bioinformatics [pdf]
  • Kernel methods for exploratory data analysis and community detection [pdf]
  • Complex networks, synchronization and cooperative behaviour [pdf]
 
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tensor sparsification via a bound on the spectral norm of random tensors

Tensor sparsification as a way to do dimensionality reduction:

Tensor sparsification via a bound on the spectral norm of random tensors by Nam H. Nguyen, Petros Drineas, Trac D. Tran

Given an order-$d$ tensor $\mathcal{A} \in \mathbb{R}^{n \times n \times \cdots \times n}$, we present a simple, element-wise sparsification algorithm that zeroes out all sufficiently small elements of $\mathcal{A}$, keeps all sufficiently large elements of $\mathcal{A}$, and retains some of the remaining elements with probabilities proportional to the square of their magnitudes. We analyze the approximation accuracy of the proposed algorithm using a powerful inequality that we derive. This inequality bounds the spectral norm of a random tensor and is of independent interest. As a result, we obtain novel bounds for the tensor sparsification problem.
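Here is a small numpy sketch of the element-wise scheme described in the abstract; the thresholds and keep fraction below are arbitrary illustrative choices, not the values from the paper's analysis:

```python
import numpy as np

def sparsify_tensor(A, eps=0.05, keep_frac=0.3, rng=None):
    """Element-wise sparsification in the spirit of the paper: zero out entries
    below a small threshold, always keep entries above a large threshold, and keep
    intermediate entries independently with probability proportional to their
    squared magnitude, rescaling kept entries so the sketch is unbiased."""
    rng = np.random.default_rng() if rng is None else rng
    small = eps * np.abs(A).max()
    large = np.quantile(np.abs(A), 1 - keep_frac)
    S = np.zeros_like(A)
    # Keep large entries exactly.
    big = np.abs(A) >= large
    S[big] = A[big]
    # Middle entries: keep with probability p ~ A_ij^2, rescale kept entries by 1/p.
    mid = (~big) & (np.abs(A) > small)
    p = np.clip(A[mid] ** 2 / large ** 2, 0.0, 1.0)
    coins = rng.random(p.shape) < p
    vals = np.zeros_like(A[mid])
    vals[coins] = A[mid][coins] / p[coins]
    S[mid] = vals
    return S
```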
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
