
## Monday, October 05, 2015

### Compressive Imaging in Scanning Transmission Electron Microscopy and Microwave Ghost Imaging

Here are two new hardware implementations of randomized signal acquisition:

The concept of compressive sensing was recently proposed to significantly reduce the electron dose in scanning transmission electron microscopy (STEM) while still maintaining the main features in the image. Here, an experimental setup based on an electromagnetic shutter placed in the condenser plane of a STEM is proposed. The shutter blanks the beam following a random pattern while the scanning coils are moving the beam in the usual scan pattern. Experimental images at both medium scale and high resolution are acquired and then reconstructed based on a discrete cosine algorithm. The obtained results confirm the predicted usefulness of compressive sensing in experimental STEM even though some remaining artifacts need to be resolved.
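
To make the acquisition model concrete, here is a minimal Python sketch (our own illustration, not the authors' code) of the kind of reconstruction involved: scan positions blanked by the random shutter are treated as missing measurements, and the image is recovered by $\ell_1$ minimization in the DCT basis via iterative soft thresholding (ISTA). All names and parameters here are illustrative.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):  return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')
def idct2(c): return idct(idct(c, axis=0, norm='ortho'), axis=1, norm='ortho')

def recover(y, mask, lam=0.05, n_iter=200):
    """ISTA for min_c 0.5*||mask*idct2(c) - y||^2 + lam*||c||_1.

    y    : measured image with blanked (unmeasured) pixels set to 0
    mask : boolean array, True where the beam was not blanked
    """
    c = np.zeros_like(y)
    for _ in range(n_iter):
        resid = mask * idct2(c) - y            # residual on measured pixels only
        c = c - dct2(mask * resid)             # gradient step (operator norm <= 1)
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold
    return idct2(c)

# simulate: keep ~30% of scan positions of a smooth test image
rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = np.cos(6 * xx) * np.cos(4 * yy)
mask = rng.random(img.shape) < 0.3
rec = recover(mask * img, mask)
print(np.linalg.norm(rec - img) / np.linalg.norm(img))  # small relative error
```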

Microwave Surveillance based on Ghost Imaging and Distributed Antennas
Xiaopeng Wang, Zihuai Lin

In this letter, we propose a microwave surveillance scheme based on ghost imaging (GI) and distributed antennas. By analyzing its imaging resolution and sampling requirements, we demonstrate the potential of microwave GI to achieve high-quality surveillance performance with low system complexity. The theoretical analysis and the effectiveness of the proposed microwave surveillance method are also validated via simulations.


### On the Complexity of Robust PCA and $\ell_1$-norm Low-Rank Matrix Approximation - implementation -

On the Complexity of Robust PCA and $\ell_1$-norm Low-Rank Matrix Approximation
Nicolas Gillis, Stephen A. Vavasis
The low-rank matrix approximation problem with respect to the component-wise $\ell_1$-norm ($\ell_1$-LRA), which is closely related to robust principal component analysis (PCA), has become a very popular tool in data mining and machine learning. Robust PCA aims at recovering a low-rank matrix that was perturbed with sparse noise, with applications for example in foreground-background video separation. Although $\ell_1$-LRA is strongly believed to be NP-hard, there is, to the best of our knowledge, no formal proof of this fact. In this paper, we prove that $\ell_1$-LRA is NP-hard, already in the rank-one case, using a reduction from MAX CUT. Our derivations draw interesting connections between $\ell_1$-LRA and several other well-known problems, namely, robust PCA, $\ell_0$-LRA, binary matrix factorization, a particular densest bipartite subgraph problem, the computation of the cut norm of $\{-1,+1\}$ matrices, and the discrete basis problem, all of which we also prove to be NP-hard.

An implementation is available here: https://sites.google.com/site/nicolasgillis/code. From the page:
This Matlab code implements an exact cyclic coordinate descent method for the component-wise $\ell_1$-norm matrix approximation problem: given an m-by-n matrix M and a factorization rank r, find an m-by-r matrix U and an r-by-n matrix V such that $\|M-UV\|_1 = \sum_{i,j} |M-UV|_{ij}$ is minimized. By default, it initializes the algorithm with the optimal solution of the $\ell_2$-norm problem using the truncated singular value decomposition provided by Matlab.
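
For intuition, here is a minimal Python sketch of the rank-one case (ours, not the authors' Matlab code): with V fixed, each entry of U solves a weighted-median problem, and cyclic coordinate descent alternates between the two factors. Function names and the toy data are our own.

```python
import numpy as np

def weighted_median(values, weights):
    """Return a minimizer of sum_j weights[j] * |values[j] - x|."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    csum = np.cumsum(w)
    # first index where cumulative weight reaches half the total
    idx = np.searchsorted(csum, 0.5 * csum[-1])
    return v[idx]

def l1_lra_rank1(M, n_iter=50):
    """Rank-one l1 low-rank approximation by cyclic coordinate descent.

    Initialized, as in the released code, with the l2-optimal rank-one
    factors from the truncated SVD.
    """
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    u = U[:, 0] * S[0]
    v = Vt[0, :].copy()
    for _ in range(n_iter):
        # update u_i: minimize sum_j |M_ij - u_i v_j|
        #           = sum_j |v_j| * |M_ij / v_j - u_i|   (over v_j != 0)
        nz = v != 0
        for i in range(M.shape[0]):
            u[i] = weighted_median(M[i, nz] / v[nz], np.abs(v[nz]))
        nz = u != 0
        for j in range(M.shape[1]):
            v[j] = weighted_median(M[nz, j] / u[nz], np.abs(u[nz]))
    return u, v

M = np.outer(np.random.randn(30), np.random.randn(20)) + np.random.randn(30, 20)
u, v = l1_lra_rank1(M)
print(np.abs(M - np.outer(u, v)).sum())  # the l1 objective after descent
```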


## Sunday, October 04, 2015

### Sunday Morning Insight: The Second Inflection Point in Genome Sequencing

This week, I was preparing a presentation for Company X and needed to keep abreast of the latest cost-per-genome figure computed at genome.gov. Well, thanks to a retweet by Michael, it looks like October 1 saw the second inflection point we've been waiting for. Time to change the slides, and time to think about The Important Things after Commodity Sequencing.

Let us note that this cost figure still does not seem to include either Pacific Biosciences RS II or Oxford Nanopore technology costs.

Related:


## Saturday, October 03, 2015

### This Is Still Not the Website of the Nuit Blanche Festival

The attractions of the Nuits Blanches in Paris can be found here, those of Brussels here, and those of Toronto here.

If, on the other hand, you are interested in the deluge of data exceeding the number of stars in the universe, and in how to use it to better understand the world, then welcome, you have come to the right place. For information, Machine Learning meetups have been taking place in Paris for more than two years.
One member of the Machine Learning community, Samim Winiger, should be part of the program of a future Nuit Blanche; here are his latest posts about these activities, in which he uses sophisticated machine learning models to produce images of dreams and comedy:
Here are some of the videos generated by some of the models used by Samim.


### Saturday Morning Video: Gaussian Processes with Neil Lawrence


## Friday, October 02, 2015

### FastEmbed: Compressive spectral embedding: sidestepping the SVD - implementation -

From the paper:
"...In this paper, we tackle these scalability bottlenecks by focusing on what embeddings are actually used for: computing ℓ2-based pairwise similarity metrics typically used for supervised or unsupervised learning. For example, K-means clustering uses pairwise Euclidean distances, and SVM-based classification uses pairwise inner products. We therefore ask the following question: “Is it possible to compute an embedding which captures the pairwise euclidean distances between the rows of the spectral embedding E= [f(σ1)u1···f(σk)uk], while sidestepping the computationally expensive partial SVD?” We answer this question in the affirmative by presenting a compressive algorithm which directly computes a low-dimensional embedding..."

Compressive spectral embedding: sidestepping the SVD by Dinesh Ramasamy, Upamanyu Madhow

Spectral embedding based on the Singular Value Decomposition (SVD) is a widely used "preprocessing" step in many learning tasks, typically leading to dimensionality reduction by projecting onto a number of dominant singular vectors and rescaling the coordinate axes (by a predefined function of the singular value). However, the number of such vectors required to capture problem structure grows with problem size, and even partial SVD computation becomes a bottleneck. In this paper, we propose a low-complexity *compressive spectral embedding* algorithm, which employs random projections and finite order polynomial expansions to compute approximations to SVD-based embedding. For an $m \times n$ matrix with $T$ non-zeros, its time complexity is $O((T+m+n)\log(m+n))$, and the embedding dimension is $O(\log(m+n))$, both of which are independent of the number of singular vectors whose effect we wish to capture. To the best of our knowledge, this is the first work to circumvent this dependence on the number of singular vectors for general SVD-based embeddings. The key to sidestepping the SVD is the observation that, for downstream inference tasks such as clustering and classification, we are only interested in using the resulting embedding to evaluate pairwise similarity metrics derived from the Euclidean norm, rather than capturing the effect of the underlying matrix on arbitrary vectors as a partial SVD tries to do. Our numerical results on network datasets demonstrate the efficacy of the proposed method, and motivate further exploration of its application to large-scale inference tasks.

A Python implementation of FastEmbed is available at: https://bitbucket.org/dineshkr/fastembed/src/NIPS2015
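
To give a feel for the idea, here is a toy Python sketch (ours, not the released FastEmbed code) under simplifying assumptions: instead of computing singular vectors, apply a low-order polynomial of a symmetric matrix to $O(\log n)$ random vectors, so that pairwise distances between the rows of the resulting sketch approximate those of the spectral embedding. The polynomial coefficients standing in for the spectral weighting $f(\sigma)$ are arbitrary here.

```python
import numpy as np
import scipy.sparse as sp

def compressive_embedding(A, d=32, coeffs=(0.0, 1.0, 0.5)):
    """Toy compressive spectral embedding of a symmetric matrix A.

    Computes E = p(A) @ R for a random Gaussian R with d columns,
    where p is a fixed low-order polynomial standing in for the
    spectral weighting f(sigma). Rows of E preserve pairwise
    distances of the exact embedding up to Johnson-Lindenstrauss
    distortion, without any partial SVD.
    """
    n = A.shape[0]
    rng = np.random.default_rng(0)
    R = rng.standard_normal((n, d)) / np.sqrt(d)
    E = np.zeros((n, d))
    P = R.copy()                 # holds A^k @ R as k increases
    for c in coeffs:
        E += c * P               # accumulate c_k * A^k @ R
        P = A @ P                # next power applied to the sketch only
    return E

# toy example: embed the nodes of a random sparse symmetric matrix
A = sp.random(500, 500, density=0.02, random_state=1)
A = 0.5 * (A + A.T)
E = compressive_embedding(A.tocsr())
print(E.shape)  # (500, 32): embedding dimension independent of rank
```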


### True BLAS-3 Performance QRCP using Random Sampling

Randomized sampling used directly within BLAS-3 operations, wow!

True BLAS-3 Performance QRCP using Random Sampling by Jed A. Duersch, Ming Gu

The dominant contribution to communication complexity in factorizing a matrix using QR with column pivoting is due to column-norm updates that are required to process pivot decisions. We use randomized sampling to approximate this process which dramatically reduces communication in column selection. We also introduce a sample update formula to reduce the cost of sampling trailing matrices. Using our column selection mechanism we observe results that are comparable to those obtained from the QRCP algorithm, but with performance near unpivoted QR. We also demonstrate strong parallel scalability on shared memory multiple core systems using an implementation in Fortran with OpenMP.
This work immediately extends to produce low-rank truncated approximations of large matrices. We propose a truncated QR factorization with column pivoting that avoids trailing matrix updates which are used in current implementations of BLAS-3 QR and QRCP. Provided the truncation rank is small, avoiding trailing matrix updates reduces approximation time by nearly half. By using these techniques and employing a variation on Stewart's QLP algorithm, we develop an approximate truncated SVD that runs nearly as fast as truncated QR.
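
Here is a minimal Python sketch of the column-selection idea (our illustration, not the authors' Fortran/OpenMP implementation): pivots are chosen by running classical pivoted QR on a small Gaussian sketch of the matrix, and a truncated factorization is then formed from the selected columns, skipping trailing-matrix updates. The oversampling parameter `p` is our own choice.

```python
import numpy as np
from scipy.linalg import qr

def randomized_qrcp_select(A, k, p=8, seed=None):
    """Select k pivot columns of A via QRCP on a (k+p) x n sketch."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((k + p, A.shape[0]))
    B = Omega @ A                              # small sketch, one pass over A
    _, _, piv = qr(B, mode='economic', pivoting=True)
    return piv[:k]

def truncated_qr(A, k):
    """Rank-k approximation A ~= Q (Q^T A) from the sampled pivots."""
    cols = randomized_qrcp_select(A, k)
    Q, _ = np.linalg.qr(A[:, cols])            # basis for the chosen columns
    return Q, Q.T @ A

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 300))  # rank 50
Q, C = truncated_qr(A, 50)
print(np.linalg.norm(A - Q @ C) / np.linalg.norm(A))  # near machine precision
```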

## Thursday, October 01, 2015

### Nuit Blanche in Review ( September 2015 )

What happened since the last Nuit Blanche in Review (August 2015)? Well, besides learning that liquid water has been found on Mars, we had a few implementations made available by their authors (we are getting close to 400 such implementations since we started mentioning them here). We had three theses (congratulations to the new Ph.D.s) and a few in-depth posts. We also started the Anomaly Detection tag, saw some discussions on Group Testing, and found a new highly technical reference page, a new book, an interesting dataset, some jobs, a few videos, and the presentations from the first Paris Machine Learning Meetup of Season 3. Enjoy!

Implementations

ML

Anomaly Detection

Group Testing
Jobs
Other
Paris Machine Learning Meetup

Image credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA
This image, taken by NASA's Dawn spacecraft, shows the surface of dwarf planet Ceres from an altitude of 915 miles (1,470 kilometers). The image was taken on August 24, 2015, and has a resolution of 450 feet (140 meters) per pixel.


### Thesis: Unsupervised Feature Learning in Computer Vision by Rostislav Goroshin

Much of computer vision has been devoted to the question of representation through feature extraction. Ideal features transform raw pixel intensity values to a representation in which common problems such as object identification, tracking, and segmentation are easier to solve. Recently, deep feature hierarchies have proven to be immensely successful at solving many problems in computer vision. In the supervised setting, these hierarchies are trained to solve specific problems by minimizing an objective function of the data and problem specific label information. Recent findings suggest that despite being trained on a specific task, the learned features can be transferred across multiple visual tasks. These findings suggest that there exists a generically useful feature representation for natural visual data. This work aims to uncover the principles that lead to these generic feature representations in the unsupervised setting, which does not require problem specific label information. We begin by reviewing relevant prior work, particularly the literature on auto-encoder networks and energy based learning. We introduce a new regularizer for auto-encoders that plays an analogous role to the partition function in probabilistic graphical models. Next we explore the role of specialized encoder architectures for sparse inference. The remainder of the thesis explores visual feature learning from video. We establish a connection between slow-feature learning and metric learning, and experimentally demonstrate that semantically coherent metrics can be learned from natural videos. Finally, we posit that useful features linearize natural image transformations in video. To this end, we introduce a new architecture and loss for training deep feature hierarchies that linearize the transformations observed in unlabeled natural video sequences by learning to predict future frames in the presence of uncertainty.
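
As a toy illustration of the slow-feature / metric-learning connection mentioned in the abstract (our sketch, not the thesis code), a contrastive temporal-coherence loss pulls features of consecutive frames together while pushing features of randomly drawn frames at least a margin apart. The stand-in encoder and all names are hypothetical.

```python
import numpy as np

def slowness_loss(z_t, z_t1, z_rand, margin=1.0):
    """Contrastive temporal-coherence loss on batches of feature vectors.

    Pulls features of consecutive frames (z_t, z_t1) together and pushes
    features of a randomly drawn frame z_rand at least `margin` away,
    using the l1 metric favored in this line of work.
    """
    pull = np.abs(z_t - z_t1).sum(axis=1)                  # slowness term
    push = np.maximum(0.0, margin - np.abs(z_t - z_rand).sum(axis=1))
    return (pull + push).mean()

rng = np.random.default_rng(0)
f = lambda x: np.tanh(x @ rng.standard_normal((64, 16)))  # stand-in encoder
x_t = rng.standard_normal((8, 64))
x_t1 = x_t + 0.05 * rng.standard_normal((8, 64))          # adjacent frame
x_rand = rng.standard_normal((8, 64))                     # unrelated frame
print(slowness_loss(f(x_t), f(x_t1), f(x_rand)))
```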

From Ross' recent work:

[1] *New* Learning to Linearize Under Uncertainty
Ross Goroshin, Michael Mathieu, Yann LeCun
[2] Unsupervised Learning of Spatiotemporally Coherent Metrics
Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, Yann LeCun
[3] Unsupervised Feature Learning from Temporal Data
Ross Goroshin, Joan Bruna, Arthur Szlam, Jonathan Tompson, David Eigen, Yann LeCun, NIPS 2014 Deep Learning Workshop, Montreal, QC and ICLR 2015 Workshop, San Diego, CA
[4] Efficient Object Localization Using Convolutional Networks
Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, Chris Bregler
[5] Saturating Auto-Encoders
Rostislav Goroshin and Yann LeCun, International Conference on Learning Representations (ICLR 2013), Scottsdale, AZ


## Wednesday, September 30, 2015

### Training Deep Networks with Structured Layers by Matrix Backpropagation

Extending backpropagation to structured matrix computations in deep learning; interesting, the great convergence continues:

Training Deep Networks with Structured Layers by Matrix Backpropagation by Catalin Ionescu, Orestis Vantzos, Cristian Sminchisescu

Deep neural network architectures have recently produced excellent results in a variety of areas in artificial intelligence and visual recognition, well surpassing traditional shallow architectures trained using hand-designed features. The power of deep networks stems both from their ability to perform local computations followed by pointwise non-linearities over increasingly larger receptive fields, and from the simplicity and scalability of the gradient-descent training procedure based on backpropagation. An open problem is the inclusion of layers that perform global, structured matrix computations like segmentation (e.g. normalized cuts) or higher-order pooling (e.g. log-tangent space metrics defined over the manifold of symmetric positive definite matrices) while preserving the validity and efficiency of an end-to-end deep training framework. In this paper we propose a sound mathematical apparatus to formally integrate global structured computation into deep computation architectures. At the heart of our methodology is the development of the theory and practice of backpropagation that generalizes to the calculus of adjoint matrix variations. We perform segmentation experiments using the BSDS and MSCOCO benchmarks and demonstrate that deep networks relying on second-order pooling and normalized cuts layers, trained end-to-end using matrix backpropagation, outperform counterparts that do not take advantage of such global layers.
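
To see what a "matrix layer" gradient looks like, here is a small self-contained check (our example, not the paper's normalized-cuts or second-order pooling layers): for a layer $Y = X^{-1}$, the adjoint matrix-variation rule gives $\partial L/\partial X = -X^{-T} G X^{-T}$ with $G = \partial L/\partial Y$, which we verify against finite differences.

```python
import numpy as np

def inverse_layer_backward(X, G):
    """Gradient of a scalar loss through Y = inv(X), given G = dL/dY.

    From the matrix variation dY = -X^{-1} dX X^{-1}, the adjoint is
    dL/dX = -X^{-T} G X^{-T}.
    """
    Xinv = np.linalg.inv(X)
    return -Xinv.T @ G @ Xinv.T

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # well-conditioned input
W = rng.standard_normal((4, 4))                   # loss L = sum(W * inv(X))
analytic = inverse_layer_backward(X, W)           # here dL/dY = W

# finite-difference check of one entry of dL/dX
eps = 1e-6
E = np.zeros_like(X); E[1, 2] = eps
fd = (np.sum(W * np.linalg.inv(X + E))
      - np.sum(W * np.linalg.inv(X - E))) / (2 * eps)
print(analytic[1, 2], fd)   # should agree to ~1e-6
```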
