
Friday, January 31, 2014

From Direct Imaging to Machine Learning … and more, in 15 minutes

I have been to these Non Conventional Optical Imaging meetings at least three times and enjoyed them very much every time. This time, I decided to send in a proposal for a presentation at the next meeting. We'll see how that goes. If you are interested in that talk, let me know (and yes, I think I lied about the 15 minutes, but that's just between the two of us).
From Direct Imaging to Machine Learning … and more, in 15 minutes

Igor Carron 

We will present a panorama of sensing techniques, from direct imaging all the way to Machine Learning. In particular, we will show that the traditional barriers between these fields are becoming porous and that, in order to conceive new sensors, using the structure of the observed objects as an a priori is becoming central. Traditionally, this task was the domain of signal processing experts at the end of the data acquisition chain. For the past ten years, the use of a priori knowledge, such as sparsity, low-rankness or, more generally, the manifold on which the signal "lives", has enabled the development of new mathematical approaches and attendant numerical solution techniques [3,4]. In fact, the appearance of these tools has given applied (and sometimes not-so-applied) mathematicians and signal processing experts a direct say in the design of new detectors. We will see in particular how the appearance of sharp phase transitions, linked to how much information can be transmitted by the sensor, now provides new rules for the design and calibration of conventional and non-conventional imaging sensors. This expository talk will use, in part, material from [1,2] and examples featured in [5].


References:

[1] Sunday Morning Insight: A Quick Panorama of Sensing from Direct Imaging to Machine Learning, http://nuit-blanche.blogspot.com/2013/06/sunday-morning-sunday-quick-panorama-of.html
[3] Advanced Matrix Factorization Jungle Page, https://sites.google.com/site/igorcarron2/matrixfactorizations
[4] The Big Picture in Compressive Sensing, https://sites.google.com/site/igorcarron2/cs


Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

Nuit Blanche in Review (January 2014)

Nuit Blanche has already published its 2,900th entry... it's a marathon. This past month we've had quite a few implementations since last December:
We also added a few references in the following pages while looking back at the Popular Posts of 2013:
We had the first public data release of a nanopore-like process (Quantum Biosystems Provides Raw Data Access to New Sequencing Technology) as well as other very interesting focused posts:
Paris Machine Learning Meetup:
Meetings:
Jobs:

Credit: ESA/NASA, SOHO Data

Thursday, January 30, 2014

Sparse Spikes Deconvolution with Continuous Basis-Pursuit - implementation -



Gabriel Peyre sent the following:

I have set up a numerical tour to detail a simple implementation of the Continuous Basis-Pursuit method for sparse spikes deconvolution:
I hope you will find it useful. As usual, comments are welcome!

Thanks Gabriel!
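For readers who want to see the idea in code before diving into the tour, here is a minimal sketch of the Taylor-expansion variant of Continuous Basis-Pursuit (my own illustration using cvxpy, with made-up sizes and a Gaussian point-spread function; it is not Gabriel's implementation). Each spike gets an amplitude c_i on a coarse grid plus a derivative coefficient d_i that lets it slide continuously inside its grid cell:

```python
import numpy as np
import cvxpy as cp  # any convex solver would do; cvxpy keeps the sketch short

# Measurements: y(t) = sum_i a_i * phi(t - tau_i) + noise, spikes off-grid.
n, p, sigma = 64, 128, 0.03
t = np.linspace(0, 1, n)[:, None]
grid = np.linspace(0, 1, p)[None, :]
delta = grid[0, 1] - grid[0, 0]                     # grid spacing

Phi = np.exp(-(t - grid) ** 2 / (2 * sigma ** 2))   # PSF centered at each node
dPhi = ((t - grid) / sigma ** 2) * Phi              # its derivative w.r.t. the node

rng = np.random.default_rng(0)
y = Phi[:, 30] + 0.6 * Phi[:, 90] + 0.01 * rng.standard_normal(n)

# Taylor model: phi(t, tau + s) ~ phi(t, tau) + s * dphi(t, tau); with the
# substitution d_i = c_i * s_i and |s_i| <= delta/2 the problem is convex.
c, d, lam = cp.Variable(p), cp.Variable(p), 0.02
objective = cp.Minimize(0.5 * cp.sum_squares(y - Phi @ c - dPhi @ d)
                        + lam * cp.norm1(c))
constraints = [c >= 0, cp.abs(d) <= (delta / 2) * c]
cp.Problem(objective, constraints).solve()

support = np.where(c.value > 1e-3)[0]
tau_hat = grid[0, support] + d.value[support] / c.value[support]
print("estimated spike locations:", tau_hat)
```

The payoff over plain basis-pursuit is that the recovered locations tau_hat are not snapped to the grid.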


Wednesday, January 29, 2014

Additions to the Advanced Matrix Factorization Jungle Page



I am slow, but I eventually took the hint from Francis Bach that many of the advanced methods used nowadays essentially boil down to some matrix factorization or decomposition. Everybody who has done some linear algebra has heard about the QR factorization and the famous LAPACK subroutines that power most science and engineering workflows. In recent years, however, we have seen a slew of factorizations that add constraints on the factors of these known factorizations.

I was pleasantly surprised to see recently that the Advanced Matrix Factorization Jungle had about 50 G+ "likes", which puts it between the Big Picture in Compressive Sensing (8 G+) and Nuit Blanche (118 G+). This got me thinking about how to extend the listing a little bit.

First, I have added compressive sensing as a subset of MMV (see the sketch just after this list):
  • Multiple Measurement Vector (MMV): Y = A X with X unknown and the rows of X sparse.
  • Compressive Sensing: Y = A X with X unknown and sparse; the special case of MMV where X has a single column.
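To make the nesting concrete, here are a few lines of numpy generating a row-sparse MMV instance (the sizes are arbitrary, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, l, k = 32, 128, 5, 6   # measurements, ambient dimension, vectors, row sparsity

A = rng.standard_normal((n, N)) / np.sqrt(n)
X = np.zeros((N, l))
support = rng.choice(N, size=k, replace=False)
X[support] = rng.standard_normal((k, l))   # all l columns share one support
Y = A @ X                                  # the MMV measurements

y = Y[:, :1]   # keeping a single column recovers plain compressive sensing
```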
Also, after watching the presentation of Rene Vidal [1] at the last First French-German Mathematical Image Analysis Conference, it struck me that I had not added his and others' subspace clustering work. I also had not included the recent Random Kitchen Sinks approach, so I added these two sections:
  • Kernel Factorizations 
  • Subspace Clustering
Indeed, for the Kernel Factorization section, we are talking about no less than the Random Kitchen Sinks, i.e., an implementation of the separation of variables in kernel learning, which also includes all the Fast Multipole Methods (FMM) used in physics for the past twenty years. Let us note that the FMM changed our world substantially; let's bet that the same concept of variable separation does the same in the Machine Learning area in the next few years. In particular, if you step back a little, you'll notice that there is currently no FMM or kernel factorization that puts an additional constraint on the factors of the decomposition. Think about it: does the FMM or the Random Kitchen Sinks change as a result of knowing that the distributions of unknowns are sparse, group-sparse and so on? And then there is the recent case that matrix factorization and its bounds might provide us with a clear insight on how to build nonlinear representations of the identity (neural networks) [2] :-)
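As a reminder of what that separation of variables looks like in practice, here is a sketch of the Rahimi-Recht Random Kitchen Sinks construction for the Gaussian kernel (a toy illustration, not a library API):

```python
import numpy as np

def random_fourier_features(X, D=2000, sigma=1.0, rng=None):
    """Map X (m, d) to Z (m, D) so that Z @ Z.T approximates the Gaussian
    kernel matrix exp(-||x - y||^2 / (2 sigma^2)): the kernel 'factorizes'
    into features that depend on x and y separately."""
    rng = rng or np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], D)) / sigma  # frequencies drawn from the kernel's spectrum
    b = rng.uniform(0, 2 * np.pi, D)                  # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
Z = random_fourier_features(X)
K_approx = Z @ Z.T
K_exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)
print(np.abs(K_approx - K_exact).max())   # shrinks as D grows
```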

Here is what I added (subject to heavy modification):

Kernel Factorizations, Positive Kernel K(x,x') = G(f(x)*f'(x'))
Phase Transitions:

  • None so far.
Implementations:


Subspace Clustering
Phase Transitions:

  • None so far.

Implementations:
  • min ||Q_i c_i||_1 subject to X_i c_i = 0, 1^T c_i = 1, with C unknown and Y known; Sparse Manifold Clustering and Embedding by Ehsan Elhamifar, Rene Vidal. Sparse Manifold Clustering and Embedding (SMCE) is an algorithm based on sparse representation theory for clustering and dimensionality reduction of data lying in a union of nonlinear manifolds.
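For the curious, the per-point SMCE program fits in a few lines of cvxpy. This is a simplified sketch in the Lagrangian form (the weighting Q_i is reduced to a plain proximity weight; see the paper for the real thing):

```python
import numpy as np
import cvxpy as cp

def smce_weights(X, i, lam=10.0):
    """One SMCE subproblem (sketch): find a sparse affine combination of the
    normalized directions from x_i to the other points, with an l1 penalty
    that favors nearby neighbors. X is (d, N)."""
    N = X.shape[1]
    idx = [j for j in range(N) if j != i]
    V = X[:, idx] - X[:, [i]]            # directions x_j - x_i
    norms = np.linalg.norm(V, axis=0)
    V = V / norms                        # normalized directions
    q = norms / norms.sum()              # proximity weights (simplified Q_i)
    c = cp.Variable(N - 1)
    obj = cp.Minimize(lam * cp.norm1(cp.multiply(q, c))
                      + 0.5 * cp.sum_squares(V @ c))
    cp.Problem(obj, [cp.sum(c) == 1]).solve()
    w = np.zeros(N)
    w[idx] = c.value
    return w   # nonzeros point to neighbors on the same manifold
```

Stacking the weight vectors into a matrix C and spectral-clustering |C| + |C|^T then yields the manifold labels and, from the neighborhoods, the embedding.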



A Compressed Sensing Framework for Magnetic Resonance Fingerprinting



Mike Davies just sent me the following:

Hi Igor
....
Given your enthusiasm for the recent "magnetic resonance fingerprinting" paper, may I also take this opportunity to let you know of some new work that I have done with Gilles Puy, Yves Wiaux and Pierre Vandergheynst. Specifically, we have considered the problem from a rigorous CS perspective. This gave us insight on the requirements of the excitation sequence, how to sub-sample k-space and a provably good reconstruction algorithm - Bloch response recovery via Iterated Projection (BLIP), a variant on IHT based on work of Thomas Blumensath. Simulations show significant performance gains over the already good MRF scheme. The relevant articles can be found at http://arxiv.org/abs/1312.2465 and http://arxiv.org/abs/1312.2457 . Enjoy!
All the best
Mike
Thanks Mike! I love this statement:

The procedure works through a form of noise averaging. Although each individual image is very noisy, the noise is greatly reduced when the voxel sequences are projected onto the Bloch response manifold. However, this ignores the main tenet of compressed sensing - aliasing is not noise but interference and under the right circumstances it can be completely removed. We explore this idea next.
I also love the open questions at the very end. Here are the preprints: A Compressed Sensing Framework for Magnetic Resonance Fingerprinting by Mike Davies, Gilles Puy, Pierre Vandergheynst, Yves Wiaux

Inspired by the recently proposed Magnetic Resonance Fingerprinting (MRF) technique, we develop a principled compressed sensing framework for quantitative MRI. The three key components are: a random pulse excitation sequence following the MRF technique; a random EPI subsampling strategy; and an iterative projection algorithm that imposes consistency with the Bloch equations. We show that, as long as the excitation sequence possesses an appropriate form of persistent excitation, we are able to achieve accurate recovery of the proton density, T1, T2 and off-resonance maps simultaneously from a limited number of samples.
and Compressed Quantitative MRI: Bloch Response Recovery through Iterated Projection by Mike Davies, Gilles Puy, Pierre Vandergheynst, Yves Wiaux
Inspired by the recently proposed Magnetic Resonance Fingerprinting technique, we develop a principled compressed sensing framework for quantitative MRI. The three key components are: a random pulse excitation sequence following the MRF technique; a random EPI subsampling strategy and an iterative projection algorithm that imposes consistency with the Bloch equations. We show that, as long as the excitation sequence possesses an appropriate form of persistent excitation, we are able to achieve accurate recovery of the proton density, T1, T2 and off-resonance maps simultaneously from a limited number of samples.
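To see why BLIP is described as a variant of IHT, here is a minimal sketch of the iteration under simplifying assumptions (real-valued data and a single fixed sampling operator, whereas the actual scheme subsamples k-space differently at each excitation step; the names and shapes are mine, not the authors'):

```python
import numpy as np

def blip_sketch(Y, A, D, n_iter=50, mu=None):
    """Gradient step on the data fit, then snap every voxel's time course
    onto the Bloch response manifold, represented by a dictionary D of
    simulated fingerprints.

    Y : (m, T) undersampled measurements over T excitation steps
    A : (m, v) measurement operator over v voxels
    D : (T, q) unit-norm fingerprints, one per candidate (T1, T2, ...) atom
    """
    v = A.shape[1]
    X = np.zeros((v, Y.shape[1]))
    mu = mu or 1.0 / np.linalg.norm(A, 2) ** 2     # conservative step size
    for _ in range(n_iter):
        X = X + mu * A.T @ (Y - A @ X)             # IHT-style gradient step
        corr = X @ D                               # (v, q) match scores
        best = np.argmax(corr, axis=1)             # best fingerprint per voxel
        amp = np.maximum(corr[np.arange(v), best], 0.0)  # proton density >= 0
        X = amp[:, None] * D[:, best].T            # projection onto the model
    return X, best, amp   # 'best' indexes the recovered (T1, T2, ...) maps
```

The projection is exactly the dictionary-matching step of MRF, applied inside the iteration rather than once at the end.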



Tuesday, January 28, 2014

Quantum Biosystems Provides Raw Data Access to New Sequencing Technology



Ever since writing this entry on Predicting the Future: The Steamrollers, I have been on the lookout for any kind of improvement in next generation sequencing techniques, such as those using nanopores. If you look at the nanopore tag, you'll notice a dearth of data coming from actual hardware sensing. Similarly, a recent question on one of the LinkedIn groups on NGS yielded very little in terms of actual data that people could try their machine learning algorithms on. In fact, it looked pretty hopeless. Things changed yesterday: at PMWC, Quantum Biosystems decided to start a process of openly sharing data with the rest of the scientific community. Here is the press release: Quantum Biosystems demonstrates First Reads using Quantum Single Molecule Sequencing. From the press release (and its connection to the steamrollers):

....The platform allows the direct sequencing of single stranded DNA and RNA without labelling or modification, on silicon devices which can be produced on the same production lines as consumer grade integrated circuits. As the system uses no proteins or other reagents it is potentially ultra-low cost, enabling consumer level genome sequencing.


But more importantly, the data is at: http://www.quantumbiosystems.com/data/, with the note:
Raw quantum sequencing data are freely available for scientific and research use, allowing you, for example, to develop your own algorithms and software for quantum sequencing. We will add new data sets from time to time, updated as needed.
Hit the Data Download button, put in your name/company/university/blog and you are good to go.

Here is the Data Usage Policy that you need to be OK with; as soon as you fill this in, you can get the data:

Data Usage Policy (January 26, 2014)
All data in this site belong to Quantum Biosystems. These pre-publication data are preliminary and may contain errors. The goal of our policy is that early release should enable the progress of science.
The data is published under a Creative Commons licence 
Attribution-NonCommercial 3.0 (CC-BY-NC 3.0)
http://creativecommons.org/licenses/by-nc/3.0/us/legalcode
Creative Commons license CC-BY-NC 3.0:
"A license whereby licensees may copy, distribute, display and perform the work and make derivative works based on it only for non-commercial purposes, only if they give the author or licensor the credits in the manner specified."
By accessing these data, you agree not to use them for commercial purposes without the prior written consent of Quantum Biosystems.
2014 Quantum Biosystems

Disclaimer: at the time of this writing, I hold no stake in Quantum Biosystems


Matrix Factorization with Binary Components

So the Arora et al. paper has really provoked a tsunami in the NMF world, probably because, as Suresh says, "One can think of the NMF as an l_1 variant of the SVD". Another attraction of NMF was its easy, albeit very slow, implementation. Let us say that currently we don't have many implementations available for the new crop of solvers. It's just a question of time, I am sure ...


Matrix factorization with Binary Components by Martin Slawski, Matthias Hein, Pavlo Lutsik

Motivated by an application in computational biology, we consider low-rank matrix factorization with {0,1}-constraints on one of the factors and optionally convex constraints on the second one. In addition to the non-convexity shared with other matrix factorization schemes, our problem is further complicated by a combinatorial constraint set of size 2^(m·r), where m is the dimension of the data points and r the rank of the factorization. Despite apparent intractability, we provide - in the line of recent work on non-negative matrix factorization by Arora et al. (2012) - an algorithm that provably recovers the underlying factorization in the exact case with O(mr2^r + mnr + r^2n) operations for n data points. To obtain this result, we use theory around the Littlewood-Offord lemma from combinatorics.


Related:



Computing a Nonnegative Matrix Factorization -- Provably by Sanjeev Arora, Rong Ge, Ravi Kannan, Ankur Moitra

In the Nonnegative Matrix Factorization (NMF) problem we are given an n×m nonnegative matrix M and an integer r > 0. Our goal is to express M as AW where A and W are nonnegative matrices of size n×r and r×m respectively. In some applications, it makes sense to ask instead for the product AW to approximate M -- i.e. (approximately) minimize ||M - AW||_F where ||.||_F denotes the Frobenius norm; we refer to this as Approximate NMF. This problem has a rich history spanning quantum mechanics, probability theory, data analysis, polyhedral combinatorics, communication complexity, demography, chemometrics, etc. In the past decade NMF has become enormously popular in machine learning, where A and W are computed using a variety of local search heuristics. Vavasis proved that this problem is NP-complete. We initiate a study of when this problem is solvable in polynomial time:
1. We give a polynomial-time algorithm for exact and approximate NMF for every constant r. Indeed NMF is most interesting in applications precisely when r is small.
2. We complement this with a hardness result: if exact NMF can be solved in time (nm)^o(r), 3-SAT has a sub-exponential time algorithm. This rules out substantial improvements to the above algorithm.
3. We give an algorithm that runs in time polynomial in n, m and r under the separability condition identified by Donoho and Stodden in 2003. The algorithm may be practical since it is simple and noise tolerant (under benign assumptions). Separability is believed to hold in many practical settings.
To the best of our knowledge, this last result is the first example of a polynomial-time algorithm that provably works under a non-trivial condition on the input and we believe that this will be an interesting and important direction for future work.
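Arora et al.'s separability-based algorithm is more involved, but the flavor of exploiting separability is easy to convey: under the Donoho-Stodden condition, every column of M is a nonnegative combination of a few "anchor" columns of M itself, and a greedy successive-projection pass can find them. Here is a sketch of that classic heuristic (SPA, not the algorithm of the paper):

```python
import numpy as np

def spa(M, r):
    """Successive Projection Algorithm (sketch): pick the column with the
    largest norm, project all columns onto its orthogonal complement,
    repeat r times. Under separability (and mild conditioning), the
    selected indices are the anchor columns."""
    R = M.astype(float).copy()
    anchors = []
    for _ in range(r):
        j = int(np.argmax((R ** 2).sum(axis=0)))
        anchors.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)        # remove the span of the chosen column
    return anchors
```

Once the anchors are known, the factor W follows from a nonnegative least-squares fit of M against the anchor columns.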




Monday, January 27, 2014

Sparse Packetized Predictive Control for Networked Control over Erasure Channels




Sparse Packetized Predictive Control for Networked Control over Erasure Channels by Masaaki Nagahara, Daniel E. Quevedo, Jan Ostergaard

We study feedback control over erasure channels with packet-dropouts. To achieve robustness with respect to packet-dropouts, the controller transmits data packets containing plant input predictions, which minimize a finite horizon cost function. To reduce the data size of packets, we propose to adopt sparsity-promoting optimizations, namely l1-l2 and l2-constrained l0 optimizations, for which efficient algorithms exist. We derive sufficient conditions on design parameters, which guarantee (practical) stability of the resulting feedback control systems when the number of consecutive packet-dropouts is bounded.
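For intuition on what a "sparse packet" buys, here is a toy cvxpy sketch of an l1-regularized finite-horizon input design in the spirit of the paper's l1-l2 formulation (the plant, horizon and weights are invented for illustration; this is not the authors' code):

```python
import numpy as np
import cvxpy as cp

# Toy double-integrator plant and horizon.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
x0 = np.array([1.0, 0.0])
N, lam = 10, 0.1

U = cp.Variable((1, N))          # the packet: N predicted control inputs
X = cp.Variable((2, N + 1))
cost = lam * cp.sum(cp.abs(U))   # l1 term: many zero inputs -> smaller packet
cons = [X[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(X[:, k + 1])                  # l2 state penalty
    cons += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k]]   # plant dynamics
cp.Problem(cp.Minimize(cost), cons).solve()
print("nonzero inputs in the packet:", int((np.abs(U.value) > 1e-6).sum()))
```

Raising lam trades regulation performance for sparser (hence cheaper to transmit) packets, which is the knob the paper analyzes for stability.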


Saturday, January 25, 2014

Saturday Morning Video: CMOS Image Sensors: Tech Transfer from Saturn to your Cell Phone

If you are a reader of this blog, you are accustomed to seeing images from the Cassini probe around Saturn every so often. 

In this video, Eric Fossum recounts the story behind CMOS image sensors and the technology transfer that took place, from the imagers used to view Saturn from Cassini all the way to your cell phone. Think about it: no CMOS imaging in 1995, billions of chips 19 years later. The attendant presentation is here.





Friday, January 24, 2014

Sharp Phase Transition for Joint Sparse Recovery (MMV)

Jeff Blanchard just sent me the following:

Dear Igor, 
I saw in your January 22 post that you posed the question, "Isn't it time we devise a universal phase transition diagram for MMV computations?" If you haven't seen it, I think you might be interested in a paper I wrote with three of my students,

Jeffrey D. Blanchard, Michael Cermak, David Hanle, Yirong Jing. "Greedy algorithms for Joint Sparse Recovery", IEEE Trans. Sig. Proc., in press, 2014.

While it does not resolve your question, we present weak, empirical recovery phase transitions for the MMV setting for simultaneous recovery variants of NIHT, HTP, and CoSaMP in addition to RIP-based, uniform recovery guarantees.

It should be online at TSP in the next week or so as the preprint version. In the meantime, you can get the preprint from my webpage.
All the best,
Jeff

Thanks Jeff! Here is the paper: Greedy Algorithms for Joint Sparse Recovery by Jeff Blanchard, Michael Cermak, David Hanle, and Yirong Jing
Abstract—Five known greedy algorithms designed for the single measurement vector setting in compressed sensing and sparse approximation are extended to the multiple measurement vector scenario: Iterative Hard Thresholding (IHT), Normalized IHT (NIHT), Hard Thresholding Pursuit (HTP), Normalized HTP (NHTP), and Compressive Sampling Matching Pursuit (CoSaMP). Using the asymmetric restricted isometry property (ARIP), sufficient conditions for all five algorithms establish bounds on the discrepancy between the algorithms’ output and the optimal row-sparse representation. When the initial multiple measurement vectors are jointly sparse, ARIP-based guarantees for exact recovery are also established. The algorithms are then compared via the recovery phase transition framework. The strong phase transitions describing the family of Gaussian matrices which satisfy the sufficient conditions are obtained via known bounds on the ARIP constants. The algorithms’ empirical weak phase transitions are compared for various numbers of multiple measurement vectors. Finally, the performance of the algorithms is compared against a known rank aware greedy algorithm, Rank Aware Simultaneous Orthogonal Matching Pursuit + MUSIC. Simultaneous recovery variants of NIHT, NHTP, and CoSaMP all outperform the rank-aware algorithm.
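For readers who want to reproduce this kind of empirical phase transition, a bare-bones simultaneous IHT fits in a few lines of numpy (a simplified, unnormalized cousin of the SIHT/SNIHT variants studied in the paper, for illustration only):

```python
import numpy as np

def siht(Y, A, k, n_iter=300, mu=None):
    """Simultaneous IHT for Y = A X with row-sparse X: gradient step on the
    residual, then keep only the k rows of X with the largest l2 norm."""
    N = A.shape[1]
    X = np.zeros((N, Y.shape[1]))
    mu = mu or 1.0 / np.linalg.norm(A, 2) ** 2    # conservative step size
    for _ in range(n_iter):
        X = X + mu * A.T @ (Y - A @ X)
        keep = np.argsort(np.linalg.norm(X, axis=1))[-k:]
        mask = np.zeros(N, dtype=bool)
        mask[keep] = True
        X[~mask] = 0.0                            # row-wise hard thresholding
    return X
```

Sweeping over the (m/N, k/m) plane, running many random trials at each point and recording the fraction of exact recoveries traces out the empirical weak phase transition for a given number l of measurement vectors.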

One figure, in particular, triggered my interest: figure 3c


because it has the same shape as the one we found experimentally in the lab with a random medium used as a measurement matrix.


Figure 4 from this preprint

Those are not an exact match because Jeff et al.'s paper:
  • produces phase transitions for l = 2, 5 and 10, while ours is for l = 3 (between their l = 2 and l = 5);
  • focuses on noiseless phase transitions (lab experiments are not noiseless).


It is noteworthy that if the phase transition were plotted with l·m/n on the x-axis, it would show exact recovery for l·m/n less than 1, and this for l = 2 or 3. We noted before that there is something interesting at l = 2 and 3 in a computational-only study featuring Imaging strong localized scatterers with sparsity promoting optimization.

As an aside, maybe the Advanced Matrix Factorization Jungle page ought to have a place, for each factorization, for the papers covering its phase transition. In fact, I just added these templates for each factorization:

Phase Transitions:
  • None so far.
Implementations:
Please kindly let me know which papers should replace the current "None so far" items.


Thursday, January 23, 2014

Sparse Matrix Factorization: Simple rules for growing neural nets and Provable Bounds for Learning Some Deep Representations

I cannot tell you how much I **love** the following two papers. Why? Because using Advanced Matrix Factorization, one can foresee a much more direct understanding of the whole Panorama of Sensing while allowing these models to provide information Faster Than a Blink of an Eye. Furthermore, it paves the way to naturally seeing how phase transitions put a limit on these constructions in a rational way (Sunday Morning Insight: The Map Makers). Think of these papers as gateway papers. Wow.



Sparse Matrix Factorization by Behnam Neyshabur, Rina Panigrahy

We investigate the problem of factorizing a matrix into several sparse matrices and propose an algorithm for this under randomness and sparsity assumptions. This problem can be viewed as a simplification of the deep learning problem where finding a factorization corresponds to finding edges in different layers and values of hidden units. We prove that under certain assumptions for a sparse linear deep network with n nodes in each layer, our algorithm is able to recover the structure of the network and values of top layer hidden units for depths up to O~(n^(1/6)). We further discuss the relation among sparse matrix factorization, deep learning, sparse recovery and dictionary learning.
The attendant slides are here. Of interest is the following slide for the hardware people:


Highly relevant:

Provable Bounds for Learning Some Deep Representations by Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma
We give algorithms with provable guarantees that learn a class of deep nets in the generative model view popularized by Hinton and others. Our generative model is an n node multilayer neural net that has degree at most n^γ for some γ < 1 and each edge has a random edge weight in [-1, 1]. Our algorithm learns almost all networks in this class with polynomial running time. The sample complexity is quadratic or cubic depending upon the details of the model. The algorithm uses layerwise learning. It is based upon a novel idea of observing correlations among features and using these to infer the underlying edge structure via a global graph recovery procedure. The analysis of the algorithm reveals interesting structure of neural networks with random edge weights.
see also Rong Ge's PhD thesis: Provable Algorithms for Machine Learning Problems.



Wednesday, January 22, 2014

The Why and How of Nonnegative Matrix Factorization


Nicolas Gillis has been one of the people trying to make mathematical sense of why an empirical matrix factorization has been so successful for the past 14 years. His latest arxiv preprint (see below) provides some perspective on where we are in that respect. It is no secret that every paper using nonnegative matrix factorization has produced its own version of the NMF decomposition in some way or another. To call the ensemble a zoo would be, at the very least, charitable. I personally decided to call it a jungle. This sentiment of being lost in the jungle has had a curious side effect: the cannibalization of the semantics used for matrix factorization. Indeed, in many people's minds, Matrix Factorization and NMF have become one and the same. This is a situation not unlike the one we faced in compressive sensing with the myriad of sparsity seeking solvers. In turn, this issue of being easily lost led me to write this blog entry pointing to a solution: If there is revolution in rank minimization and nobody hears about it, is this a revolution ? or this one. The remedy to this state of affairs in compressive sensing has been to use the "acid test" of the universal phase transitions. One can expect that at some point we'll surely get there with NMF as well. Without further ado, here is the paper: The Why and How of Nonnegative Matrix Factorization by Nicolas Gillis (Update: version 2 is here, also Matlab code and attendant slides)
Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. We first illustrate this property of NMF on three applications, in image processing, text mining and hyperspectral imaging --this is the why. Then we address the problem of solving NMF, which is NP-hard in general. We review some standard NMF algorithms, and also present a recent subclass of NMF problems, referred to as near-separable NMF, that can be solved efficiently (that is, in polynomial time), even in the presence of noise --this is the how. Finally, we briefly describe some problems in mathematics and computer science closely related to NMF via the nonnegative rank.
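For reference, the baseline that much of the reviewed literature improves upon fits in a dozen lines. Here is a sketch of the classic Lee-Seung multiplicative updates, the "easy, albeit very slow" workhorse mentioned earlier this week:

```python
import numpy as np

def nmf_multiplicative(M, r, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||M - W H||_F^2 with
    W, H >= 0. Each update multiplies by a nonnegative ratio, so the
    factors stay nonnegative and the objective never increases."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The near-separable algorithms surveyed in the paper avoid this kind of local search altogether, which is what makes their polynomial-time guarantees possible.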

Also relevant: Algorithms for nonnegative matrix and tensor factorizations: a unified view based on block coordinate descent framework by Jingu Kim, Yunlong He, Haesun Park

Abstract: We review algorithms developed for nonnegative matrix factorization (NMF) and nonnegative tensor factorization (NTF) from a unified view based on the block coordinate descent (BCD) framework. NMF and NTF are low-rank approximation methods for matrices and tensors in which the low-rank factors are constrained to have only nonnegative elements. The nonnegativity constraints have been shown to enable natural interpretations and allow better solutions in numerous applications including text analysis, computer vision, and bioinformatics. However, the computation of NMF and NTF remains challenging and expensive due to the constraints. Numerous algorithmic approaches have been proposed to efficiently compute NMF and NTF. The BCD framework in constrained non-linear optimization readily explains the theoretical convergence properties of several efficient NMF and NTF algorithms, which are consistent with experimental observations reported in the literature. In addition, we discuss algorithms that do not fit in the BCD framework, contrasting them with those based on the BCD framework. With insights acquired from the unified perspective, we also propose efficient algorithms for updating NMF when there is a small change in the reduced dimension or in the data. The effectiveness of the proposed updating algorithms is validated experimentally with synthetic and real-world data sets.





Tuesday, January 21, 2014

Sparse Randomized Kaczmarz for Multiple Measurement Vectors - implementation -


The Kaczmarz algorithm is popular for iteratively solving an overdetermined system of linear equations. However, the traditional Kaczmarz algorithm has a linear convergence rate; a randomized version of the Kaczmarz algorithm was shown to converge exponentially. Recently, an algorithm for finding a sparse solution to a linear system of equations has been proposed, based on a weighted randomized Kaczmarz algorithm. These algorithms solve the single measurement vector problem; however, there are applications where multiple measurements are available. In this work, the objective is to solve a multiple measurement vector problem with common sparse support by modifying the randomized Kaczmarz algorithm. We have also modeled the problem of face recognition from video as a multiple measurement vector problem and solved it using our proposed technique. We have compared the proposed algorithm with the state-of-the-art spectral projected gradient algorithm for multiple measurement vectors on both real and synthetic datasets. The Monte Carlo simulations confirm that our proposed algorithm has better recovery and convergence rate than the MMV version of the spectral projected gradient algorithm under fairness constraints.
I note from the paper:

The algorithm works for both over-determined and under-determined systems of equations.
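To make the setting concrete, here is a minimal numpy sketch of a randomized Kaczmarz sweep extended to multiple measurement vectors; this illustrates the plain (non-sparse) update only, not the authors' weighted variant:

```python
import numpy as np

def randomized_kaczmarz_mmv(A, Y, n_iter=5000, seed=0):
    """Each step picks a row of A with probability proportional to its
    squared norm and projects every column of X onto that row's hyperplane.
    The sparse variant of the paper additionally reweights entries outside
    an estimated joint support."""
    rng = np.random.default_rng(seed)
    m, N = A.shape
    X = np.zeros((N, Y.shape[1]))
    row_norms = (A ** 2).sum(axis=1)
    probs = row_norms / row_norms.sum()
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        residual = Y[i] - A[i] @ X              # one residual per column
        X += np.outer(A[i], residual) / row_norms[i]
    return X
```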

Isn't it time we devise a universal phase transition diagram for MMV computations? At least some people on the experimental side could then see if their results fit the best that can be achieved numerically.

The implementation is on Matlab Central.


Monday, January 20, 2014

Far-field subwavelength imaging from a single broadband antenna combined with a strongly disordered medium

So, using one antenna and a random medium to locate "things": we've heard that one before :-) Lianlin and colleagues explore this new capability and even provide us with a phase transition map, what more can you ask for?




Far-field subwavelength imaging from a single broadband antenna combined with a strongly disordered medium by Lianlin Li, Fang Li, Tie Jun Cui

Far-field subwavelength imaging is a challenging issue. In this letter, we demonstrate numerically that far-field subwavelength imaging of weakly scattering objects can be obtained by processing the data acquired by a single antenna, which benefits from the use of a strongly disordered medium. A mathematical model has been proposed for solving this problem based on the idea of sparse reconstruction. Moreover, this study leads to an important conclusion: the strongly disordered medium can serve as an efficient apparatus for single-antenna compressive measurement, which shifts the complexity of devising compressive sensing (CS) hardware away from design, fabrication and electronic control. The proposed method and associated results can find applications in several imaging disciplines, such as optics, THz, RF or ultrasound imaging.


In this letter, we study the subwavelength imaging of sparse broadband sources inside a disordered medium by processing the data acquired by a single antenna. A mathematical model has been developed for solving this problem based on the idea of sparse reconstruction. We show that the strongly disordered medium can serve as an efficient apparatus for compressive measurement, which shifts the complexity of devising compressive sensing (CS) hardware away from design, fabrication and electronic control. The proposed method and associated results can find applications in several imaging disciplines, such as optics, THz, RF or ultrasound imaging.
Highly relevant:



We study theoretically light focusing at subwavelength scale inside a disordered strongly scattering open medium. We show that broadband time reversal at a single point antenna, in conjunction with near-field interactions and multiple scattering, produces spatial focusing with a quality comparable to that obtained in an ideal closed cavity. This gives new perspectives for super-resolved optical imaging and coherent control of single nanosources or absorbers in complex media.
h/t Sylvain Gigan.


Saturday, January 18, 2014

"We already have cats under control": A summary of Paris Machine Learning Meetup #7


    "We already have cats under control"

Following up on our first Q&A, Tom from VMX (a Machine Learning #Kickstarter campaign) implemented a cat face detector using VMX. He also checked it against the PASCAL VOC 2007 dataset to find images with cats. From the blog entry:

...Pre-trained VMX Object Detectors
VMX will come with a set of pre-trained models. While the fun part of using VMX is training your own specialized detectors, we understand that there are scenarios where you want access to a library of pre-existing object detectors. You'll be able to modify those detectors and customize them for your own use (such as taking a generic face detector and making it better by giving it examples of your own face). If there is an object you utterly love and really want a pre-trained version of in VMX, just send us a message. We already have cats under control.
Just imagine cat lovers using VMX to detect their cats' intentions during the day, or ... the possibilities look endless.
The 7th Parisian Machine Learning Meetup was initially set up with a Machine Vision theme featuring VMX, Atheer One, VLAD and What does it take to win the Kaggle/Yandex competition?, even though the latter presentation was an impromptu one. It took place on Wednesday, January 15th, 2014 at DojoEvents. Here is the video of the Meetup. The program and attendant presentations:
I noted that the VMX project can handle local computation, with about 20 objects requiring a good computer. For Atheer, I think the battery issue is set aside, as the glasses seem to be aimed at a crowd next to a power plug. Kenji told us that the winning entry was based on LambdaMART, beating out another of their Random Forest implementations. Patrick also mentioned a series of photos reconstructed from descriptors. That series is on Herve Jegou's page. While Patrick pointed out that it was information leakage, most folks in the crowd were simply wowed.


Many more examples on this page.

Needless to say, there is an uncanny visual parallel between these reconstructions and the ones from MRI readings featured at the third meetup, where we learned about mind reading with MRI thanks to Alexandre Gramfort (see also Dreaming reconstructions).

During the off moments of the meetup, we had a discussion on counting things and how it should be part of general ML knowledge. Sam Bessalah mentioned the large repository of information on counting things by Debasish Gosh, and I mentioned the recent entry on the same subject here on Nuit Blanche: Frugal Streaming and Sparse Recovery with Very Sparse Compressed Counting. I think we'll have a meetup on the subject in March. Stay tuned.

Frank and I were so impressed by how smoothly the Skype Q&As went that we decided to expand the reach of these ML Meetups. Again, stay tuned.

Related Blog entries:


The meetup was slated to have 142 people, but I counted about 110 people in all. Not bad. The next meetup is February 12th. Don't forget the Paris Machine Learning LinkedIn group (225 members), where one can post jobs or find job postings. The archive of the previous meetings can be found here.

