
Monday, June 30, 2014

Nuit Blanche in Review (June 2014)


Since the last Nuit Blanche in Review, we had quite a few implementations as well as some very interesting meetups/meetings (check the video section). 

Implementations

Some additional thoughts:


Other papers:
Meetings
Meetups
Videos: 
CfP:

Saturday Morning Videos:

More
Credit: ESA/NASA SOHO EIT171

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

CfP: International Biomedical and Astronomical Signal Processing (BASP) Frontiers workshop 2015

Yves Wiaux just sent me the following:

Hi Igor
Time has come for BASP Frontiers 2015 already. May I ask you to post this announcement for me on nuit blanche? Thanks a lot in advance
Regards
Yves

Sure Yves, here it is:


******

Dear Colleagues

The International Biomedical and Astronomical Signal Processing (BASP) Frontiers workshop 2015 will take place in our traditional Swiss resort close to Lake Geneva on January 25 - 30, 2015.

This year's workshop will focus on imaging science, from theory to astronomy and medicine.

30 poster slots are open for contributions: ~3 in each of the 9 sessions of the workshop. These posters are deluxe versions with high visibility, as only a few posters will be highlighted after each session during the 45-minute morning coffee or evening aperitif. We expect contributions from both senior researchers and students. Talks and some posters are by invitation only.

The abstract submission window will be September 1-26, 2014.

Do not hesitate to disseminate this announcement among your colleagues and students.

All details on science, organisation and logistics on… baspfrontiers.org.

Also check regularly for updates on participant list and other news.

On behalf of the chairs (M Lustig, Berkeley; JD McEwen UCL; Y Wiaux Heriot-Watt / EPFL)

Yves Wiaux

******
___________________________________
Dr Yves Wiaux, Assoc. Prof., BASP Director
Institute of Sensors, Signals & Systems
School of Engineering & Physical Sciences
Heriot-Watt University, Edinburgh


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Saturday, June 28, 2014

Saturday Morning Videos: Data Science and Massive Data Analysis, ESIEE Paris

I attended this meeting two weeks ago and now the videos are out. Thank you Laurent and Guillaume for your work and invitation. All the videos are listed below. 

 
Yann LeCun, Learning hierarchical structures
Nicolas Le Roux, Inside Criteo's Engine: Learning and predicting at scale
Lorenzo Rosasco, Early stopping regularization
Nicolas Usunier, Learning Structure-Preserving Vector Representations of Objects from Open-Domain Relational Data
Emilie Chouzenoux, Proximal methods: tools for solving inverse problems on a large scale
Francis Bach, Beyond stochastic gradient descent for large-scale machine learning
Cédric Archambeau, How Amazon leverages massive data sets in retail, web services and devices
Pascal Bianchi, Distributed optimization algorithms for multi-agent systems


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, June 27, 2014

Random Branches

It's Friday afternoon, it's Hamming's time. Let us push the argument of About t'em Random Projections in Random Forests further.

Leo Breiman, and many others after him, were interested in linking Adaboost with Random Forests (see 7. Conjecture: Adaboost is a Random Forest). Let us look at it another way: Random Forests are a set of randomized trees built node by node through the sequential determination of an optimal split of groups of elements according to a certain metric (entropy, variance reduction, etc.). If you focus on one tree, you'll notice that it is a greedy algorithm which at every step computes the optimal splitting parameter. The algorithm is randomized in the sense that the choice of features used at every iteration step is randomized. In effect, a Random Forest can be seen as a parallel implementation of several greedy randomized iterations. In a sense, a Random Forest is one instance of several randomized Adaboosts.

Why is it interesting to say these things?

Well, ever since Ali Rahimi and Benjamin Recht came out with the idea of Random Kitchen Sinks, we know that we can replace the greedy Adaboost with a more parallel algorithm. Whereas Adaboost greedily searches for a new basis function at every step of its iteration, with the objective of reducing the l2 norm between functions of the training set and their classification results, Random Kitchen Sinks essentially removes the greedy step by choosing a random set of functions, thereby making the problem linear. The solution becomes a simple least squares problem.
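For the sake of illustration, here is a minimal Random Kitchen Sinks sketch in Python (the cosine feature map, the number of features and the ridge term are my own illustrative choices, not taken from Ali and Ben's paper): the random features are drawn once and for all, and fitting reduces to a linear least squares problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression problem
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=500)

# Random Kitchen Sinks: draw random features once, no greedy selection
D = 300                                            # number of random features
W = rng.normal(scale=1.0, size=(X.shape[1], D))    # random frequencies
b = rng.uniform(0, 2 * np.pi, size=D)              # random phases
Phi = np.cos(X @ W + b)                            # random feature map

# fitting is now a (regularized) linear least squares problem
lam = 1e-3
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ y)

X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
y_hat = np.cos(X_test @ W + b) @ coef
print("train MSE:", np.mean((Phi @ coef - y) ** 2))
```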

In my view, the problem with random forests is not that they are parallel (a good thing) but that they are greedy.

If one were interested in making random forests closer to a non-greedy approach, then this image of Random Forests as a collection of randomized Adaboosts might help. How?

Imagine that each traditional tree in the Random Forests algorithm is replaced by a (large) set of different Random Branches. In effect, each branch resembles a Random Forests tree, but not only do we choose the features at each node in a randomized fashion (as in the traditional Random Forests algorithm), we also choose the threshold in a randomized fashion. As in the AdaBoost/Random Kitchen Sinks dual approach, we now require many branches to describe well a single tree of the Random Forests algorithm (i.e., we replace one tree in the Random Forests algorithm by a large set of branches that look exactly like that one tree, except that each branch has a different set of random thresholds at every node). Yes, the problem got much bigger as a result, but there is no free lunch. This is the price we would need to pay to transform a nonlinear greedy solver into a linear problem (as was the case when we went from AdaBoost to the Random Kitchen Sinks approach).
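Here is a minimal sketch of what such a Random Branches scheme could look like in Python (the depth, the number of branches, the one-hot leaf encoding and the least squares combination are all my own illustrative choices, and each "branch" is simplified into an oblivious split sequence): features and thresholds are both random, and the only fitting step left is linear.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_branch(X, depth, rng):
    """A depth-d 'branch': random feature and random threshold at each level.
    For simplicity, every sample is split with the same (feature, threshold)
    at each level -- an 'oblivious' simplification of the branch in the post.
    Returns, for every sample, the index of the leaf it falls into."""
    n, p = X.shape
    leaf = np.zeros(n, dtype=int)
    for _ in range(depth):
        j = rng.integers(p)
        t = rng.uniform(X[:, j].min(), X[:, j].max())  # random threshold
        leaf = 2 * leaf + (X[:, j] > t)
    return leaf

def branch_features(X, n_branches=200, depth=3, rng=rng):
    """Stack one-hot leaf indicators of many random branches."""
    cols = []
    for _ in range(n_branches):
        leaf = random_branch(X, depth, rng)
        cols.append(np.eye(2 ** depth)[leaf])
    return np.hstack(cols)

# toy data
X = rng.normal(size=(400, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(float)

Phi = branch_features(X)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear problem, no greedy step
print("train accuracy:", np.mean((Phi @ coef > 0.5) == (y > 0.5)))
```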

Is there something similar in the compressive sensing literature ?

There is, actually. Recently, we mentioned the work on 1-bit CS with random biases (One-bit compressive sensing with norm estimation by Karin Knudson, Rayan Saab, Rachel Ward) and there is an implementation right here that allows one to reconstruct elements from several such measurements. Each node of the proposed randomized branch would constitute an instance of this measurement model.
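For reference, the measurement model of that paper boils down to a few lines (a Python sketch; the Gaussian scaling of the biases is my assumption for illustration purposes): each measurement is the sign of a random projection plus a random bias, which is exactly what a node with a randomized threshold computes.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 100, 500                    # signal dimension, number of 1-bit measurements
x = np.zeros(n)
x[:5] = rng.normal(size=5)         # a sparse signal

A = rng.normal(size=(m, n))        # random projections
tau = rng.normal(size=m)           # random biases / dithers (assumed Gaussian here)

y = np.sign(A @ x + tau)           # one 1-bit measurement per "node"
print(y[:10])
```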

More on that later.






Image Credit: NASA/JPL/Space Science Institute, Full-Res: W00088633.jpg

W00088633.jpg was taken on June 23, 2014 and received on Earth June 25, 2014. The camera was pointing toward SATURN at approximately 1,406,225 miles (2,263,099 kilometers) away, and the image was taken using the MT2 and CL2 filters.

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Distributed Outlier Detection using Compressive Sensing





In this paper, we study the computation of statistics over data that is stored across a distributed set of nodes. In this setting, a data vector of size N is distributed into L nodes such that each node holds a partial data in such a way that the global data vector is the sum of the locally stored data vectors. The goal is that each node transmits information to some global aggregator node, which can then compute the desired statistical function. For this setting, there are well-known lower bounds that show that even for simple aggregation functions such as the max-value, the total amount of communication required is at least linear in the size of the data N in the worst-case.
In this paper, we show that these lower bounds can be beaten if we assume that the underlying data has sparsity properties. Specifically, we devise a new algorithm for the distributed outlier detection problem that is based on compressive-sensing and exploits the sparse structure of the underlying data. We show both empirically on real web-scale production data as well as theoretically that such use of compressive sensing-based techniques can result in substantial improvements for distributed computing problems. Specifically, we prove under some conjecture that the algorithm can succeed in recovering outliers using only O(sc · logN) constant size messages, where c is a small constant and s is the sparsity of the underlying data.
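The reason a sketch-based approach can beat the linear-in-N bound is linearity: if every node compresses its local vector with the same random matrix, the aggregator can simply add the small sketches and obtain the sketch of the global data vector, from which the few large (outlier) entries can be found. Here is a minimal illustration in Python (the matched-filter detection step is a crude stand-in for the paper's actual recovery procedure, and all dimensions are my own choices):

```python
import numpy as np

rng = np.random.default_rng(3)

N, L, m = 10_000, 8, 200           # data size, number of nodes, sketch size

# each node holds a local vector; the global data vector is their sum
local = rng.normal(scale=0.01, size=(L, N))
local[0, 42] += 50.0               # plant an outlier in node 0's share
global_vec = local.sum(axis=0)

A = rng.normal(size=(m, N)) / np.sqrt(m)    # shared random sketching matrix

# each node sends only its m-dimensional sketch; sketches add up by linearity
sketches = [A @ local[l] for l in range(L)]
merged = sum(sketches)                      # equals A @ global_vec
assert np.allclose(merged, A @ global_vec)

# crude outlier identification: correlate the merged sketch with the columns of A
scores = np.abs(A.T @ merged)
print("suspected outlier coordinate:", int(scores.argmax()))
```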

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Impression Store: Compressive Sensing-based Storage for Big Data Analytics

Here is a related use of compressive sensing: simple storage for ads. Overall, I am very surprised that none of the references include any of the sketching contributions to compressive sensing [1,2]. In retrospect, it was wise for us to invite Muthu and Sam to the last Paris Machine Learning Meetup [3] so that these techniques don't get rediscovered every time.





For many big data analytics workloads, approximate results suffice. This begs the question, whether and how the underlying system architecture can take advantage of such relaxations, thereby lifting constraints inherent in today's architectures. This position paper explores one of the possible directions. Impression Store is a distributed storage system with the abstraction of big data vectors. It aggregates updates internally and responds to the retrieval of top-K high-value entries. With proper extension, Impression Store supports various aggregations, top-K queries, outlier and major mode detection. While restricted in scope, such queries represent a substantial and important portion of many production workloads. In return, the system has unparalleled scalability; any node in the system can process any query, both reads and updates. The key technique we leverage is compressive sensing, a technique that substantially reduces the amount of active memory state, IO, and traffic volume needed to achieve such scalability.



[1] Data Stream Algorithms, notes from a series of lectures by S. Muthu Muthukrishnan


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, June 26, 2014

About t'em Random Projections in Random Forests


If anything comes out of Kaggle competitions, it is that Random Forests [4,5,6], while not consistently giving the best scores, have pretty good results most of the time.

Random Forests [4] is typically a supervised learning technique, in Machine Learning parlance.

Recently, some folks have made some sort of connection between these techniques and compressive sensing or the Johnson-Lindenstrauss lemma. In one of the earliest cases [1], while not in a random forest setting, the authors used the sparsity of the positive labels within the larger label set to perform classification on compressed labels and used L1 solvers to re-identify the actual labels. In [2], the authors adapted this technique to train random trees on randomized label features. In [3], the authors took a different approach to random forests by noting that random trees are a mapping from features into a new, very sparse, feature space. They use this node sparsity in the new feature space to compress trees and reduce the memory footprint of the entire forest. This is great, but what if randomization could be used to build compressed forests from the get-go? Not much has been done, it seems, on that front. Aside from the random picking of features in decision trees, Leo Breiman's original paper on Random Forests [4] contained another interesting approach reminiscent of the random projections used in compressive sensing (a small sketch follows the quote below):
"...5. Random Forests Using Linear Combinations of Inputs

At a given node, L variables are randomly selected and added together with coefficients that are uniform random numbers on [-1,1]. F linear combinations are generated, and then a search is made over these for the best split. This procedure is called Forest-RC...."
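A minimal sketch of that Forest-RC split-generation step might look as follows in Python (the variance-reduction criterion, the quantile grid of candidate thresholds and all parameter values are my own illustrative choices, not Breiman's code):

```python
import numpy as np

def forest_rc_split(X, y, L=3, F=10, rng=np.random.default_rng(4)):
    """One node of Forest-RC: generate F random linear combinations of L
    randomly chosen inputs (coefficients uniform on [-1, 1]) and keep the
    combination/threshold pair with the best variance reduction."""
    n, p = X.shape
    best = (None, None, np.inf)          # (projection, threshold, score)
    for _ in range(F):
        idx = rng.choice(p, size=min(L, p), replace=False)
        w = np.zeros(p)
        w[idx] = rng.uniform(-1, 1, size=len(idx))
        z = X @ w                        # projected feature, as in Breiman's Forest-RC
        for t in np.quantile(z, np.linspace(0.1, 0.9, 9)):
            left, right = y[z <= t], y[z > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = len(left) * left.var() + len(right) * right.var()
            if score < best[2]:
                best = (w, t, score)
    return best

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 8))
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=300)
w, t, score = forest_rc_split(X, y, rng=rng)
print("threshold:", round(t, 3), "weighted variance:", round(score, 2))
```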

One notes that, for large datasets, more randomized measurements yield similar or better accuracy than plain-vanilla Random Forests or Adaboost.


More on that later.



[3] L1-based compression of random forest models, Arnaud Joly, François Schnitzler, Pierre Geurts, Louis Wehenkel
[5] Data Mining with Random Forests™, A Brief Overview to RandomForests, Dan Steinberg, Mikhail Golovnya, N. Scott Cardell

Wednesday, June 25, 2014

Randomized Comments: Compressive Sensing Ten Year Anniversary, A Comment; A Review, Beyond gaussians with AMP, Searching Nuit Blanche with two labels, The Golosino seminar



I asked Emmanuel Candes about the presentations which, for the first time, introduced some people to compressive sensing and its attendant phase transitions. Here is his response:

Igor,
If you go here: http://statweb.stanford.edu/~candes/OldTalks.html, you will see three talks with phase transition diagrams. The first two have taken place on May 3 and May 28, 2004. The third on July 14, 2004.
Thanks,
Emmanuel
Thanks Emmanuel! So it's been ten years already, huh...

Phil Schniter just sent me the following:

Hi Igor,
I saw the "Cal-AMP" paper and wanted to mention that the approach discussed there (where GAMP is used with a known linear transform but output channels that depend on additional unknown parameters) has already been proposed and applied successfully to a number of publications over the last 3-4 years. Here are just a few of the related papers:
One paper is http://www2.ece.ohio-state.edu/~schniter/pdf/jstsp11_ofdm.pdf, which considers joint channel-estimation and symbol decoding in AWGN-corrupted OFDM. If you look at the factor graph in Figure 4 of that paper, you can see that the goal is to jointly estimate the finite-alphabet symbols s_n and the channel coefficients x_n. The x_n affect the outputs y_n through a dense linear transform (as is standard for AMP), but s_n acts as an unknown multiplicative gain on the n'th linear-transform output (as in Cal-AMP). This latter work goes beyond Cal-AMP by considering the case of non-independent s_n. (See also the related papers and matlab code at http://www2.ece.ohio-state.edu/~schniter/turboGAMPdecoding/index.html)
Another paper is http://www2.ece.ohio-state.edu/~schniter/pdf/tsp14_impulse.pdf, which generalizes the above OFDM work from AWGN to AWGN-and-impulse-corruption, using Gaussian mixture models for the impulses. In other words, it uses an output channel of the form p(y_n|z_n,s_n,i_n), where vector z is a linear transform of unknown vector x, where s_n is an unknown multiplicative gain on the transform output, and where i_n is an additional additive interference beyond the AWGN. Moreover, the i_n can come in bursts across index n that is controlled by a Markov chain, and s_n can be probabilistically coupled through an error-control-code.
Another paper is http://arxiv.org/pdf/1401.0872v1.pdf, where the outputs are binary. Section IV.D of that work considers the case where an unknown subset of outputs are corrupted (i.e., their bits are flipped).
Thanks,
Phil
--
Phil Schniter
Professor
The Ohio State University
Department of Electrical and Computer Engineering
616 Dreese Lab, 2015 Neil Ave, Columbus, OH 43210
http://www.ece.osu.edu/~schniter

Thanks Phil  !



Hi Igor,
really glad you enjoyed the open review process. I don't know if you've seen yet, but the Publons has just uploaded all of our reviews, providing further discoverability, transparency and credit for all the hard work you guys have done. This is your review here: https://publons.com/review/3878/. Thanks so much again for your help with this!
Thanks Scott !


On the SubReddit group, I was recently asked whether an implementation could do well on images. Underneath this question is the reality that while AMP-based reconstruction solvers do well in specific cases (Gaussian measurement matrices), they just do not deliver well on most other accounts, as summarized very elegantly in this presentation slide by Sundeep Rangan (excerpted from Approximate Message Passing: Can it Work?).




I have no clue beforehand whether an AMP solver will work with a specific measurement matrix or with certain types of data (compressible as opposed to strictly sparse). Here is what I know: when implementations are made available here, aside from the authors, you might be the first person on Earth to try them out on your own case of interest. It could be a certain type of images or some uncertainty quantification examples. There are no ropes, no safety lines; you are on your own when it comes to discovering the real capabilities of the algorithm that has been submitted by the author. If it doesn't work well on your dataset, I am sure the authors would want to hear from your exploration of that phase space. As a reminder, here is a recent sample of implementations that aim at improving the AMP algorithm's convergence with a wider variety of measurement matrices:

In other news: the Golosino seminar, organized by Florent Krzakala and Lenka Zdeborova, had a slew of interesting presentations recently:


You can search through this blog with more than one label. In effect, to search for blog entries featuring an implementation with a compressive sensing theme, or an implementation dealing with matrix factorization, you need to query either:

and

respectively. You could add a third label (I got this tip from this blog). Here are some of the labels I use often:


Finally, my name is Igor Carron, not Nuit Blanche, not Igor Caron (or more recently here), nor Igor Carren. No biggie, but I thought you'd want to know.



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Cal-AMP : Blind Sensor Calibration using Approximate Message Passing


From the paper: 


One issue that can arise in CS is a lack of knowledge or an uncertainty on the exact measurement process. A known example is dictionary learning, where the measurement matrix F is not known. The dictionary learning problem can also be solved with an AMP-based algorithm if the number P of available signal samples grows as N [13].
A different kind of uncertainty is when the linear transformation F, corresponding to the mixing process, is known, but the sensing process is only known up to a set of parameters. In that case, it is necessary to estimate these parameters accurately in order for exact signal reconstruction to be possible. In some cases, it might be possible to estimate these parameters prior to the measurements in a supervised sensor calibration process, during which one measures the outputs produced by known training signals, and in this way estimate the parameters for each of the sensors. In other cases, this might not be possible or practical - think in particular of a setting in which these parameters can change over time, which would make a calibration step necessary prior to each measure. This is known as the blind sensor calibration problem, as the input signals are not known, and it is schematically shown in Fig. 2.

What you see is unsupervised learning of the transfer function between signals with particular properties and recorded data. If you didn't know better, you'd think we were talking about a machine learning task. Let us note the use of sharp phase transitions to evaluate the capability of the algorithm to do the work. This is a line of investigation that ought to be looked into from the Machine Learning side, if you ask me (see Sunday Morning Insight: Sharp Phase Transitions in Machine Learning?, Sunday Morning Insight: Randomization is not a dirty word), but then nobody asks me :-)
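For intuition, the decalibration setting considered in the paper can be written down in a few lines. Here is a Python sketch of the forward model only (dimensions and the Gaussian gain model are my own illustrative choices); the hard part, which Cal-AMP addresses with message passing, is recovering both x and the gains d from y and F alone:

```python
import numpy as np

rng = np.random.default_rng(5)

n, m, k = 200, 100, 10              # signal size, number of sensors, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse signal

F = rng.normal(size=(m, n)) / np.sqrt(n)   # known mixing matrix
d = 1.0 + 0.3 * rng.normal(size=m)         # unknown per-sensor multiplicative gains

y = d * (F @ x)                            # what the decalibrated sensors record
# Blind sensor calibration: estimate both x and d given only y and F
# (this snippet only sets up the model; Cal-AMP does the joint recovery).
print(y[:5])
```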



The ubiquity of approximately sparse data has led a variety of communities to great interest in compressed sensing algorithms. Although these are very successful and well understood for linear measurements with additive noise, applying them on real data can be problematic if imperfect sensing devices introduce deviations from this ideal signal acquisition process, caused by sensor decalibration or failure. We propose a message passing algorithm called calibration approximate message passing (Cal-AMP) that can treat a variety of such sensor-induced imperfections. In addition to deriving the general form of the algorithm, we numerically investigate two particular settings. In the first, a fraction of the sensors is faulty, giving readings unrelated to the signal. In the second, sensors are decalibrated and each one introduces a different multiplicative gain to the measures. Cal-AMP shares the scalability of approximate message passing, allowing to treat big sized instances of these problems, and experimentally exhibits a phase transition between domains of success and failure.


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, June 24, 2014

Enhancing Pure-Pixel Identification Performance via Preconditioning - implementation -

As you probably recall, NMF has only recently begun to find a theoretical understanding, and that understanding hinges on the ability to find "pure pixels", in hyperspectral parlance: pixels that carry only one elemental signature. Here is a way to find those pure pixels:



In this paper, we analyze different preconditionings designed to enhance robustness of pure-pixel search algorithms, which are used for blind hyperspectral unmixing and which are equivalent to near-separable nonnegative matrix factorization algorithms. Our analysis focuses on the successive projection algorithm (SPA), a simple, efficient and provably robust algorithm in the pure-pixel algorithm class. Recently, a provably robust preconditioning was proposed by Gillis and Vavasis (arXiv:1310.2273) which requires the resolution of a semidefinite program (SDP) to find a data points-enclosing minimum volume ellipsoid. Since solving the SDP in high precisions can be time consuming, we generalize the robustness analysis to approximate solutions of the SDP, that is, solutions whose objective function values are some multiplicative factors away from the optimal value. It is shown that a high accuracy solution is not crucial for robustness, which paves the way for faster preconditionings (e.g., based on first-order optimization methods). This first contribution also allows us to provide a robustness analysis for two other preconditionings. The first one is pre-whitening, which can be interpreted as an optimal solution of the same SDP with additional constraints. We analyze robustness of pre-whitening which allows us to characterize situations in which it performs competitively with the SDP-based preconditioning. The second one is based on SPA itself and can be interpreted as an optimal solution of a relaxation of the SDP. It is extremely fast while competing with the SDP-based preconditioning on several synthetic data sets.
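For readers who have not met SPA before, here is a minimal Python sketch of the plain, unpreconditioned algorithm (not the preconditioned variants studied in the paper; the toy near-separable data is my own construction): it repeatedly picks the column of largest norm and projects the data onto the orthogonal complement of that column.

```python
import numpy as np

def spa(M, r):
    """Successive Projection Algorithm: return indices of r (near-)pure columns."""
    R = M.astype(float)
    selected = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))   # column with largest norm
        selected.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)                      # project out that direction
    return selected

# toy near-separable data: M = W H where a few columns of H are unit vectors
rng = np.random.default_rng(6)
W = rng.random((30, 4))                  # 4 endmembers ("pure" spectra)
H = rng.dirichlet(np.ones(4), size=100).T
H[:, :4] = np.eye(4)                     # the first 4 columns are pure pixels
M = W @ H + 1e-3 * rng.normal(size=(30, 100))
print(sorted(spa(M, 4)))                 # should recover indices 0, 1, 2, 3
```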



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

From Denoising to Compressed Sensing

 Rich just sent me the following:


hi igor -

we have been working on integrating advanced denoising algorithms into compressive sensing recovery algorithms and have found that approximate message passing (AMP) provides a flexible platform that can support a variety of denoisers since the Onsager correction Gaussianizes the error at each iteration. for image recovery, coupling AMP with the BM3D denoiser and the correct Onsager correction offers state-of-the-art performance. our preprint has a range of comparisons with other approaches and also an in depth analysis of the approach. i thought it might be interesting to your audience. thanks!


richb
Richard G. Baraniuk
Victor E. Cameron Professor of Electrical and Computer Engineering
Founder and Director, Connexions and OpenStax College
Rice University 

Sure Rich.





A denoising algorithm seeks to remove perturbations or errors from a signal. The last three decades have seen extensive research devoted to this arena, and as a result, today's denoisers are highly optimized algorithms that effectively remove large amounts of additive white Gaussian noise. A compressive sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, in this paper, we develop a denoising-based approximate message passing (D-AMP) algorithm that is capable of high-performance reconstruction. We demonstrate that, for an appropriate choice of denoiser, D-AMP offers state-of-the-art CS recovery performance for natural images. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A critical insight in our approach is the use of an appropriate Onsager correction term in the D-AMP iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
(Update: Implementation is here.)
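To give a feel for the structure of D-AMP, here is a minimal Python sketch (not the authors' code: soft thresholding stands in for BM3D, and the threshold rule and Monte Carlo divergence estimate are my own illustrative choices). The point is the extra Onsager term in the residual update, which keeps the effective noise seen by the denoiser approximately Gaussian:

```python
import numpy as np

rng = np.random.default_rng(7)

n, m, k = 1000, 400, 30
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # sparse test signal
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x0

def denoise(v, sigma):
    """Placeholder denoiser: soft thresholding (D-AMP plugs in BM3D instead)."""
    return np.sign(v) * np.maximum(np.abs(v) - 1.5 * sigma, 0)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    sigma = np.linalg.norm(z) / np.sqrt(m)            # effective noise level
    pseudo = x + A.T @ z
    x_new = denoise(pseudo, sigma)
    # Monte Carlo estimate of the denoiser's divergence, used in the Onsager term
    eps = sigma / 1000 + 1e-12
    probe = rng.normal(size=n)
    div = probe @ (denoise(pseudo + eps * probe, sigma) - denoise(pseudo, sigma)) / eps
    z = y - A @ x_new + (div / m) * z                 # residual with Onsager correction
    x = x_new
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```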
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

NLCS : Non-Local Compressive Sampling Recovery - implementation -







Compressive sampling (CS) aims at acquiring a signal at a sampling rate below the Nyquist rate by exploiting prior knowledge that a signal is sparse or correlated in some domain. Despite the remarkable progress in the theory of CS, the sampling rate on a single image required by CS is still very high in practice. In this paper, a non-local compressive sampling (NLCS) recovery method is proposed to further reduce the sampling rate by exploiting non-local patch correlation and local piecewise smoothness present in natural images. Two non-local sparsity measures, i.e., non-local wavelet sparsity and non-local joint sparsity, are proposed to exploit the patch correlation in NLCS. An efficient iterative algorithm is developed to solve the NLCS recovery problem, which is shown to have stable convergence behavior in experiments. The experimental results show that our NLCS significantly improves the state-of-the-art of image compressive sampling.


The attendant code is on Xianbiao Shu's page.

Also on Xianbiao Shu's page, here are earlier implementations:
  • Xianbiao Shu, Narendra Ahuja. "Imaging via Three-dimensional Compressive Sampling (3DCS)". Proc. of International Conference on Computer Vision (ICCV), 2011 (pdf, code).
  • Xianbiao Shu, Narendra Ahuja. "Hybrid Compressive Sampling via a New Total Variation TVL1". Proc. of European Conference on Computer Vision (ECCV), 2010 (pdf, code).


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, June 23, 2014

ROSL : Robust Orthonormal Subspace Learning: Efficient Recovery of Corrupted Low-rank Matrices - implementation -

Xianbiao Shu just sent me the following:

Hi Igor

I am Xianbiao, currently working at Qualcomm after graduation from the University of Illinois at Urbana-Champaign (UIUC).
Recently, I have two interesting papers:
(1) "Non-Local Compressive Sampling Recovery": which can significantly reduces the sampling rate by using non-local patch correlation.
(2) "Robust Orthonormal Subspace Learning: Efficient Recovery of Corrupted Low-rank Matrices": which can recover the low-rank matrices at linear complexity.

Would you do me a favor by posting them on Nuit Blanche?


The detailed paper information and code are shared on 
Thanks a lot.
Best regards,

Sure Xianbiao! Here is the first one:



Low-rank matrix recovery from a corrupted observation has many applications in computer vision. Conventional methods address this problem by iterating between nuclear norm minimization and sparsity minimization. However, iterative nuclear norm minimization is computationally prohibitive for large-scale data (e.g., video) analysis. In this paper, we propose a Robust Orthogonal Subspace Learning (ROSL) method to achieve efficient low-rank recovery. Our intuition is a novel rank measure on the low-rank matrix that imposes the group sparsity of its coefficients under orthonormal subspace. We present an efficient sparse coding algorithm to minimize this rank measure and recover the low-rank matrix at quadratic complexity of the matrix size. We give theoretical proof to validate that this rank measure is lower bounded by nuclear norm and it has the same global minimum as the latter. To further accelerate ROSL to linear complexity, we also describe a faster version (ROSL+) empowered by random sampling. Our extensive experiments demonstrate that both ROSL and ROSL+ provide superior efficiency against the state-of-the-art methods at the same level of recovery accuracy.
The attendant code is on Xianbiao Shu's page.


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.