
## Monday, April 27, 2015

### Compressed Sensing Petrov-Galerkin Approximations for Parametric PDEs

Responding to a question I had on Twitter about the following paper and its connection with uncertainty quantification, Jean-Luc responded with the following:

Dear Igor,

Sorry for the late reply regarding your comment on Twitter. I preferred to reply by email as I'm guessing I'm going to go over the character limitation :)

Our work is directly related to problems in uncertainty quantification. The reason why it's not really obvious in this small note is that we had a 5-page restriction (this is mandatory for SampTA 2015) and decided to focus on the sampling/approximation results.

Why does it relate to uncertainty quantification? Consider the case where the 'parameters' y are in fact outputs of a random field with a certain probability distribution (see http://www.math.tamu.edu/~rdevore/publications/139.pdf or any other publications from Cohen, Devore, Schwab for more details), then you can recast the problem from uncertainty quantification into a parametric approach (in a sense): take a PCA / KL decomposition of the random field, and you get a parametric representation. Hence, yes, our results do apply for uncertainty quantification, even though they are phrased from another point of view right now.

We have other manuscripts in preparation that should be done (at least in a preprint form) by late June - I'll let you know when this is the case, if you have any interest? I will try to write a bit more details regarding the relation to uncertainty quantification and the relevance of our work on this topic.

Let me know if you have any questions,

Best,
Jean-Luc
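The PCA/KL recasting Jean-Luc describes above is easy to sketch. The following is a generic illustration under my own assumptions (a 1-D Gaussian field with squared-exponential covariance on a grid), not code from any of the papers:

```python
import numpy as np

# Truncated Karhunen-Loeve (KL) expansion of a 1-D random field:
# eigendecompose the covariance, keep the top M modes, and the field
# becomes a function of the finite parameter vector y.
n, ell, M = 200, 0.2, 10                 # grid size, correlation length, KL terms
x = np.linspace(0.0, 1.0, n)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell ** 2))  # covariance matrix

lam, phi = np.linalg.eigh(C)             # eigh returns ascending eigenvalues
lam, phi = lam[::-1][:M], phi[:, ::-1][:, :M]

def field(y):
    """Parametric representation a(x, y) = sum_j sqrt(lam_j) * phi_j(x) * y_j."""
    return phi @ (np.sqrt(lam) * y)

sample = field(np.random.default_rng(0).standard_normal(M))  # one realization
print(sample.shape)
```

Feeding i.i.d. standard normal y into `field` approximately recovers draws of the original random field, which is the parametric viewpoint alluded to above.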

...I forgot to mention, it might be interesting to link to the following papers:
Starting papers:
* Cohen, Devore, Schwab, Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDEs: http://www.math.tamu.edu/~rdevore/publications/143.pdf
* Cohen, Devore, Schwab, Convergence rates of best N-term Galerkin approximations for a class of elliptic sPDEs: http://www.math.tamu.edu/~rdevore/publications/139.pdf
The previous two papers describe the general ideas and first results behind the compressibility of the polynomial chaos expansions of the solution map.

* Cohen, Chkifa, Schwab, Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs: http://e-collection.library.ethz.ch/eserv/eth:47390/eth-47390-01.pdf

Compressed sensing Petrov-Galerkin:
* Doostan et al, A non-adaptive sparse approximation of pdes with stochastic inputs: http://ctr.stanford.edu/ResBriefs09/10_doostan.pdf (first numerical steps)
* Rauhut, Schwab, Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations: http://www.mathc.rwth-aachen.de/~rauhut/files/csparampde.pdf (theoretical analysis)
* Bouchot et al, Compressed sensing Petrov-Galerkin approximations for Parametric PDEs (direct application from the previous paper)

There has been quite a bit of research lately also on L2 minimization and projections onto polynomial spaces. But I guess it gets a little out of scope here. I'll send you pointers if you're interested.

Cheers,
JL

We consider the computation of parametric solution families of high-dimensional stochastic and parametric PDEs. We review recent theoretical results on sparsity of polynomial chaos expansions of parametric solutions, and on compressed sensing based collocation methods for their efficient numerical computation.
With high probability, these randomized approximations realize best N-term approximation rates afforded by solution sparsity and are free from the curse of dimensionality, both in terms of accuracy and number of sample evaluations (i.e. PDE solves). Through various examples we illustrate the performance of Compressed Sensing Petrov-Galerkin (CSPG) approximations of parametric PDEs, for the computation of (functionals of) solutions of integral and differential operators on high-dimensional parameter spaces. The CSPG approximations reduce the number of PDE solves, as compared to Monte-Carlo methods, while being likewise nonintrusive, and being “embarrassingly parallel”, unlike dimension-adaptive collocation or Galerkin methods.
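To make the collocation idea concrete, here is a toy sketch under my own assumptions: sample a "solution map" with a sparse Legendre expansion at random parameter points, then recover the coefficients greedily with orthogonal matching pursuit (a simple stand-in for the weighted l1 minimization used in the CSPG papers):

```python
import numpy as np

rng = np.random.default_rng(0)

m, N, s = 80, 200, 5                       # samples, basis size, sparsity
y = rng.uniform(-1.0, 1.0, m)              # random parameter draws

# Sampling matrix A[i, k] = P_k(y_i) (Legendre), columns normalized.
A = np.stack([np.polynomial.legendre.Legendre.basis(k)(y) for k in range(N)],
             axis=1)
A /= np.linalg.norm(A, axis=0)

c_true = np.zeros(N)                       # sparse polynomial chaos coefficients
support = rng.choice(N, size=s, replace=False)
c_true[support] = rng.choice([-1.0, 1.0], size=s)
b = A @ c_true                             # noiseless samples of the solution map

# Orthogonal matching pursuit: pick the most correlated atom, refit, repeat.
S, r = [], b.copy()
for _ in range(s):
    S.append(int(np.argmax(np.abs(A.T @ r))))
    coef = np.linalg.lstsq(A[:, S], b, rcond=None)[0]
    r = b - A[:, S] @ coef

c_hat = np.zeros(N)
c_hat[S] = coef
print(sorted(S), sorted(support.tolist()))  # recovered vs. true support
```

The point is only that the number of samples m can be far below the basis size N when the coefficient vector is sparse; the actual papers use PDE solves in place of the synthetic samples and come with recovery guarantees.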

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

## Thursday, April 23, 2015

### AutoML Challenge: Python Notebooks for Round 1 and more...

In the Sunday Morning Insight entry entitled The Hardest Challenges We Should be Unwilling to Postpone, I mentioned a challenge set up by Isabelle Guyon entitled the AutoML challenge ( http://codalab.org/AutoML, her presentation is here). In short, the idea is to have a Kaggle-like challenge that features several datasets of increasing difficulty and see how algorithm entries fare with these different datasets. Deep down, the algorithm needs to pay attention to its own running time and have a nice way of automatically selecting relevant features.

With Franck, we decided to use the mighty power of the large membership of the Paris Machine Learning meetup (Top 5 in the world) to help set up a day-long hackathon so that local teams could participate in the challenge. Round 1 of the challenge is now over, and we are currently in the Tweakathon1 stage, where you can submit code that will eventually be run automatically on May 15 for AutoML2. From here:

Tweakathon1
Continue practicing on the same data (the phase 1 data are now available for download from the 'Get Data' page). In preparation for phase 2, submit code capable of producing predictions on both VALIDATION AND TEST DATA. The leaderboard shows scores on phase 1 validation data only.

AutoML2

Start: May 15, 2015, 11:59 p.m.

Description: INTERMEDIATE phase on multiclass classification problems. Blind test of the code on NEW DATA: There is NO NEW SUBMISSION. The last code submitted in phase 1 is run automatically on the new phase 2 datasets. [+] Prize winning phase.

Tweakathon2
Start: May 16, 2015, 11:59 p.m.

Description: Continue practicing on the same data (the data are now available for download from the 'Get Data' page). In preparation for phase 3, submit code capable of producing predictions on both VALIDATION AND TEST DATA. The leaderboard shows scores on phase 2 validation data only.

Here are some of the presentations made during the hackathon and some of the attendant Python notebooks released for Tweakathon1:

The page for the hackathon is here. A big thank you to Pierre Roussel for hosting us at ESPCI ParisTech and to the coaches.


### FPA-CS: Focal Plane Array-based Compressive Imaging in Short-wave Infrared

George sent me the following a few days ago:

Dear Igor,

My name is Huaijin (George) Chen, a PhD student at Rice working with Ashok Veeraraghavan. We recently got our paper "FPA-CS: Focal Plane Array-based Compressive Imaging in Short-wave Infrared" accepted at CVPR (arXiv pre-print attached). We were able to achieve 1 megapixel video at 32 fps using a 64x64 focal plane sensor in our system. We were wondering if you could kindly mention it on your renowned Nuit Blanche website? Thank you so much!

http://arxiv.org/abs/1504.04085

Sincerely,

-George
Sure George, it is important to notice when CS imaging hardware implementations go beyond the traditional plain vanilla single pixel camera concept. Without further ado: FPA-CS: Focal Plane Array-based Compressive Imaging in Short-wave Infrared by Huaijin Chen, M. Salman Asif, Aswin C. Sankaranarayanan, Ashok Veeraraghavan

Cameras for imaging in short and mid-wave infrared spectra are significantly more expensive than their counterparts in visible imaging. As a result, high-resolution imaging in those spectra remains beyond the reach of most consumers. Over the last decade, compressive sensing (CS) has emerged as a potential means to realize inexpensive short-wave infrared cameras. One approach for doing this is the single-pixel camera (SPC) where a single detector acquires coded measurements of a high-resolution image. A computational reconstruction algorithm is then used to recover the image from these coded measurements. Unfortunately, the measurement rate of a SPC is insufficient to enable imaging at high spatial and temporal resolutions.

We present a focal plane array-based compressive sensing (FPA-CS) architecture that achieves high spatial and temporal resolutions. The idea is to use an array of SPCs that sense in parallel to increase the measurement rate, and consequently, the achievable spatio-temporal resolution of the camera. We develop a proof-of-concept prototype in the short-wave infrared using a sensor with 64$\times$64 pixels; the prototype provides a 4096$\times$ increase in the measurement rate compared to the SPC and achieves a megapixel resolution at video rate using CS techniques.
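The headline numbers can be checked with back-of-the-envelope arithmetic (my own accounting, not the authors'): a 64x64 array acts as 4096 parallel single-pixel cameras, each responsible for one block of the megapixel image:

```python
sensor_px = 64 * 64            # parallel single-pixel cameras (SPCs)
image_px = 1024 * 1024         # target megapixel reconstruction
gain = sensor_px               # measurement-rate gain over a single SPC
block = image_px // sensor_px  # pixels each SPC must recover
print(gain, block)             # 4096 256 (a 16x16 block per SPC)
```

So each detector only needs to compressively recover a 16x16 patch, which is what makes megapixel video at 32 fps plausible.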


## Wednesday, April 22, 2015

### CSJob: Post-Doc: Learning Representations for Large Scale Multivariate Data

Jerome Bobin sent me the following opportunity the other day:

POST-DOC : LEARNING REPRESENTATIONS FOR LARGE-SCALE MULTIVARIATE DATA.

The concept of sparsity and sparse signal representations has led to the development of very efficient analysis methods in imaging science. Most state-of-the-art solutions to classical inverse problems in imaging are grounded in sparsity: denoising, deconvolution, inpainting, blind source separation, etc. [SMF10]. Fixed or analytic signal representations, such as the celebrated wavelet transforms, curvelet frames, and bandlets, to name only a few [SMF10], make it possible to compressively encode the inner geometrical structures of generic signals from a few basis elements or atoms. Since compressibility or sparsity is the key principle, dictionary learning techniques [AEB06,RPE13] have more recently been introduced to provide data-driven, and therefore more efficient, sparse signal representations.

The appeal of dictionary learning techniques lies in their ability to capture a very wide range of signal/image content or morphologies, which makes them the perfect tool for analyzing complex real-world datasets. However, these methods have seldom been extended to learn sparse representations of multivariate data such as multi/hyperspectral data, which play a prominent role in scientific fields as different as remote sensing, biomedical imaging and astrophysics. Studying extensions of dictionary learning techniques to derive sparse representations that are specifically tailored for multispectral data is therefore fundamental in imaging science. In this context, the goals of this research project are:

• Extend dictionary learning techniques to analyze multi/hyperspectral data. We will particularly focus on studying dedicated learning strategies to extract sparse multivariate representations.

• Apply and evaluate the proposed representations for solving key inverse problems in multispectral imaging such as missing data interpolation (inpainting), reconstruction from incomplete and incoherent measurements (compressed sensing), etc.

• Pay particular attention to the design of learning procedures that can perform in the large-scale setting. This implies that the project will include investigating computationally efficient learning/solving algorithms, with a specific focus on modern-day methods grounded in non-smooth convex optimization.
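For readers unfamiliar with the learning step mentioned above, here is a deliberately small numpy sketch of alternating sparse coding and dictionary update (the MOD variant rather than K-SVD [AEB06]), on synthetic data standing in for multispectral pixels; everything here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

d, n, K, s = 20, 500, 30, 3                 # signal dim, samples, atoms, sparsity
D_true = rng.standard_normal((d, K))
D_true /= np.linalg.norm(D_true, axis=0)
A_true = np.zeros((K, n))
for j in range(n):                          # ground-truth sparse codes
    idx = rng.choice(K, s, replace=False)
    A_true[idx, j] = rng.standard_normal(s)
X = D_true @ A_true                         # synthetic "multispectral pixels"

def sparse_code(D, X, s):
    """Crude sparse coding: support = s largest correlations, then least squares."""
    A = np.zeros((D.shape[1], X.shape[1]))
    corr = D.T @ X
    for j in range(X.shape[1]):
        idx = np.argsort(-np.abs(corr[:, j]))[:s]
        A[idx, j] = np.linalg.lstsq(D[:, idx], X[:, j], rcond=None)[0]
    return A

D = rng.standard_normal((d, K))
D /= np.linalg.norm(D, axis=0)
for _ in range(20):
    A = sparse_code(D, X, s)
    D = X @ np.linalg.pinv(A)               # MOD dictionary update
    D /= np.linalg.norm(D, axis=0) + 1e-12  # renormalize atoms

A = sparse_code(D, X, s)
err = np.linalg.norm(X - D @ A) / np.linalg.norm(X)
print(round(float(err), 3))
```

The project itself targets K-SVD-style and analysis variants at much larger scale; this only shows the alternating structure that those algorithms share.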

These developments will be applied to analyze real-world datasets in astrophysics, which can include the Planck data1 or the Fermi/LAT data2.

Any candidate must have a PhD and have a strong background in image/signal processing, especially in sparse signal processing. A good knowledge of convex optimization is a plus.

• Contact: jbobin@cea.fr or florent.sureau@cea.fr
• Laboratory: CEA/IRFU/Cosmostat in Saclay http://www.cosmostat.org
• Financing: European project DEDALE http://dedale.cosmostat.org
• Duration: 3 years : 2015-2018
• Applications are expected prior to May 31st, 2015

[AEB06] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, 2006.

[RPE13] Ron Rubinstein, Tomer Peleg, and Michael Elad. Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model. IEEE Transactions on Signal Processing, 61(3):661–677, 2013.
[SMF10] J.-L. Starck, F. Murtagh, and M.J. Fadili. Sparse Image and Signal Processing. Cambridge University Press, 2010.

1 http://sci.esa.int/planck/

2 http://fermi.gsfc.nasa.gov


## Tuesday, April 21, 2015

### Thesis: Empirical-Bayes Approaches to Recovery of Structured Sparse Signals via Approximate Message Passing, Jeremy Vila

In recent years, there have been massive increases in both the dimensionality and sample sizes of data due to ever-increasing consumer demand coupled with relatively inexpensive sensing technologies. These high-dimensional datasets bring challenges such as complexity, along with numerous opportunities. Though many signals of interest live in a high-dimensional ambient space, they often have a much smaller inherent dimensionality which, if leveraged, leads to improved recoveries. For example, the notion of sparsity is a requisite in the compressive sensing (CS) field, which allows for accurate signal reconstruction from sub-Nyquist sampled measurements given certain conditions. When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution is a priori known, then one could use computationally efficient approximate message passing (AMP) techniques that yield approximate minimum MSE (MMSE) estimates or critical points to the maximum a posteriori (MAP) estimation problem. In practice, though, the distribution is unknown, motivating the use of robust, convex algorithms such as the LASSO, which is nearly minimax optimal, at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, this dissertation focuses on empirical-Bayesian techniques that simultaneously learn the underlying signal distribution using the expectation-maximization (EM) algorithm while recovering the signal. These techniques are well-justified in the high-dimensional setting since, in the large system limit under specific problem conditions, the MMSE version of AMP’s posteriors converge to the true posteriors and a generalization of the resulting EM procedure yields consistent parameter estimates. Furthermore, in many practical applications, we can exploit additional signal structure beyond simple sparsity for improved MSE.
In this dissertation, we investigate signals that are non-negative, obey linear equality constraints, and exhibit amplitude correlation/structured sparsity across their elements. To perform statistical inference on these structured signals, we first demonstrate how to incorporate these structures into our Bayesian model, then employ a technique called “turbo” approximate message passing on the underlying factor graph. Specifically, we partition the factor graph into the Markov and generalized linear model subgraphs, the latter of which can be efficiently implemented using approximate message passing methods, and combine the subgraphs using a “turbo” message passing approach. Numerical experiments on the compressive sensing and hyperspectral unmixing applications confirm the state-of-the-art performance of our approach, in both reconstruction error and runtime, on both synthetic and real-world datasets.
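For readers new to AMP, a textbook soft-thresholding AMP recursion (my own minimal sketch with an ad hoc threshold policy, not the dissertation's EM-tuned or turbo variants) looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, k = 400, 200, 20                     # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0 + 0.01 * rng.standard_normal(m)

def soft(v, t):
    """Soft-thresholding denoiser."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(50):
    tau = 2.0 * np.sqrt(np.mean(z ** 2))   # simple data-driven threshold
    x_new = soft(x + A.T @ z, tau)
    # Onsager correction: this extra term is what distinguishes AMP from
    # plain iterative soft thresholding.
    z = y - A @ x_new + (np.count_nonzero(x_new) / m) * z
    x = x_new

rel = float(np.linalg.norm(x - x0) / np.linalg.norm(x0))
print(round(rel, 3))
```

The empirical-Bayes work replaces the fixed soft threshold with a learned MMSE denoiser whose parameters are updated by EM at each iteration.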


### Streaming: Memory Limited Matrix Completion with Noise, Verifiable Stream Computation, Count-Min-Log sketch

Streaming is bound to take center stage if we are talking about Big Data...or Data as we call it in Texas. Today, we have an algorithm for matrix completion, a paper on verifying the properties of certain streams, and an approximation of the count-min sketch. Without further ado:

Streaming, Memory Limited Matrix Completion with Noise by Se-Young Yun, Marc Lelarge, Alexandre Proutiere

In this paper, we consider the streaming memory-limited matrix completion problem when the observed entries are noisy versions of a small random fraction of the original entries. We are interested in scenarios where the matrix size is so large that the matrix is hard to store and manipulate. Here, columns of the observed matrix are presented sequentially and the goal is to complete the missing entries after one pass over the data with limited memory space and limited computational complexity. We propose a streaming algorithm which produces an estimate of the original matrix with a vanishing mean square error, uses memory space scaling linearly with the ambient dimension of the matrix, i.e. the memory required to store the output alone, and performs a number of computations proportional to the number of non-zero entries of the input matrix.
Verifiable Stream Computation and Arthur–Merlin Communication by Amit Chakrabarti, Graham Cormode, Andrew McGregor, Justin Thaler, Suresh Venkatasubramanian
In the setting of streaming interactive proofs (SIPs), a client (verifier) needs to compute a given function on a massive stream of data, arriving online, but is unable to store even a small fraction of the data. It outsources the processing to a third party service (prover), but is unwilling to blindly trust answers returned by this service. Thus, the service cannot simply supply the desired answer; it must convince the verifier of its correctness via a short interaction after the stream has been seen.

In this work we study "barely interactive" SIPs. Specifically, we show that two or three rounds of interaction suffice to solve several query problems --- including Index, Median, Nearest Neighbor Search, Pattern Matching, and Range Counting --- with polylogarithmic space and communication costs. Such efficiency with O(1) rounds of interaction was thought to be impossible based on previous work.

On the other hand, we initiate a formal study of the limitations of constant-round SIPs by introducing a new hierarchy of communication models called Online Interactive Proofs (OIPs). The online nature of these models is analogous to the streaming restriction placed upon the verifier in an SIP. We give upper and lower bounds that (1) characterize, up to quadratic blowups, every finite level of the OIP hierarchy in terms of other well-known communication complexity classes, (2) separate the first four levels of the hierarchy, and (3) reveal that the hierarchy collapses to the fourth level. Our study of OIPs reveals marked contrasts and some parallels with the classic Turing Machine theory of interactive proofs, establishes limits on the power of existing techniques for developing constant-round SIPs, and provides a new characterization of (non-online) Arthur–Merlin communication in terms of an online model.

Comment: Some of the results in this paper appeared in an earlier technical report (http://eccc.hpi-web.de/report/2013/180/). That report has been subsumed by this manuscript and an upcoming manuscript by Thaler titled "Semi-Streaming Algorithms for Annotated Graphs Streams".

Count-Min-Log sketch: Approximately counting with approximate counters by Guillaume Pitel, Geoffroy Fouquier

Count-Min Sketch is a widely adopted algorithm for approximate event counting in large scale processing. However, the original version of the Count-Min Sketch (CMS) suffers from some deficiencies, especially if one is interested in low-frequency items, such as in text-mining related tasks. Several variants of CMS have been proposed to compensate for the high relative error on low-frequency events, but the proposed solutions tend to correct the errors instead of preventing them. In this paper, we propose the Count-Min-Log sketch, which uses logarithm-based, approximate counters instead of linear counters to improve the average relative error of CMS at constant memory footprint.
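The idea is easy to prototype. This is my own simplified rendition (the parameters and the estimator are illustrative, not the authors' reference implementation): a Count-Min table whose cells hold Morris-style log counters, so a cell at value c stands for roughly (B**c - 1)/(B - 1) events and is incremented with probability B**(-c):

```python
import hashlib
import random

B = 1.1            # log base: closer to 1 = more accuracy, larger counters
W, D = 2048, 4     # table width and depth (number of hash rows)
table = [[0] * W for _ in range(D)]
_rng = random.Random(0)  # shared RNG, fixed seed for reproducibility

def _cells(key):
    """One cell per row, chosen by a row-salted hash of the key."""
    for row in range(D):
        h = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8)
        yield row, int.from_bytes(h.digest(), "big") % W

def add(key):
    for row, col in _cells(key):
        c = table[row][col]
        if _rng.random() < B ** (-c):      # increment with probability B^-c
            table[row][col] = c + 1

def estimate(key):
    c = min(table[row][col] for row, col in _cells(key))
    return (B ** c - 1.0) / (B - 1.0)      # invert the log-scale counter

for _ in range(1000):
    add("frequent")
add("rare")
print(estimate("rare"), round(estimate("frequent")))
```

Low counts stay accurate (the first increments are near-deterministic) while heavy hitters need only a few bits per cell, which is the memory/accuracy trade the paper formalizes.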
Image Credit: NASA/JPL-Caltech/Space Science Institute

N00238257.jpg was taken on March 28, 2015 and received on Earth March 31, 2015. The camera was pointing toward IAPETUS, and the image was taken using the BL1 and GRN filters. This image has not been validated or calibrated.


## Monday, April 20, 2015

### SLOPE is Adaptive to Unknown Sparsity and Asymptotically Minimax - implementation -

SLOPE is Adaptive to Unknown Sparsity and Asymptotically Minimax by Weijie Su, Emmanuel Candes
We consider high-dimensional sparse regression problems in which we observe $y = X \beta + z$, where $X$ is an $n \times p$ design matrix and $z$ is an $n$-dimensional vector of independent Gaussian errors, each with variance $\sigma^2$. Our focus is on the recently introduced SLOPE estimator (Bogdan et al., 2014), which regularizes the least-squares estimates with the rank-dependent penalty $\sum_{1 \le i \le p} \lambda_i |\hat \beta|_{(i)}$, where $|\hat \beta|_{(i)}$ is the $i$th largest magnitude of the fitted coefficients. Under Gaussian designs, where the entries of $X$ are i.i.d. $\mathcal{N}(0, 1/n)$, we show that SLOPE, with weights $\lambda_i$ just about equal to $\sigma \cdot \Phi^{-1}(1-iq/(2p))$ ($\Phi^{-1}(\alpha)$ is the $\alpha$th quantile of a standard normal and $q$ is a fixed number in $(0,1)$) achieves a squared error of estimation obeying $\sup_{\|\beta\|_0 \le k} \,\, \mathbb{P}\left(\| \hat\beta_{SLOPE} - \beta \|^2 > (1+\epsilon) \, 2\sigma^2 k \log(p/k) \right) \longrightarrow 0$ as the dimension $p$ increases to $\infty$, and where $\epsilon > 0$ is an arbitrary small constant. This holds under weak assumptions on the sparsity level $k$ and is sharp in the sense that this is the best possible error {\em any} estimator can achieve. A remarkable feature is that SLOPE does not require any knowledge of the degree of sparsity, and yet automatically adapts to yield optimal total squared errors over a wide range of sparsity classes. We are not aware of any other estimator with this property.
Code and data are available from the paper's webpage.
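To make the weight sequence concrete, here is a small computation of the lambda_i from the abstract, together with the sorted-l1 penalty they define. The sizes and the value of q are illustrative, and the quantile comes from the Python standard library rather than the authors' code:

```python
import numpy as np
from statistics import NormalDist

p, q, sigma = 1000, 0.1, 1.0
# BH-style decreasing weights: lambda_i = sigma * Phi^{-1}(1 - i*q/(2p)).
lam = sigma * np.array([NormalDist().inv_cdf(1.0 - i * q / (2 * p))
                        for i in range(1, p + 1)])

def slope_penalty(beta, lam):
    """Sorted-l1 norm: sum_i lam_i * |beta|_(i), largest magnitude first."""
    return float(np.sort(np.abs(beta))[::-1] @ lam)

beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.0]
print(round(float(lam[0]), 3), round(slope_penalty(beta, lam), 3))
```

Because the weights decrease, larger coefficients are penalized with larger lambda's, which is what lets SLOPE adapt to the unknown sparsity level.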


### Scale Up Nonlinear Component Analysis with Doubly Stochastic Gradients

Here is a mix of both stochastic gradient descent and random features to approximate kernel PCA and more...

Scale Up Nonlinear Component Analysis with Doubly Stochastic Gradients by Bo Xie, Yingyu Liang, Le Song

Nonlinear component analysis methods such as kernel Principal Component Analysis (KPCA) and kernel Canonical Correlation Analysis (KCCA) are widely used in machine learning, statistics and data analysis, and they serve as invaluable preprocessing tools for various purposes such as data exploration, dimension reduction and feature extraction.
However, existing algorithms for nonlinear component analysis cannot scale up to millions of data points due to prohibitive computation and memory requirements. There are some recent attempts to scale up kernel versions of component analysis using random feature approximations. However, to obtain high quality solutions, the number of required random features can be the same order of magnitude as the number of data points, making such approaches not directly applicable to the regime with millions of data points.
We propose a simple, computationally efficient, and memory friendly algorithm based on the "doubly stochastic gradients" to scale up a range of kernel nonlinear component analysis, such as kernel PCA, CCA, SVD and latent variable model estimation. Despite the non-convex nature of these problems, we are able to provide theoretical guarantees that the algorithm converges at the rate $\tilde{O}(1/t)$ to the global optimum, even for the top $k$ eigen subspace. We demonstrate the effectiveness and scalability of our algorithm on large scale synthetic and real world datasets.
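One ingredient the paper builds on, random (Fourier) feature approximations, is easy to demo. The sketch below (my own toy, not the authors' doubly stochastic algorithm) approximates a Gaussian kernel with an explicit feature map and then runs ordinary PCA in feature space as a cheap stand-in for kernel PCA:

```python
import numpy as np

rng = np.random.default_rng(3)

n, d, D, gamma = 300, 5, 2000, 0.5
X = rng.standard_normal((n, d))

# Random Fourier features: z(x)^T z(y) ~= exp(-gamma * ||x - y||^2).
W = np.sqrt(2 * gamma) * rng.standard_normal((d, D))
b = rng.uniform(0.0, 2.0 * np.pi, D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W + b)

K_true = np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))
print(round(float(np.abs(Z @ Z.T - K_true).max()), 2))  # small approximation error

# "Kernel" PCA becomes linear PCA on the random features.
Zc = Z - Z.mean(axis=0)
U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
top3 = Zc @ Vt[:3].T                       # leading nonlinear components
```

The paper's contribution is to avoid materializing Z at all by sampling both data points and random features at each gradient step, with convergence guarantees.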


## Saturday, April 18, 2015

### Saturday Morning Video: Towards a Learning Theory of Causation - Implementation -

Here is the video:

We pose causal inference as the problem of learning to classify probability distributions. In particular, we assume access to a collection {(S_i, l_i)}_{i=1}^n, where each S_i is a sample drawn from the probability distribution of X_i × Y_i, and l_i is a binary label indicating whether "X_i → Y_i" or "X_i ← Y_i". Given these data, we build a causal inference rule in two steps. First, we featurize each S_i using the kernel mean embedding associated with some characteristic kernel. Second, we train a binary classifier on such embeddings to distinguish between causal directions. We present generalization bounds showing the statistical consistency and learning rates of the proposed approach, and provide a simple implementation that achieves state-of-the-art cause-effect inference. Furthermore, we extend our ideas to infer causal relationships between more than two variables.
The code is here.
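The featurization step (kernel mean embeddings) can be sketched with random Fourier features. This toy, with my own synthetic cause-effect mechanism, only illustrates that embeddings of causal and anticausal samples separate, which is what the downstream binary classifier exploits:

```python
import numpy as np

rng = np.random.default_rng(4)

d, D, gamma = 2, 500, 1.0
W = np.sqrt(2 * gamma) * rng.standard_normal((d, D))
b = rng.uniform(0.0, 2.0 * np.pi, D)

def embed(S):
    """Empirical kernel mean embedding: mean of the feature map over the sample."""
    return (np.sqrt(2.0 / D) * np.cos(S @ W + b)).mean(axis=0)

def pair(n, causal=True):
    x = rng.standard_normal(n)
    y = x ** 2 + 0.1 * rng.standard_normal(n)   # the mechanism x -> y
    return np.column_stack([x, y] if causal else [y, x])

mu_causal = embed(pair(2000))
mu_anti = embed(pair(2000, causal=False))
print(round(float(np.linalg.norm(mu_causal - mu_anti)), 3))
```

In the paper, many such labeled embeddings are fed to a standard classifier; the distance printed here is essentially an MMD between the two orientations of the same scatter.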


### Three biographies: Ken Case, Charles Johnson and Leo Breiman

Sometimes it is nice to get some context on things that happened in the past. Here are three biographies that I read recently which fit that bill.
