You probably recall a recent long post mentioning this paper: The Phase Transition of Matrix Recovery from Gaussian Measurements Matches the Minimax MSE of Matrix Denoising, in which the authors extended the sparse recovery phase-transition analysis to the low-rank recovery setting. Well, Matan just let me know that the attendant PNAS paper is now out. What is also interesting is the extent to which the authors wanted to share their code and data. The RunMyCode page is here, while the Code and Data repository at Stanford is here. Universities are known to be repositories of documents but rarely of code and data, so good for them; I see Stanford is leading the way once again. Also of note is the use of RunMyCode, an entity set up by Victoria Stodden (who is not unknown in our field). We all know by now that sharing code and data is like compound interest. Without further ado, here are the important paper and code you should not miss:

The Phase Transition of Matrix Recovery from Gaussian Measurements Matches the Minimax MSE of Matrix Denoising by David L. Donoho, Matan Gavish, Andrea Montanari. The abstract reads:

Let $X_0$ be an unknown $M$ by $N$ matrix. In matrix recovery, one takes $n < MN$ linear measurements $y_1, \dots, y_n$ of $X_0$, where $y_i = \mathrm{Tr}(a_i^T X_0)$ and each $a_i$ is an $M$ by $N$ matrix. For measurement matrices with Gaussian i.i.d. entries, it is known that if $X_0$ is of low rank, it is recoverable from just a few measurements. A popular approach for matrix recovery is Nuclear Norm Minimization (NNM). Empirical work reveals a \emph{phase transition} curve, stated in terms of the undersampling fraction $\delta(n,M,N) = n/(MN)$, rank fraction $\rho = r/N$ and aspect ratio $\beta = M/N$. Specifically, a curve $\delta^* = \delta^*(\rho;\beta)$ exists such that, if $\delta > \delta^*(\rho;\beta)$, NNM typically succeeds, while if $\delta < \delta^*(\rho;\beta)$, it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, where an unknown $M$ by $N$ matrix $X_0$ is to be estimated based on direct noisy measurements $Y = X_0 + Z$, where the matrix $Z$ has i.i.d. Gaussian entries. It has been empirically observed that, if $X_0$ has low rank, it may be recovered quite accurately from the noisy measurement $Y$. A popular matrix denoising scheme solves the unconstrained optimization problem $\min \| Y - X \|_F^2/2 + \lambda \|X\|_*$. When optimally tuned, this scheme achieves the asymptotic minimax MSE $\mathcal{M}(\rho) = \lim_{N \to \infty} \inf_\lambda \sup_{\mathrm{rank}(X) \leq \rho \cdot N} \mathrm{MSE}(X, \hat{X}_\lambda)$. We report extensive experiments showing that the phase transition $\delta^*(\rho)$ in the first problem coincides with the minimax risk curve $\mathcal{M}(\rho)$ in the second problem, for \emph{any} rank fraction $0 < \rho < 1$.

The conclusion of this important paper states:

For the problem of matrix recovery from Gaussian measurements, our experiments, as well as those of others, document the existence of a finite-N phase transition. We compared our measured empirical phase transition curve with a formula from the theory of matrix denoising and observed a compelling match. Although the matrix recovery and matrix denoising problems are superficially different, this match evidences a deeper connection, such that mean squared error properties of a denoiser in a noise removal problem give precisely the exact recovery properties of a matrix recovery rule in a noiseless, but incomplete data problem. This connection suggests both new limits on what is possible in the matrix recovery problem, but also new ways of trying to reach those limits.
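For readers who want to experiment before diving into the released code, the denoising scheme in the abstract has a well-known closed form: soft-thresholding of the singular values of $Y$. Here is a minimal NumPy sketch on a toy rank-1 example; the threshold $\lambda = 2\sqrt{N}$ is a heuristic choice for illustration, not the optimally tuned $\lambda$ from the paper:

```python
import numpy as np

def svt_denoise(Y, lam):
    """Singular-value soft-thresholding: the closed-form minimizer of
    min_X ||Y - X||_F^2 / 2 + lam * ||X||_*."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

# Toy experiment: a rank-1 signal corrupted by i.i.d. Gaussian noise.
rng = np.random.default_rng(0)
M, N, r = 50, 50, 1
X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))  # low-rank signal
Y = X0 + rng.standard_normal((M, N))                            # noisy observation
X_hat = svt_denoise(Y, lam=2 * np.sqrt(N))  # heuristic threshold, not optimally tuned
err_noisy = np.linalg.norm(Y - X0, "fro")
err_denoised = np.linalg.norm(X_hat - X0, "fro")
```

On such examples the thresholded estimate recovers the low-rank signal far more accurately than the raw observation, which is the empirical phenomenon the paper's minimax analysis quantifies.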

On an unrelated matter, one notes that:

- Victoria is seeking an Associate Research Scientist for RunMyCode.org
- Andrea is looking for an RA "with experience in user interface design using HTML5 and JavaScript."

**Join the CompressiveSensing subreddit or the Google+ Community and post there !**

Liked this entry? Subscribe to Nuit Blanche's feed; there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
