In the spirit of Reddit, Google+, LinkedIn as Peer Review Systems, here are various recent interactions of note (see the update at the end of this entry). After publishing Sunday Morning Insight: Phase Transitions and Eigen-gaps as Roadmaps, I received the following from Lenka Zdeborova:

Dear Igor,

At this late hour I managed to catch up with Nuit Blanche. I would like to point out that the results of the paper by Junan Zhu and Dror Baron were already included in our paper Probabilistic Reconstruction in Compressed Sensing: Algorithms, Phase Diagrams, and Threshold Achieving Matrices, Section V.C, and we indeed showed that in that case too the seeded matrices serve to get close to optimality, Section VI.C. The noisy case is included in the Donoho, Javanmard, Montanari proof of optimality with seeding/spatial coupling. I actually just wrote about this to Junan and Dror before reading today's Nuit Blanche.

Best!

Paul Shearer also provided some additional insight on Raj Rao's preprint on the Google+ community forum.

Following up on last week's Sunday Morning Insight: Stripe Physics, Wavelets and Compressive Sensing Solvers, Phil Schniter wrote the following email:

Igor,

I want to clarify that there are some problems with the paragraph from [4] that you quoted below. In particular, the line

The AMP was generalized for general signal models in [5], [6] and called G-AMP. The algorithm used in [3], [7] is equivalent to G-AMP.

is incorrect. The "G-AMP" algorithm, as introduced by Rangan in arXiv:1010.5141, is a considerable generalization of the "Bayesian AMP" algorithm proposed earlier by Donoho, Maleki, and Montanari at ITW 2010 (or [5] above). G-AMP is distinct from Bayesian AMP because G-AMP handles nonlinear observations such as those in logistic regression, phase retrieval, and reconstruction from quantized samples. Also, G-AMP assumes less strict conditions on the homogeneity of the measurement matrix than Donoho, Maleki, and Montanari's algorithms. Donoho, Maleki, and Montanari deserve credit for the Bayesian AMP algorithm, while Rangan deserves credit for the G-AMP algorithm, and the two should not be confused.

Thanks,

Phil
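For readers who have not seen an AMP iteration written out, here is a minimal sketch of soft-thresholding AMP in the style of Donoho, Maleki, and Montanari (not code from either paper; the threshold rule and all parameter values are illustrative choices). It shows the scalar-denoiser-plus-Onsager-correction structure that Bayesian AMP builds on by swapping in other denoisers, and that G-AMP further generalizes on the observation side:

```python
import numpy as np

def soft(r, tau):
    """Soft-thresholding: the scalar denoiser in l1-style AMP."""
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

def amp(A, y, n_iter=50, theta=1.5):
    """Soft-thresholding AMP for y = A x with sparse x.

    theta (threshold multiplier) is a tuning knob, not a canonical value.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                       # pseudo-data
        tau = theta * np.sqrt(np.mean(z**2))  # threshold set from residual energy
        x = soft(r, tau)
        onsager = (np.count_nonzero(x) / m) * z
        z = y - A @ x + onsager               # residual with Onsager correction
    return x

# Small demo: recover a 10-sparse vector from 100 Gaussian measurements.
rng = np.random.default_rng(0)
m, n, k = 100, 200, 10
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x0
x_hat = amp(A, y)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```

The Onsager term is what distinguishes AMP from plain iterative thresholding; dropping it degrades the fixed point and breaks the state-evolution prediction.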

In a different direction, you may have noticed that the paper mentioned in That Netflix RMSE is way too low or is it ? (Clustering-Based Matrix Factorization - implementation -) has now been removed from arXiv by the author. We all learned something in the process. For more info, check This Week's Guardians of Science: Zeno Gantner and Peyman Milanfar.

From the comment sections of some recent blog entries, here are some items of note; you may be interested in continuing those discussions:

In Breaking the coherence barrier: asymptotic incoherence and asymptotic sparsity in compressed sensing, Dick Gordon commented that:

It might be time to combine the literature of adaptive neighborhoods with compressive sensing. See:

Gordon, R. & R.M. Rangayyan (1984). Feature enhancement of film mammograms using fixed and adaptive neighborhoods. Applied Optics 23(4), 560-564.

which kicked it off. Now about 200 papers, but none on compressive sensing.

In A Randomized Parallel Algorithm with Run Time $O(n^2)$ for Solving an $n \times n$ System of Linear Equations, Antti Lipponen commented that:

Hi all. I implemented Fliege's algorithm for real-valued linear systems in C++ for Matlab (based on Fliege's paper, and I also took some ideas from Riccardo's and Benjamin's code). However, I cannot get the performance even close to Matlab's backslash operator (probably I am using too small matrices, or my code is not optimal, or...). If someone is interested in my code, it can be found at: http://pastebin.com/XpSaTqPQ

All suggestions and comments are welcome!
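For readers who want to try the kind of comparison Antti describes without the C++/Matlab setup, here is a minimal sketch in Python. It does not implement Fliege's algorithm; it uses randomized Kaczmarz (with Strohmer-Vershynin row sampling) as a stand-in randomized iterative solver, compared against a direct least-squares solve. All sizes and iteration counts are illustrative:

```python
import numpy as np

def randomized_kaczmarz(A, y, n_iter=20000, seed=0):
    """Randomized Kaczmarz: project onto one randomly chosen equation per step.

    Rows are sampled with probability proportional to their squared norm.
    Assumes the system A x = y is consistent.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_sq = np.einsum("ij,ij->i", A, A)   # squared row norms
    probs = row_sq / row_sq.sum()
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        x += (y[i] - A[i] @ x) / row_sq[i] * A[i]
    return x

# Compare against a direct solve on a consistent overdetermined system.
rng = np.random.default_rng(1)
m, n = 100, 50
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = A @ x_true
x_rk = randomized_kaczmarz(A, y)
x_direct, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.linalg.norm(x_rk - x_true) / np.linalg.norm(x_true))
```

As with Antti's experiment, a direct solve will usually win at these sizes; randomized per-row methods pay off when the matrix is too large to factor or only row access is available.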

In Linear Bandits in High Dimension and Recommendation Systems, Yash Deshpande commented:

Hi Igor,

Just to clarify things a bit: the presentation deals with two essentially independent challenges in recommendation systems: privacy and interactivity. The paper you link deals strictly with the latter. As for the former, perhaps the following would be more useful. The paper detailing the work on the 2011 CAMrA challenge is available here:

Earlier work on matrix completion is detailed in papers with Raghu Keshavan. His thesis contains all the necessary references.

In Sudoku, Compressive Sensing and Thermodynamics, an anonymous commenter said:

There's an earlier paper proposing this idea:

In Tolerance to Ambiguity: Sparsity and the Bayesian Perspective, Mario Figueiredo commented:

Hi,

This is an interesting paper. However, I think that there is something important missing in this discussion: the loss function. Any (Bayesian) point estimate is the minimizer of the posterior expectation of some loss; in particular, the MAP is the minimizer of the posterior expectation of the 0/1 loss (in fact, it is the limit of a family of estimates, but that can be ignored in this discussion). Accordingly, there is no reason whatsoever why the distribution of MAP estimates has to be similar to the prior; why would it? Furthermore, for a MAP estimate to yield "correct results" (whatever "correct" means), there is no reason why typical samples from the prior should look like those "correct results".

In fact, the compressed sensing (CS) example in Section 3.3 of the paper illustrates this quite clearly: (a) CS theory guarantees that solving (4) or (5) yields the "correct" solution; (b) as explained in Section 3.1, (5) is the MAP estimate of x under a Laplacian prior (and the linear-Gaussian likelihood therein explained); (c) thus, the solution of (5) is the minimizer of the posterior expectation of the 0/1 loss under the likelihood and prior just mentioned; (d) in conclusion, if the underlying vectors are exactly sparse enough (obviously not typical samples of a Laplacian) they can be recovered by computing the MAP estimate under a Laplacian prior, that is, by computing the minimizer of the posterior expectation of the 0/1 loss. This is simply a fact.

There is nothing surprising here: the message is that the prior is only half of the story, and it doesn't make sense to look at a prior without looking also at the loss function. In (Bayesian) point estimation, a prior is "good" not if it describes well the underlying objects to be estimated, but if, used (in combination with the likelihood function and observations) to obtain a minimizer of the posterior expectation of some loss, it leads to "good" estimates.

Regards,
Mario Figueiredo.
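Mario's CS point can be seen numerically: minimizing the l1-regularized least-squares objective (the MAP estimate under a Laplacian prior and Gaussian likelihood) recovers an exactly sparse vector, even though such a vector is not a typical sample from the Laplacian prior. Here is a minimal sketch using ISTA (iterative soft-thresholding); the problem sizes and the regularization weight are illustrative choices, not taken from the paper under discussion:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=2000):
    """ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1 (Laplacian-prior MAP)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + step * A.T @ (y - A @ x), step * lam)
    return x

# An exactly 5-sparse vector: not a typical Laplacian sample, yet recovered.
rng = np.random.default_rng(0)
m, n, k = 50, 100, 5
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x0
x_hat = ista(A, y, lam=0.01)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```

With a small regularization weight on a noiseless problem, the estimate lands very close to the exactly sparse truth, which is exactly the "prior is only half the story" point: the Laplacian prior works here as an ingredient of the estimator, not as a generative description of the signal.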

I think the feedback system in place is already working pretty well...

Update 1:

Thomas Arildsen also mentioned the following in the comments of Reddit, Google+, LinkedIn as Peer Review Systems:

Hi Igor,

Thanks for your many suggestions and initiatives, which I think can all help us a bit further in the right direction. Still, we really need some centralised place to organize this. Ideally, an open review and open-access platform with the potential to replace a traditional journal. It would be great to have a journal like PeerJ for signal processing & friends. I am keeping a close eye on the episciences project in the hope that it answers my prayers (episciences.org).

An overlay to e.g. arXiv would be a useful solution as well. For example, Pierre Vandergheynst has gathered some constructive thoughts on this approach on Google+ recently. Maybe the recently appeared PubPeer site could fill this role (pubpeer.com)?

I also noticed PubPeer and it looks like a lightweight but potentially powerful solution.

**Image Credit:** NASA/JPL-Caltech

This image was taken by Navcam: Right A (NAV_RIGHT_A) onboard NASA's Mars rover Curiosity on Sol 196 (2013-02-23 12:30:02 UTC).

Full Resolution

**Join the CompressiveSensing subreddit or the Google+ Community and post there!**

Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
