Monday, March 17, 2014

Quantum Computing, P vs NP, Compressive Sensing 3.0 and dealing with subdiffraction limits

At the last Paris Machine Learning Meetup, I gave a presentation on Advanced Matrix Factorizations, Machine Learning and all that; it was initially an add-on to another presentation, given by somebody else, on quantum computers. See, many people are interested in that subject (quantum computers, or QC) without knowing much about the underlying reason why. The generic thought is that with quantum computing, one should be able to solve NP-hard problems.

In the presentation, I made the case that many of the algorithms we use in either Machine Learning or compressive sensing eventually rely on different kinds of matrix factorizations, which are themselves already well attacked with polynomial-time algorithms. So while QC might be interesting, it might remain a niche research area for a long while, because of a combination of that point and Moore's law (see the last slide of [3]). After saying this, I came across the following blog entry by Ken Regan and Dick Lipton: Could We Have Felt Evidence For SDP ≠ P?
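
To make that point concrete: the workhorse behind many of these methods is a truncated SVD, computable in polynomial time (roughly O(mn²) for an m×n matrix). Here is a minimal sketch in NumPy; the dimensions and data are arbitrary toy choices, not anything from the talk:

```python
import numpy as np

# Toy illustration: a rank-r approximation via the SVD, the
# polynomial-time factorization underlying many ML / compressive
# sensing solvers.
rng = np.random.default_rng(0)
M = rng.standard_normal((500, 200))               # arbitrary dense matrix
U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = 10
M_r = (U[:, :r] * s[:r]) @ Vt[:r, :]              # best rank-r approximation (Eckart-Young)
print(np.linalg.norm(M - M_r) / np.linalg.norm(M))
```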

I particularly liked this part of the entry:

"....Aram’s Bridging Thoughts
Quoting Aram, with light editing: “One of the stronger reasons for P ≠ NP is the Bayesian one—it is easier to find algorithms than lower bounds, so our failure to find a subexponential-time algorithm for 3SAT speaks louder than our failure to find a super-linear lower bound for 3SAT. A related way of expressing this is that before NP-completeness was understood, thousands of researchers in disparate fields were unwittingly all trying to put 3SAT into P.

But a counter-argument is that all of those seemingly independent researchers would always come up with algorithms that relied on a few kinds of structure—a lot is covered by just two kinds:
  • Convexity/absence of local minima—cases where greedy algorithms work.
  • Tree structure—cases where dynamic programming, divide-and-conquer, and related ideas work.
This paucity of algorithmic variety can be viewed in (at least) two ways:
  1. Algorithms, like oil deposits, are a finite resource. We’ve found all the important ones, and 3SAT is always going to require exponential time.
  2. The paucity is a sociological phenomenon, due to the fact that all researchers have learned essentially similar math backgrounds.
On the latter, Terry Tao’s recent breakthrough on the Navier-Stokes equations is an example of how much the same ideas keep recirculating, and how much more quickly progress can be made by cross-applying ideas rather than coming up with radically new ones. Going from Erwin Schrödinger’s equation to Peter Shor’s quantum factoring algorithm is a 60-minute lecture, but it took over 60 years (and a change in perspective coming from the computer revolution) to discover. Our lack of algorithms reveals only our lack of creativity, and it is arrogant to posit fundamental limits to mathematics just because we can’t solve a problem. Either way, the central question isn’t so much about NP but rather a “throwback” of a question now being asked about quantum computers:
Where does the power of deterministic algorithms come from?
A related question is the power of the Lasserre hierarchy. It has been shown to be effective for a large number of problems, but with a surprisingly small number of truly different techniques. I would love to know whether further work will increase or decrease the number of ways in which we know how to use it; that is, either by discovering new methods or by unifying apparently different methods...."
It is interesting to note that tree structures and greedy algorithms are central to much of compressive sensing and to some ML algorithms. Are we lazy?
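
To make the greedy point concrete, here is a minimal sketch of Orthogonal Matching Pursuit, the textbook greedy algorithm of compressive sensing; the sensing matrix A, sparsity level k and toy data below are illustrative assumptions, not a reference implementation:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit (sketch): greedily pick the column of A
    most correlated with the residual, then re-fit by least squares on
    the selected support."""
    m, n = A.shape
    support = []
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(k):
        # Greedy step: column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Projection step: least-squares fit on the current support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x

# Toy usage: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 17, 60]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k=3)
```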

One may want to re-read some of the blog entries of Afonso Bandeira, as featured in this entry a while back. To put this in context, one might also want to read [1, 2].

In a different direction, Dustin Mixon just wrote about what he calls Compressive Sensing 3.0, but I think we already have a name for it: blind deconvolution / calibration.
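
For context, blind deconvolution / calibration is a bilinear inverse problem, and a standard convex route (in the spirit of Ahmed, Recht and Romberg's lifting approach; the notation below is my own sketch) looks like this:

```latex
% Observations: a convolution of two unknown signals confined to known
% subspaces, w = Bh and x = Cm, with y = w * x. Each measurement is
% bilinear in (h, m), hence linear in the lifted rank-one matrix
% X = h m^T, and one minimizes the nuclear norm (the convex surrogate
% for rank) subject to those linear measurements:
\min_{X \in \mathbb{R}^{K \times N}} \; \|X\|_{*}
\quad \text{s.t.} \quad \mathcal{A}(X) = y .
```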

Finally, I just came across this description of a CFA architectures review on Vladimir's blog, with this picture:

[Figure: comparison of CFA (color filter array) architectures; the second solution shown uses micro-color splitters]
If you look at the second solution, micro-color splitters, it looks to me as if people are trying to use multiplexing to get around the impending subdiffraction limits [5], and that clearly points to compressive sensing solutions eventually.



References:

[1] Sunday Morning Insight: The Map Makers
[2]  Sunday Morning Insight: Watching P vs NP
[3] Slides: Paris Machine Learning Meetup presentations (Meetup #2 through #9)
[4] The Lasserre hierarchy in Approximation algorithms by Thomas Rothvoß

The Lasserre hierarchy is a systematic procedure to strengthen a relaxation for an optimization problem by adding additional variables and SDP constraints. In recent years this hierarchy has moved into the focus of researchers in approximation algorithms, as the obtained relaxations have provably nice properties. In particular, on the t-th level the relaxation can be solved in time n^{O(t)}, and every constraint that one could derive from looking at just t variables is automatically satisfied. Additionally, it provides a vector embedding of events so that probabilities are expressible as inner products.
The goal of these lecture notes is to give short but rigorous proofs of all key properties of the Lasserre hierarchy. In the second part we demonstrate how the Lasserre SDP can be applied to (mostly NP-hard) optimization problems such as KNAPSACK, MATCHING, MAXCUT (in general and in dense graphs), 3-COLORING and SET COVER.
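
As a quick sketch of what that means for a 0/1 program (standard notation, not copied from the notes themselves): at level t one introduces a variable y_S for every index set S of size at most 2t, standing in for the "probability" that all variables in S equal 1, and imposes positive semidefiniteness of the moment matrix:

```latex
% Level-t Lasserre relaxation of a 0/1 program over x in {0,1}^n.
% Variables y_S for all S \subseteq [n] with |S| \le 2t and y_\emptyset = 1,
% where y_S plays the role of E[ \prod_{i \in S} x_i ]:
M_t(y) \;=\; \big( y_{S \cup T} \big)_{|S|,\,|T| \le t} \;\succeq\; 0 ,
% with an analogous "localizing" PSD matrix for each constraint of the
% program; the result has n^{O(t)} variables and solves in time n^{O(t)}.
```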

[5] Natural Multiplexing in very small CMOS architecture.


Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
