Friday, March 16, 2012

A small Q&A with Danny Bickson on GraphLab

Following up on the comments on yesterday's entry about the first GraphLab workshop, I put a question to Danny Bickson about GraphLab:

Question:

To summarize, the reason GraphLab is interesting for any iterative algorithm is when, in the iteration stage, one can use a sparse matrix to provide an update for some intermediate step. That sparsity is handled and "mapped" onto the CPU arrangement, and there is no need to tinker with how these CPUs talk to each other, right?

Danny kindly responded with:
GraphLab supports many types of iterative algorithms. The "edges", which in your case are the non-zero entries of the sparse matrix, define the correlation between variables. If the variables are not correlated, it is OK to run them on a few CPUs in parallel.
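Danny's point can be illustrated with a small sketch (mine, not GraphLab's API): treat the non-zero entries of a sparse matrix as graph edges, then group variables so that no two variables in the same group share an edge. Each group is a set of uncorrelated variables that could be updated in parallel on separate CPUs. The greedy coloring below is just one illustrative way to find such groups.

```python
# Illustrative sketch only -- not GraphLab code. The "edges" are the
# non-zero off-diagonal entries of a sparse matrix; variables that share
# no edge are uncorrelated and can be updated in the same parallel step.

def independent_groups(n, edges):
    """Partition variables 0..n-1 into groups with no internal edges,
    using a simple greedy graph coloring."""
    neighbors = {i: set() for i in range(n)}
    for i, j in edges:
        neighbors[i].add(j)
        neighbors[j].add(i)
    color = {}
    for v in range(n):
        taken = {color[u] for u in neighbors[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    groups = {}
    for v, c in color.items():
        groups.setdefault(c, []).append(v)
    return list(groups.values())

# A 4x4 sparse matrix with non-zeros at (0,1) and (2,3):
# variables {0, 2} and {1, 3} can each be updated in parallel.
print(independent_groups(4, [(0, 1), (2, 3)]))  # -> [[0, 2], [1, 3]]
```

The denser the matrix, the more colors are needed and the less parallelism is available, which is exactly why sparsity is what makes this model pay off.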
OK, so to extend on yesterday's entry (GraphLab workshop, why you should care): using the right kind of measurement matrix in compressive sensing problems, one could obviously implement not just the BP algorithm but all the others as well, including the greedy ones, the iterated thresholding ones, and l_0-like solvers such as SL0. One could maybe even re-investigate the use of GPUs since, I think, the cloud is becoming quite competitive with GPU solutions (see the GraphLab instance on Amazon EC2), unless one can dabble in a GPU cloud. Here is a sample of the solvers I listed in the Compressive Sensing Hardware list (it needs updating, I know).
For the robust PCA of, say, videos, I am sure that a similar approach could be undertaken by using the right kind of projection, as in the case of SpaRCS: Recovering Low-rank and Sparse matrices from Compressive Measurements.

Eventually, while I don't see them there yet, it would seem obvious to me that the giants in MRI and CT should attend the workshop, as it seems clear that cloud computing is bound to disrupt some of the reconstruction business. We'll see.


Thanks Danny !
