In "GraphLab workshop: Why should you care?", I listed a series of measurement matrices that are sparse, as they may hold the key to faster reconstruction algorithms once you use cloud computing. The underlying idea is that, thanks to their sparsity, solvers could rely on the GraphLab framework to sparsify the computations and distribute them optimally over several CPUs. With that in mind, I listed the following ensembles:
- The RIP(1) ensembles of Indyk et al
- The sparse Johnson-Lindenstrauss Transforms (see a recent example here),
- Achlioptas' Database Friendly Random Projections
- The "magic" Matrices of Krzakala et al (see also their recent extension)
- The Light Chinese Design
- The LDPC deterministic construction of Dimakis et al [Thanks Phil Schniter]
- Binary Incoherent Matrices of Bailey, Iwen and Spencer (these matrices are sparse when multiplied with a discrete Fourier transform matrix)
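Among these, Achlioptas' construction is simple enough to write down. Below is a minimal sketch of it in Python with scipy.sparse; this is my own illustration rather than code from any of the papers above, and the sqrt(3/m) scaling is my assumption, chosen to keep the projection roughly norm-preserving:

```python
# My own minimal sketch (not code from any of the papers above) of
# Achlioptas' database-friendly random projection: entries are drawn
# from {-1, 0, +1} with probabilities 1/6, 2/3, 1/6, so about two
# thirds of the matrix is exactly zero. The sqrt(3/m) scaling is an
# assumption on my part.
import numpy as np
from scipy import sparse

def achlioptas_matrix(m, n, seed=0):
    rng = np.random.default_rng(seed)
    entries = rng.choice([-1.0, 0.0, 1.0], size=(m, n), p=[1/6, 2/3, 1/6])
    return sparse.csr_matrix(np.sqrt(3.0 / m) * entries)

A = achlioptas_matrix(200, 10_000)
print(f"fraction of nonzero entries: {A.nnz / (200 * 10_000):.3f}")  # ~1/3
```

With two thirds of the entries exactly zero, this is the kind of structure a GraphLab-style solver can exploit to cut both storage and communication.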
Let us note that these families include matrices with both types of elements: 0/1 and real numbers. On the numerical side, here are the obvious pros for these measurement matrices:
- Be it in an instance of GraphLab, or in algorithms such as the Multiplicative Updates currently investigated by Bob Sturm on his blog, it is always better if the reconstruction algorithm can take advantage of the sparsity of the measurement matrix: we always want faster solvers (see the timing sketch after this list).
- Another numerical insight is given by the recent work of Andrew Waters, Aswin Sankaranarayanan, and Rich Baraniuk (SpaRCS: Recovering Low-rank and Sparse Matrices from Compressive Measurements), whereby random projections make certain operations, like PCA (in that case, robust PCA), tractable. Sparse measurement matrices enable the rapid transformation of a potentially huge problem into a smaller, easily solvable one. The same applies to many other randomized algorithms.
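To make the speed argument concrete, here is a small timing sketch (mine, not from either work; the sizes and the density are arbitrary assumptions) comparing dense and sparse matrix-vector products, the operation that dominates most iterative solvers:

```python
# A hedged timing sketch (mine; sizes and density are arbitrary
# assumptions) of why solvers benefit: the dominant cost of most
# iterative reconstruction algorithms is the product y = A @ x,
# and a sparse A makes that product proportionally cheaper.
import time
import numpy as np
from scipy import sparse

m, n, density = 2_000, 50_000, 0.01
A_sparse = sparse.random(m, n, density=density, format="csr", random_state=0)
A_dense = A_sparse.toarray()
x = np.random.default_rng(0).standard_normal(n)

t0 = time.perf_counter()
y_dense = A_dense @ x
t_dense = time.perf_counter() - t0

t0 = time.perf_counter()
y_sparse = A_sparse @ x
t_sparse = time.perf_counter() - t0

print(f"dense matvec: {t_dense:.4f} s, sparse matvec: {t_sparse:.4f} s")
```

On typical hardware the sparse product wins by roughly the inverse of the density, which is the speedup an iterative solver would see at every iteration.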
Sparse measurement matrices are indeed useful when it comes to numerical issues, but the real question remains: is there any other part of the process for which these matrices are useful? How do they instantiate in real hardware, and why would you map a sparse measurement matrix to a particular sensor or measurement process?
Whenever you think of compressive sensing, on the acquisition side you have to think about its main function: multiplexing. Hence, the case for sparse measurement matrices comes down to the pros and cons of sparse multiplexing. Here is a list:
Cons:
- In the case of coded aperture or any DMD-based system like the single pixel camera: less light is collected with each measurement, yielding (Poisson) noise issues.
- Coefficients must generally be 0 or 1 in order to have an immediate hardware implementation (some families of sparse measurement matrices do fulfill that condition, however).
- If the multiplexing technology is not accurate enough, any (multiplicative noise) error on one coefficient will immediately result in larger errors, since each of the few nonzero coefficients carries more weight.
Pros:
- Storage of the measurement matrix is reduced, thereby enabling these mixing matrices to run on embedded sensors without much design effort for accessing the coefficients of the matrix,
- If the multiplexing technology is accurate enough, there are fewer coefficients on which errors can occur (and therefore less multiplicative noise),
- Potentially fewer measurements,
- Some families of sparse measurement matrices have 0/1 coefficients and therefore lend themselves to an immediate hardware implementation,
- Measurements are less likely to have clipping issues, thereby enabling low dynamic range sensors,
- By reducing the number of elements being multiplexed, one preserves the linearity of the function being investigated and keeps the number of automated mixing operations low. Take the example of the pooling design ([1][2]): fewer elements being mixed means a smaller likelihood of nonlinear interaction between reagents and fewer robotic mixing operations (a toy sketch follows this list).
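As a toy illustration of that last point (my sketch, not code from QUAPO or poolMC), here is a 0/1 pooling matrix in which each sample is assigned to only k pools; k = 3 and the sizes are arbitrary assumptions:

```python
# A toy sketch of the pooling argument (my illustration, not code from
# QUAPO or poolMC): a 0/1 pooling matrix where each sample is assigned
# to only k pools, so each physical pool mixes few reagents and the
# robot performs few pipetting steps. k=3 and the sizes are arbitrary.
import numpy as np
from scipy import sparse

def pooling_matrix(n_pools, n_samples, k=3, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = [], []
    for j in range(n_samples):
        # Assign sample j to k distinct pools, chosen uniformly at random.
        for i in rng.choice(n_pools, size=k, replace=False):
            rows.append(i)
            cols.append(j)
    data = np.ones(len(rows))
    return sparse.csr_matrix((data, (rows, cols)), shape=(n_pools, n_samples))

P = pooling_matrix(30, 300, k=3)
print("largest number of samples mixed in one pool:", int(P.sum(axis=1).max()))
```

Each column has only k ones: that is the sparsity that keeps the chemistry linear and the robot's workload low.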
References:
[1] QUAPO: Quantitative Analysis of Pooling in High-Throughput Drug Screening, a presentation by Raghu Kainkaryam (a joint work with Anna Gilbert, Paul Shearer and Peter Woolf).
[2] poolMC: Smart pooling of mRNA samples in microarray experiments (supplementary material here), by Raghu Kainkaryam, Angela Bruex, Anna Gilbert, Peter Woolf and John Schiefelbein.
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.