Yes, I have tried simple linear classifiers (e.g., LDA, regularized least squares, nearest neighbor, etc.) and they do work well, pretty much as well as SVMs, especially with more training data... Let me bring up one point, though. I do not think the "random projection" aspect of our approach is what makes it work well. What is really key in the architecture are the ideas of invariance and hierarchy, both of which are absent from the random projection literature.
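To make that comparison concrete, here is a minimal sketch of training simple linear classifiers and an SVM on randomly projected data. The dataset, the scikit-learn estimators, and the projection dimension are assumptions made for illustration, not the setup of the quoted experiments:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.random_projection import GaussianRandomProjection
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import RidgeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
# Randomly project the 64-pixel digit images down to 30 dimensions.
X_proj = GaussianRandomProjection(n_components=30, random_state=0).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_proj, y, test_size=0.3, random_state=0)

# Simple linear classifiers vs. an SVM, all on the same projected features.
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("Regularized least squares", RidgeClassifier(alpha=1.0)),
                  ("Nearest neighbor", KNeighborsClassifier(n_neighbors=1)),
                  ("SVM (RBF)", SVC())]:
    score = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name:>26}: {score:.3f}")
```

This is the kind of comparison the quote reports: with enough training data, the simple classifiers come close to the SVM on random-projection features.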
Invariance and hierarchy are absent for the moment, but their time will come. For instance, the current work by Marco Duarte, Mark Davenport, Michael Wakin, Jason Laska, Dharmpal Takhar, Kevin Kelly, and Richard Baraniuk, Multiscale Random Projections for Compressive Classification, provides a decomposition similar to the one performed by the primary visual cortex at several scales. As Marco Duarte points out in the comment section, this is to say that the random measurements are obtained at different resolutions, so as to get projections of the different regularizations of the target image.
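A minimal sketch of that multiscale idea, assuming Gaussian smoothing for the regularizations and a dense Gaussian measurement matrix at each scale (both are assumptions for illustration, not the construction of the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for the target image

def multiscale_random_projections(img, sigmas=(0.0, 1.0, 2.0, 4.0), m=50):
    """Random measurements of successively regularized (smoothed) images."""
    measurements = []
    for sigma in sigmas:
        smoothed = gaussian_filter(img, sigma=sigma)  # sigma=0 leaves img as-is
        x = smoothed.ravel()
        Phi = rng.standard_normal((m, x.size)) / np.sqrt(m)  # Gaussian measurement matrix
        measurements.append(Phi @ x)
    return measurements  # one measurement vector per scale

ys = multiscale_random_projections(image)
print([y.shape for y in ys])  # [(50,), (50,), (50,), (50,)]
```

Coarse scales thus yield measurements of heavily regularized versions of the image, while finer scales add detail.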
And while shift invariance is traditionally provided by going to the Fourier domain, as in Shift-Invariant Sparse Coding for Audio Classification by Roger Grosse, Rajat Raina, Helen Kwong, and Andrew Ng, Thomas Serre's model of the primary visual cortex provides a similar robustness to shift, scale, and rotation through the use of a hierarchical model. The only hierarchical model I know of in compressed sensing is the one using the now-famous random filters described by Joel Tropp, Michael Wakin, Marco Duarte, Dror Baron, and Richard Baraniuk in Random Filters for Compressive Sampling and Reconstruction. I have not seen any follow-up to that fascinating work; the results seem promising, yet they remain in 1-D. In light of all this, one has to ask: would hierarchical compressed sensing make sense?
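For what the random-filter measurements look like in 1-D, here is a hedged sketch: convolve the signal with a short random FIR filter, then downsample. The filter length and the decimation rate are illustrative assumptions, not the parameters of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_filter_measurements(x, filter_len=16, decimate=4):
    """Compressive sampling with a random filter: convolve, then downsample."""
    h = rng.standard_normal(filter_len)      # random FIR filter
    y = np.convolve(x, h, mode="full")       # filter the signal
    return y[::decimate]                     # keep every `decimate`-th sample

x = rng.standard_normal(256)                 # a 1-D test signal
y = random_filter_measurements(x)
print(x.size, "->", y.size)                  # 256 -> 68
```

Only the measurement side is sketched here; recovery is then posed as a sparse inverse problem against the implied convolution-plus-downsampling operator, a structured operator that is cheap to apply.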
Photo credit: NASA/JPL. 2007 WD5 will not hit Mars on Jan 30; it's a shame.