This is fantastic: here are papers trying to link short term memory with either the RIP or the Donoho-Tanner phase transition, so that we have actual limits on what constitutes a good neural assembly. You probably recall that Chris Rozell is one of the leads behind one of the rapid solver constructions (see Faster Than a Blink of an Eye). Being fast is one thing, but we also want to see how that approach scales with system size and so forth, a little bit like what the GWAS folks are trying to do with genome sample sizes using compressive sensing (see Application of compressed sensing to genome wide association studies and genomic selection, and Predicting the Future: The Upcoming Stephanie Events). Anyway, here is what Chris just sent me:
Hi Igor-
I know you've had a long-standing interest in the intersection of neuroscience with sparsity (and possibly compressed sensing). I wanted to draw your attention to some recent papers at this intersection.
First, you may find our recent paper in PLoS Computational Biology interesting: M. Zhu and C.J. Rozell. Visual nonclassical receptive field effects emerge from sparse coding in a dynamical system. PLoS Computational Biology, 9(8):e1003191, August 2013.
http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1003191
While the classic Olshausen & Field (1996) paper showed that sparse coding can account for classical receptive field shapes in primary visual cortex (V1), that paper didn't say anything about whether sparse coding can actually explain response properties observed in V1 neurons. In fact, classical receptive fields are not very good predictors of V1 neural responses to natural scenes (especially single-trial responses). The paper above performs simulated electrophysiology experiments to demonstrate that a wide variety of observed nonlinear/nonclassical response properties (single cell and population) are emergent behaviors of a sparse coding model implemented in a dynamical system. These results show that the sparse coding hypothesis, when coupled with a biophysically plausible implementation, can provide a unified high-level functional interpretation for many response properties that have generally been viewed through distinct mechanistic or phenomenological models.
Second, it's possible you saw a preprint version of this, but you may be interested in our recent paper that just appeared in Neural Computation: A.S. Charles, H.L. Yap, and C.J. Rozell. Short term memory capacity in networks via the restricted isometry property. Neural Computation, 26(6):1198-1235, June 2014. http://arxiv.org/abs/1307.7970
An interesting open question is how brains can store sequence memories for lengths of time on the order of seconds, which is probably too short to be due to plasticity (changes in synaptic connections between cells). The conjecture is that this type of memory must be due to transient activity in recurrently connected networks. In addition to questions about biological memory, this question has also come up as part of trying to understand why reservoir computing strategies (i.e., untrained, randomly connected artificial neural networks such as Echo State Networks and Liquid State Machines) work so well in some situations. The conventional wisdom is that signal structure could be used by the network to dramatically increase memory capacity, but this was not supported by formal analysis. In the paper above, we use a compressed sensing style analysis to show conclusively that memory capacities for randomly connected networks can be much higher than previously known when the signal has sparse structure that can be exploited. From a technical perspective, this paper establishes RIP for a very structured random matrix that corresponds to the propagation of a signal through a networked system.
regards,
Chris Rozell

Thanks Chris, this is outstanding. I will come back to this later.
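In the meantime, for readers who want to get a feel for the first paper, here is a minimal toy sketch (my own, under simplifying assumptions, not the authors' code) of sparse coding implemented as a dynamical system in the spirit of the Locally Competitive Algorithm: membrane-like states integrate feedforward drive and lateral inhibition, and their thresholded values form the sparse code.

```python
# Minimal sketch of LCA-style sparse coding dynamics (illustration only,
# not the code used in the Zhu & Rozell paper).
import numpy as np

def lca_sparse_code(x, Phi, lam=0.1, tau=0.01, dt=0.001, n_steps=500):
    """Sparse-code a stimulus x with dictionary Phi (columns are features)."""
    n_neurons = Phi.shape[1]
    u = np.zeros(n_neurons)                    # internal (membrane-like) states
    drive = Phi.T @ x                          # feedforward input to each unit
    inhibit = Phi.T @ Phi - np.eye(n_neurons)  # lateral inhibition between units
    for _ in range(n_steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += (dt / tau) * (-u + drive - inhibit @ a)        # leaky integration
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Toy usage: a 64-pixel "patch" coded with an overcomplete 128-element dictionary.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)
x = Phi[:, :5] @ rng.standard_normal(5)        # stimulus built from 5 dictionary elements
a = lca_sparse_code(x, Phi)
print("active coefficients:", np.count_nonzero(a), "out of", a.size)
```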
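And here is a similarly hedged toy illustration of the second paper's compressed-sensing view of short term memory (again my own sketch, not the paper's construction): a sparse input sequence longer than the network is wide drives a random orthogonal recurrent network, the final network state serves as the compressed measurement, and a plain orthogonal matching pursuit recovers the sequence. The effective measurement matrix, whose columns are powers of the connectivity matrix applied to the feed-in vector, is the kind of structured random matrix the paper analyzes.

```python
# Toy sketch: short term memory of a sparse sequence in a random recurrent
# network, recovered compressed-sensing style (illustration only).
import numpy as np

rng = np.random.default_rng(1)
N, T, k = 50, 120, 4            # network size, sequence length (T > N), sparsity

# Random orthogonal recurrent weights (norm preserving) and a random feed-in vector.
W, _ = np.linalg.qr(rng.standard_normal((N, N)))
z = rng.standard_normal(N) / np.sqrt(N)

# A k-sparse input sequence, longer than the network has nodes.
s = np.zeros(T)
s[rng.choice(T, size=k, replace=False)] = rng.standard_normal(k)

# Drive the network, x_{t+1} = W x_t + z s_t, and keep only the final state.
x = np.zeros(N)
for t in range(T):
    x = W @ x + z * s[t]

# The final state is x = A s, with column t of A equal to W^(T-1-t) z.
A = np.zeros((N, T))
col = z.copy()
for t in range(T - 1, -1, -1):
    A[:, t] = col
    col = W @ col

# Recover the whole sequence from the N-dimensional state by orthogonal matching pursuit.
support, residual = [], x.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)
    residual = x - A[:, support] @ coeffs
s_hat = np.zeros(T)
s_hat[support] = coeffs
print("relative recovery error:", np.linalg.norm(s_hat - s) / np.linalg.norm(s))
```

With the parameters above the sequence is more than twice as long as the network has nodes, which is the regime where the sparse structure of the input is doing the work.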
Very relevant set of links:
- Sunday Morning Insight: Sharp Phase Transitions in Machine Learning ?
- From Direct Imaging to Machine Learning ... a rapid panorama (JIONC 2014)
- Sunday Morning Insight: Randomization is not a dirty word
- Sunday Morning Insight: Exploring Further the Limits of Admissibility
- Sunday Morning Insight: The Map Makers
- Quick Panorama of Sensing from Direct Imaging to Machine Learning
- Faster Than a Blink of an Eye.
- Do Deep Nets Really Need to be Deep?
- The Summer of the Deeper Kernels