Yesterday, I mentioned the following use of advanced matrix factorization for evaluating what humans do and don't understand in learning, from the preprint Sparse Factor Analysis for Learning and Content Analytics, written by the good folks at Rice (Andrew S. Lan, Andrew E. Waters, Christoph Studer, Richard G. Baraniuk). A commenter, njh, noted the following:
An interesting paper with an interesting pair of algorithms, but I question the validity of this assumption: "Our third observation is that the entries of W should be non-negative, since we postulate that having strong concept knowledge should never hurt a learner's chances to answer questions correctly." I've noticed that many people learn an incorrect model which harms their learning of other related concepts. Dijkstra once wrote "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration". This might be a little harsh, but I've seen the same effect with students, and with my own learning. To test this, it might be interesting to see whether relaxing the non-negative constraint leads to more sparse solutions. (Or have I misunderstood something, making my understanding of this paper harder?)
Sometimes, sparse constraints do yield non-negative outputs, but this is a valid question. To me, the fascinating aspect of all this is that, since this new crop of solvers is generally made of relaxations of NP-hard problems, their attendant phase transitions might be delimiting our knowledge. More on that paper this coming Sunday.
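njh's experiment is easy to try in miniature. The sketch below is not the authors' SPARFA algorithm; it is a hypothetical, minimal alternating scheme for Y ≈ WC with an l1 penalty on W (ISTA-style soft-thresholding), where the non-negativity projection on W can be switched on or off so the resulting sparsity of W can be compared under both settings.

```python
import numpy as np

def sparse_factor(Y, k, lam, nonneg, n_iter=200, seed=0):
    """Toy alternating scheme for Y ~ W @ C with an l1 penalty on W.
    If nonneg=True, W is also projected onto the non-negative orthant.
    This is an illustrative sketch, not the SPARFA algorithm."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = np.abs(rng.standard_normal((m, k)))
    C = rng.standard_normal((k, n))
    for _ in range(n_iter):
        # W step: one ISTA iteration (gradient step + soft-threshold)
        step = 1.0 / (np.linalg.norm(C, 2) ** 2 + 1e-9)  # 1/Lipschitz
        G = (W @ C - Y) @ C.T
        W = W - step * G
        W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)
        if nonneg:
            W = np.maximum(W, 0.0)  # the assumption njh questions
        # C step: ridge-regularized least squares
        C = np.linalg.solve(W.T @ W + 1e-6 * np.eye(k), W.T @ Y)
    return W, C

# Synthetic data with a non-negative, sparse-ish ground-truth W
rng = np.random.default_rng(1)
W_true = np.maximum(rng.standard_normal((30, 4)), 0)
C_true = rng.standard_normal((4, 50))
Y = W_true @ C_true + 0.01 * rng.standard_normal((30, 50))

for flag in (True, False):
    W, _ = sparse_factor(Y, 4, lam=0.1, nonneg=flag)
    nnz = np.mean(np.abs(W) > 1e-6)
    print(f"nonneg={flag}: fraction of nonzeros in W = {nnz:.2f}")
```

Running both settings and comparing the fraction of nonzeros in W is exactly the kind of check njh proposes: if dropping the constraint consistently produces sparser W on real response data, that would be evidence worth taking seriously.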
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.