Wednesday, April 20, 2011

CS: A Rosetta table between Statistics/Machine Learning and Compressed Sensing. (Part deux)


As I mentioned to Olivier Grisel, maybe we ought to split the first column into two: one for statistics and the other for machine learning?

2 comments:

Anonymous said...

I am not sure the machine learning community at large is really consistent on variable names, at least not as consistent as the stats people are. For instance, parameters to be tuned are called weights and represented by a matrix W (in case there are many outputs to predict), while statisticians doing linear regression will call one row of W beta.
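A minimal sketch of that naming split, in numpy (the shapes and the row-per-output convention below are illustrative assumptions, not a fixed standard):

import numpy as np

# Hypothetical sizes: m samples, p features, k outputs to predict.
m, p, k = 100, 10, 3
rng = np.random.default_rng(0)
X = rng.standard_normal((m, p))  # stats: design matrix; ML: feature matrix
W = rng.standard_normal((k, p))  # ML: weight matrix, one row per output

Y = X @ W.T   # ML view: predict all k outputs at once
beta = W[0]   # stats view: one row of W is the coefficient vector "beta"
y = X @ beta  # classical single-output linear regression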

Still, identifying common notations and comparing them across domains is useful for newcomers (esp. in CS, where the notation seems rather unique).

Sohail said...

Hi Igor,

I've encountered some other expressions in the stat/ML literature as well (see e.g., http://en.wikipedia.org/wiki/Dependent_and_independent_variables#Alternative_terminology_in_statistics).

Other than the 'irrepresentability' condition, conditions similar/equivalent to the RIP are the Restricted Eigenvalue Property and Restricted Strong Convexity.
Also, the measurement matrix is sometimes called the "design (matrix)".
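For reference, the condition these properties parallel is the RIP itself: a matrix $A$ satisfies the RIP of order $s$ with constant $\delta_s$ if

$$(1 - \delta_s)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta_s)\,\|x\|_2^2$$

for all $s$-sparse vectors $x$.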

By the way, I think n (or N) is not the number of observations but the number of features, assuming we call the observed predictions "observations".

Finally, I'd like to mention that even though there are many analogies between the CS and stat/ML frameworks, there is a fundamental difference. In CS the "signal" is often, if not always, independent of the "measurement matrix", whereas in stat/ML most of the time there is a conditional distribution relating the weights/parameter vector, the design matrix, and the observations/labels.
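A minimal sketch of that difference in numpy (the dimensions, sparsity level, and noise model are illustrative assumptions, not taken from any specific paper):

import numpy as np

rng = np.random.default_rng(0)
m, n = 30, 100  # m observations/measurements, n features/signal length

# Compressed sensing: the sparse signal x is fixed first, independently
# of the randomly drawn measurement matrix A.
x = np.zeros(n)
x[rng.choice(n, size=5, replace=False)] = rng.standard_normal(5)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y_cs = A @ x  # noiseless measurements y = A x

# Stat/ML regression: the labels are drawn from a conditional
# distribution p(y | X, beta), here Gaussian noise around X @ beta.
X = rng.standard_normal((m, n))
beta = np.zeros(n)
beta[:5] = 1.0
y_ml = X @ beta + 0.1 * rng.standard_normal(m)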
