I sometimes get mixed up with a (French) comedian. For some people the resemblance is uncanny; for others, there is no resemblance whatsoever... none... aucune... nada... so I can really appreciate this whole face recognition conundrum. In some ways I am lucky: the guy is well liked, so at best people feel like they nearly had an opportunity to talk to a funny guy, and then reality sinks in... All this to say that when people are trained into thinking that a specific person fits some predefined face, it is really hard to say whether somebody else is an "extension" of that first manifold or something else entirely, which leads us to assumption 1 discussed in Is face recognition really a Compressive Sensing problem? by Qinfeng Shi, Anders Eriksson, Anton van den Hengel, and Chunhua Shen. The abstract reads:

Compressive Sensing has become one of the standard methods of face recognition within the literature. We show, however, that the sparsity assumption which underpins much of this work is not supported by the data. This lack of sparsity in the data means that the compressive sensing approach cannot be guaranteed to recover the exact signal, and therefore that sparse approximations may not deliver the robustness or performance desired. In this vein we show that a simple ℓ2 approach to the face recognition problem is not only significantly more accurate than the state-of-the-art approach, it is also more robust, and much faster. These results are demonstrated on the publicly available YaleB and AR face datasets but have implications for the application of Compressive Sensing more broadly.

In this work we have compared Compressive Sensing face recognition methods, such as [19] and [15], with standard ℓ2 approaches. The conclusion we have drawn as a result is that there is no theoretical or empirical reason to expect that enforcing sparsity on the coefficients of (2) will improve robustness. The experiments carried out here clearly demonstrate this. Not only does solving (4) lead to worse performance, it is also less robust and orders of magnitude slower than least-squares type approaches. We do not propose a novel robust method for face recognition, but rather show that well-known least-squares approaches outperform many of the existing more complicated algorithms. We also showed that if ℓ1 minimisation is intended to improve the robustness of the method then this should be achieved by solving (6) as discussed in section 3. This may be computationally expensive, however, as it requires solving a linear program. Ways of efficiently solving (6) and an investigation into the performance of such a formulation is the topic of future work.
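To make the ℓ2 alternative concrete, here is a minimal sketch of my own (not the authors' exact formulation) of residual-based classification with plain least squares: solve an ordinary least-squares problem against the whole gallery of training faces, then assign the test face to the class whose coefficients alone best reconstruct it. The function name and the synthetic data are mine, purely for illustration.

```python
import numpy as np

def l2_classify(A, labels, y):
    """Classify test vector y against training matrix A (columns = samples).

    Solve min_x ||A x - y||_2 over the full gallery, then assign y to the
    class whose coefficients alone give the smallest reconstruction residual.
    """
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)   # keep only class-c coefficients
        residuals[c] = np.linalg.norm(A @ xc - y)
    return min(residuals, key=residuals.get)

# Toy gallery: two "faces" per class, classes living in different subspaces.
A = np.array([[1.0, 0.9, 0.0, 0.0],
              [0.0, 0.1, 0.0, 0.1],
              [0.0, 0.0, 1.0, 0.9]])
labels = np.array([0, 0, 1, 1])
print(l2_classify(A, labels, np.array([1.0, 0.05, 0.0])))  # class 0
```

Note that nothing here enforces sparsity on x; the point of the paper is that this unregularised solve is already a strong (and much faster) baseline.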

I have several thoughts after reading this paper. One of them is that we ought to do a better job of building the databases on which we get to train our algorithms. The second one is that the ℓ1 regression problem (6) somehow looks a little bit like what ALPS is solving. Finally, if there is one thing to come out of the multiplicative noise study currently featured on this blog, it is that,

*under some noise level*, a least-squares solution is sensitive to sparsity.
## 5 comments:

I definitely see the resemblance!

I confirm :-P

But I swear, I am not that guy :-)

As I understand it, the problem the authors have is a poor understanding of robust estimators. What they are doing is not L2, but the first iteration of a trimmed mean estimator. With several iterations they could probably get even better results. The relationship between L1 and redescending robust estimators, of which the trimmed mean is one case, is a very tricky subject. Usually redescending estimators are faster and more stable than L1, but whether that is always so, and what the breakdown point ratios are - those seem to be open questions. IMHO robust estimator theory is woefully underdeveloped.

I mean trimmed least squares, of course.
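[For readers unfamiliar with the estimator the commenter mentions, here is a rough sketch of iterated trimmed least squares, my own illustration rather than anything from the paper: fit, drop the rows with the largest residuals, and refit. The function name and the toy line-fitting data are made up for the example.]

```python
import numpy as np

def trimmed_lstsq(A, y, trim=0.2, iters=3):
    """Iterated trimmed least squares.

    Each pass fits least squares on the currently kept rows, recomputes
    residuals on ALL rows, and keeps the (1 - trim) fraction of rows that
    fit best. Outliers with large residuals are thus excluded from the fit.
    """
    n = len(y)
    n_keep = max(A.shape[1], int(np.ceil((1 - trim) * n)))
    keep = np.arange(n)
    for _ in range(iters):
        x, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        r = np.abs(A @ x - y)          # residuals on all rows
        keep = np.argsort(r)[:n_keep]  # keep the best-fitting rows
    return x

# Toy example: a line y = 2x with two gross outliers.
xs = np.arange(10.0)
A = np.column_stack([xs, np.ones(10)])
y = 2 * xs
y[3] += 50
y[7] += 50
print(trimmed_lstsq(A, y, trim=0.3, iters=3))  # recovers slope 2, intercept 0
```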
