[CVML] Florent Perronnin on explicit feature maps
There is apparently a "new" trend in computer vision that I seem to have missed: explicit feature maps. So what is this about? We all know and love the kernel trick: take some vector representation X of your data and some algorithm (like the SVM) that relies only on dot products in the input space. Replace the dot products by a positive definite kernel, and by Mercer's theorem there is some Hilbert space Y and a mapping phi: X -> Y such that your kernelized algorithm behaves as if you had mapped your inputs through phi. This extremely powerful method is now used everywhere, most of all for SVMs. But in this setting it has several significant drawbacks: Training with a non-linear kernel is a lot slower [O(n^3)] than training a linear SVM [O(n)].
Kernel SVMs take a lot of memory to store, since they are "non-parametric" in a sense: in addition to the Lagrange multipliers alpha, all the support vectors have to be stored.
Recall is painfully...
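To make the contrast concrete, here is a small sketch of my own (not from the talk) of what an explicit feature map buys you, using scikit-learn's RBFSampler, which implements random Fourier features: instead of training a kernel SVM, you map the data through an explicit z(x) with <z(x), z(x')> ≈ k(x, x') and train a plain linear SVM on the mapped data. The dataset and the gamma/n_components values are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import RBFSampler
from sklearn.svm import LinearSVC, SVC

# Toy data, just for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Exact kernel SVM: superlinear training cost, and the model keeps
# all support vectors plus their coefficients around.
kernel_svm = SVC(kernel="rbf", gamma=0.1).fit(X, y)

# Explicit (approximate) feature map z(x) with <z(x), z(x')> ~ k(x, x'),
# followed by a linear SVM in the mapped space -- training scales
# linearly in the number of samples, and the model is just one weight vector.
feature_map = RBFSampler(gamma=0.1, n_components=300, random_state=0)
X_mapped = feature_map.fit_transform(X)
linear_svm = LinearSVC().fit(X_mapped, y)

print(kernel_svm.score(X, y), linear_svm.score(X_mapped, y))
```

The approximation quality (and hence the accuracy gap to the exact kernel SVM) is controlled by n_components: more random features mean a better approximation of the kernel at the cost of a wider linear problem.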