Evaluating Feature Encoding Methods

While browsing Andrea Vedaldi's publications, I came across some joint work with K. Chatfield, V. Lempitsky and A. Zisserman, called The devil is in the details: an evaluation of recent feature encoding methods.
It reviews some recent methods of visual feature encoding.
Since the introduction of bags of visual words, many people have tried to improve on this now-standard method.
Chatfield and his colleagues provide a systematic comparison of some of the most prominent directions:
  • Locality-constrained linear coding (LLC)
  • Fisher vectors (see also my last post)
  • Soft thresholding / kernel-based methods
  • Super vector coding
Different codebook sizes, SIFT feature sampling densities, and additive kernels were also explored.
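
To make the distinction between these schemes a bit more concrete, here is a small toy sketch (my own, not the code released with the paper) contrasting the standard bag-of-visual-words hard assignment with a kernel-codebook-style soft assignment over a k-means vocabulary. The random descriptors and the bandwidth choice are just placeholders.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.RandomState(0)
    # Stand-in for the dense SIFT descriptors of a single image (128-dimensional).
    descriptors = rng.rand(500, 128)
    codebook_size = 32

    # Build the visual vocabulary by k-means clustering.
    kmeans = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
    kmeans.fit(descriptors)

    # Hard assignment: the classic bag-of-visual-words histogram.
    words = kmeans.predict(descriptors)
    bow = np.bincount(words, minlength=codebook_size).astype(float)
    bow /= bow.sum()

    # Soft assignment in the spirit of kernel codebook encoding: weight each
    # descriptor by a Gaussian kernel to every codeword and average the weights.
    dists = np.linalg.norm(descriptors[:, None, :] - kmeans.cluster_centers_[None, :, :], axis=2)
    sigma = np.median(dists)  # crude bandwidth choice, just for the sketch
    weights = np.exp(-dists ** 2 / (2 * sigma ** 2))
    weights /= weights.sum(axis=1, keepdims=True)
    soft = weights.mean(axis=0)

    print(bow.round(3))
    print(soft.round(3))
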

I feel Fisher vectors hold up quite nicely in their comparison, though the other methods yield similar results. Apparently, the super vector coding scheme used in this work scaled badly and could not be applied to the Caltech 101 dataset.
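
Since Fisher vectors keep coming up, here is a rough numpy sketch of the (improved) Fisher vector encoding over a diagonal-covariance GMM, with the usual power and L2 normalisation. This is only my own illustration, not the implementation evaluated in the paper, and the tiny synthetic data is just there to make it run.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_vector(descriptors, gmm):
        # Soft assignments of each descriptor to the GMM components.
        q = gmm.predict_proba(descriptors)                      # shape (n, k)
        n = descriptors.shape[0]
        sigma = np.sqrt(gmm.covariances_)                       # (k, d) diagonal standard deviations
        diff = (descriptors[:, None, :] - gmm.means_[None, :, :]) / sigma[None, :, :]

        # Average gradients w.r.t. the means and the (diagonal) variances.
        g_mu = (q[:, :, None] * diff).sum(axis=0) / (n * np.sqrt(gmm.weights_)[:, None])
        g_sigma = (q[:, :, None] * (diff ** 2 - 1)).sum(axis=0) / (n * np.sqrt(2 * gmm.weights_)[:, None])

        fv = np.hstack([g_mu.ravel(), g_sigma.ravel()])
        # Power ("signed square root") and L2 normalisation.
        fv = np.sign(fv) * np.sqrt(np.abs(fv))
        return fv / np.linalg.norm(fv)

    rng = np.random.RandomState(0)
    train_descriptors = rng.rand(1000, 64)   # toy stand-in for pooled SIFT descriptors
    gmm = GaussianMixture(n_components=16, covariance_type="diag", random_state=0)
    gmm.fit(train_descriptors)

    image_descriptors = rng.rand(300, 64)
    print(fisher_vector(image_descriptors, gmm).shape)   # (2 * 16 * 64,)
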



There is MATLAB/C code accompanying the paper that implements all of the methods. I feel this is a great contribution.


This paper is definitely worth reading if you want to get a better understanding of recent feature coding schemes in computer vision.
