Saturday, July 30, 2011

CVML 2011 Posters

There were many posters at the CVML summer school and I won't talk about all of them.
Eight of them received prizes (in the form of hand-signed CV and ML books).
I knew some of the work from NIPS 2010, but some things were new to me:

Alexander Vezhnevets presented work on Multi Image Model for Semantic Segmentation with Different Levels of Supervision.
I don't know how I could have missed this before. It is impressive work on weakly supervised semantic scene segmentation on MSRC, combining CRFs, boosted texton forests, and superpixels. The CRF connects not only neighbouring superpixels within an image but also similar-looking superpixels across different images. Superpixel labels are treated as latent variables, and only a very simple constraint between the image label and the superpixel labels is enforced.
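The cross-image part of the CRF can be illustrated with a small sketch. This is not the authors' code, just a minimal toy version of one plausible construction: connect each superpixel to its most similar superpixels in other images by nearest-neighbour search on appearance features (the feature representation and neighbour count are my assumptions):

```python
import numpy as np

def cross_image_edges(features, image_ids, k=2):
    """For each superpixel, connect it to its k most similar
    superpixels from *other* images (Euclidean distance on
    appearance features). Returns a list of (i, j) index pairs."""
    edges = []
    n = len(features)
    for i in range(n):
        # candidates: all superpixels that belong to a different image
        cand = np.where(image_ids != image_ids[i])[0]
        d = np.linalg.norm(features[cand] - features[i], axis=1)
        for j in cand[np.argsort(d)[:k]]:
            edges.append((i, int(j)))
    return edges

# toy example: 4 superpixels from 2 images, 2-D appearance features
feats = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [5.0, 5.0]])
imgs = np.array([0, 0, 1, 1])
print(cross_image_edges(feats, imgs, k=1))
# → [(0, 2), (1, 2), (2, 0), (3, 1)]
```

In the real system these edges carry pairwise potentials in the CRF, so label information can flow between visually similar regions of different training images.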

Since I am working on a very similar task at the moment, this poster was definitely the best one for me, even though the other posters were very good too.


Yang Hua presented work on Contextualizing Object Detection and Classification. This work was very successful in last year's PASCAL challenge and was presented at (this year's?) CVPR.
The basic idea is quite simple (though of course plenty of feature engineering and tuning went into making it win). Classifiers are trained per class and applied to images using sliding windows.
Then, for each class, the confidences of the two best bounding boxes are used as additional features, giving 2×20 real numbers. On these, an additional classifier is trained and combined with the "direct" per-class classifier.
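The context feature itself is easy to write down. Here is a minimal sketch (my own toy version, not the authors' implementation): for each class, keep the two highest detector confidences over all candidate boxes and stack them into one vector, which with PASCAL's 20 classes gives the 2×20 numbers mentioned above:

```python
import numpy as np

def context_features(scores):
    """scores: (num_classes, num_boxes) array of per-class detector
    confidences for one image. Returns a vector of length
    2 * num_classes holding each class's two best box scores."""
    top2 = -np.sort(-scores, axis=1)[:, :2]  # two highest scores per class
    return top2.ravel()

# toy example: 3 classes, 4 candidate boxes
scores = np.array([[0.9, 0.1, 0.4, 0.2],
                   [0.3, 0.8, 0.7, 0.1],
                   [0.2, 0.2, 0.1, 0.6]])
print(context_features(scores))
# → [0.9 0.4 0.8 0.7 0.6 0.2]
```

A second classifier trained on this vector sees evidence from *all* detectors at once, so e.g. a confident "boat" detection can raise the score for "water"-related classes.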

Another idea included in this work is the exclusiveness of labels. In the PASCAL dataset, the classes are strongly correlated. The authors argue that these correlations are hard to use directly, so they instead construct exclusive label sets: sets of labels that never co-occur (like airplane and potted plant). This exclusiveness is then enforced in the predictions (though I am a little unsure about the details at the moment).
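Since I am unsure about the exact details, here is only a hedged sketch of the first step as I understand it: mining label pairs that never co-occur from a binary image-label matrix (the authors build sets rather than just pairs, and how exclusiveness is enforced at prediction time is not shown here):

```python
import numpy as np

def exclusive_pairs(labels):
    """labels: (num_images, num_classes) binary matrix saying which
    classes appear in which training image. Returns all pairs of
    classes that never co-occur in any image."""
    cooc = labels.T @ labels  # class-by-class co-occurrence counts
    n = labels.shape[1]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if cooc[i, j] == 0]

# toy example: 3 images, 3 classes (say airplane, sky, potted plant)
L = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])
print(exclusive_pairs(L))
# → [(0, 2), (1, 2)]: classes 0 and 1 never appear together with class 2
```

At test time, such pairs could then be used to penalize predictions that assign both members of an exclusive pair to the same image.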
