Posts

Showing posts from September, 2010

Pascal VOC 2010 program is online.

This year's Pascal VOC workshop is over, and on its website you can find papers by the winning entries. They are really worth reading.

Using vlfeat on 64-bit Linux with 32-bit Matlab

A lot of very good computer vision and machine learning libraries are written for use in Matlab. While Matlab has some advantages, I am more of a Python man myself. I am a big fan of the vlfeat library by Andrea Vedaldi and Brian Fulkerson. It is written for use in Matlab, but there are some Python bindings provided by Mikael Rousson. Sadly, they do not support all of vlfeat's great features. So today I wanted to make some more of vlfeat's functionality available in Python. For that, I first had to understand their Matlab interface. But when I tried to compile vlfeat, I ran into some difficulties. The main problem is that the student version of Matlab is provided only as a 32-bit version, but the Linux on my box is 64-bit. So here the journey begins. I am using Ubuntu, but I guess the steps are quite similar for other distributions. First of all, you really have to convince mex to compile for 32-bit. So in the Makefile, under Linux-32, set MEX_FLAGS ...
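The excerpt breaks off before the actual flag values, so the fragment below is only a hypothetical sketch of the general idea, not necessarily what the original post goes on to set: with a multilib gcc, the -m32 switch forces 32-bit objects that a 32-bit Matlab can link against.

```makefile
# Hypothetical sketch only -- the flags the post actually lists are cut
# off above. -m32 makes gcc emit 32-bit objects (requires multilib
# support, e.g. the gcc-multilib package on Ubuntu).
MEX_FLAGS = CFLAGS='-m32' LDFLAGS='-m32'
```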

MNIST for ever....

[update] This post is a bit old, but many people still seem interested. So just a short update: nowadays I would use Python and scikit-learn to do this. Here is an example of how to do cross-validation for SVMs in scikit-learn. Scikit-learn even downloads MNIST for you. [/update] MNIST is, for better or worse, one of the standard benchmarks for machine learning and is also widely used in the neural networks community as a toy vision problem. Just for the unlikely case that anyone is not familiar with it: it is a dataset of handwritten digits, 0-9, in black on white background. It looks something like this: There are 60000 training and 10000 test images, each 28x28 grayscale. There are roughly the same number of examples of each category in the test and training datasets. I used it in some papers myself, even though there are some reasons why it is a little weird. Some not-so-obvious (or maybe they are) facts are:
- The images actually contain a 20x20 patch of digi...
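For reference, a minimal sketch of what such a cross-validation could look like in current scikit-learn; this is not the linked example itself, and the hyperparameters, subsample size, and the fetch_openml source are my own illustrative choices.

```python
# Minimal sketch: SVM cross-validation on MNIST with scikit-learn.
from sklearn.datasets import fetch_openml
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# scikit-learn downloads MNIST from OpenML (70000 samples, 784 features).
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)

# Subsample for speed; kernel SVM training on full MNIST is slow.
X_small, y_small = X[:5000] / 255.0, y[:5000]

clf = SVC(kernel="rbf", C=10.0, gamma=0.01)  # illustrative hyperparameters
scores = cross_val_score(clf, X_small, y_small, cv=5)
print("5-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```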

ICANN: Netflix with Neural Networks and Restricted Boltzmann Machines

On Friday, Nicholas Ampazis presented the second-place submission to the Netflix Prize challenge by the team The Ensemble. Unfortunately, I cannot find the paper online. It evaluates combinations of autoencoders, Restricted Boltzmann Machines and self-organizing maps for collaborative filtering. The approach is taken from Ruslan Salakhutdinov's paper Restricted Boltzmann Machines for Collaborative Filtering from ICML 2007. Just a quick reminder: the Netflix challenge was a benchmark problem in collaborative filtering. The basic problem in collaborative filtering is: given some user ratings on some products, what would you expect those users to think about products they haven't rated? This is the problem most recommendation systems are based on, like for example the Amazon recommendations, and of course the Netflix recommendations, too. The basic idea in using Restricted Boltzmann Machines to solve this problem is to learn the distribution of ratings. Products that have n...
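To make the idea concrete, here is a heavily simplified sketch in numpy: an RBM with plain binary visible units trained with one step of contrastive divergence (CD-1). Salakhutdinov's actual model uses per-movie softmax visible units and conditions on which items a user rated; all sizes, data and hyperparameters below are made up for illustration.

```python
# Simplified sketch of an RBM for "which items did this user like".
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_items, n_hidden, lr = 100, 20, 0.05
W = 0.01 * rng.standard_normal((n_items, n_hidden))
b_v, b_h = np.zeros(n_items), np.zeros(n_hidden)

# Toy data: rows are users, 1 = user liked the item, 0 = did not / unseen.
V = (rng.random((500, n_items)) < 0.1).astype(float)

for epoch in range(10):
    # Positive phase: hidden activations given the observed ratings.
    h_prob = sigmoid(V @ W + b_h)
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)

    # Negative phase (CD-1): one Gibbs step gives a reconstruction.
    v_prob = sigmoid(h_sample @ W.T + b_v)
    h_prob_neg = sigmoid(v_prob @ W + b_h)

    # Contrastive divergence parameter update.
    W += lr * (V.T @ h_prob - v_prob.T @ h_prob_neg) / len(V)
    b_v += lr * (V - v_prob).mean(axis=0)
    b_h += lr * (h_prob - h_prob_neg).mean(axis=0)

# Prediction: reconstructed probabilities rank unseen items for each user.
scores = sigmoid(sigmoid(V @ W + b_h) @ W.T + b_v)
```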

ICANN: Learning RBMs is an Art

When I was at NIPS last year, I overheard Ruslan Salakhutdinov being asked whether training RBMs is more of an art than a science. I think this question is answered (at least for the moment) by a great paper by Asja Fischer and Christian Igel that will be presented tomorrow at ICANN. They evaluate different training methods for RBMs on toy problems where the partition function can be evaluated explicitly. What they find is that after an initial increase, the log-likelihood of all models diverges, unless the learning rate schedule or the weight decay parameter is chosen just right. Since it is impossible to evaluate the true log probability on a "real-world" dataset (see my older post), this means that it seems impossible to know whether divergence occurs and to choose the parameters accordingly. This paper evaluates CD, PCD and fast PCD but does not use parallel tempering (yet). It would be very interesting to see if parallel tempering might solve this...
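On a toy problem this explicit evaluation is straightforward, since the partition function can be summed by brute force. A minimal sketch (my own, not code from the paper), assuming small numpy arrays W, b_v, b_h for the weights and biases of a binary RBM:

```python
# Exact log-likelihood of a small binary RBM by brute-force enumeration.
import itertools
import numpy as np

def exact_log_likelihood(V, W, b_v, b_h):
    """Exact average log-likelihood of binary data V under an RBM."""
    n_v, n_h = W.shape

    # Free energy F(v); p(v) is proportional to exp(-F(v)).
    def free_energy(v):
        return -(v @ b_v) - np.logaddexp(0, v @ W + b_h).sum(axis=-1)

    # Partition function: sum over all 2^n_v visible configurations,
    # which is only feasible for small n_v.
    all_v = np.array(list(itertools.product([0, 1], repeat=n_v)), dtype=float)
    log_Z = np.logaddexp.reduce(-free_energy(all_v))
    return (-free_energy(V) - log_Z).mean()
```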

Kernel Machines Blackboard

For a couple of days now, I have been working on a very imbalanced multi-class problem that I am trying to solve using SVMs. While I was looking into other people's work, I found the forum at http://www.kernel-machines.org/ . It is very active and offers some advice on how to use SVMs in general, but more particularly libsvm and its different interfaces. I have only been following the forum for a couple of days, but it seems very interesting and I hope I can motivate some more people to join in and make it even more active :)
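As an aside, one standard first step for imbalanced problems is per-class cost weighting, which libsvm exposes through its -wi options and scikit-learn through class_weight. A minimal sketch on synthetic data (illustrative only, not my actual problem):

```python
# Minimal sketch: class-weighted SVM on imbalanced synthetic data.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Three classes with an 80/15/5 imbalance.
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)

# class_weight="balanced" scales C inversely to class frequencies.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y)
```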