NIPS 2010 - Transfer learning workshop
Ok this is probably my last post about NIPS 2010.
First of all, I became a big fan of Zoubin Ghahramani. He is a great speaker and quite funny. There are quite a few video lectures by him linked on his personal page: here and here. They are mostly about graphical models and nonparametric methods.
He gave an invited talk at the transfer learning workshop about the cascading Indian buffet process, where he illustrated the idea behind the method:
"Every dish is a customer in another restaurant. Somebody pointed out that this is kind of canabilistic. We didn't realize that the IBP analogy goes really deep.... dark ... and wrong."
This work is about learning the structure of directed graphical models by placing IBP priors on the graph structure (pdf).
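For context, the plain Indian buffet process generates a binary matrix: each row is a customer, each column a dish, and entry (n, k) says whether customer n took dish k. In the cascading version, such a matrix plays the role of the connectivity between two layers of the network. Below is a minimal NumPy sketch of the standard IBP generative story (my own illustration, not code from the paper):

```python
import numpy as np

def sample_ibp(num_customers, alpha, seed=None):
    """Draw a binary matrix Z from the Indian buffet process.

    Rows are customers (here: nodes in one layer), columns are dishes
    (latent parents in the layer above). alpha controls the expected
    number of active parents per node.
    """
    rng = np.random.default_rng(seed)
    counts = []   # counts[k] = customers who took dish k so far
    rows = []
    for n in range(1, num_customers + 1):
        # existing dishes: customer n takes dish k with probability m_k / n
        row = [rng.random() < m / n for m in counts]
        for k, taken in enumerate(row):
            counts[k] += int(taken)
        # plus Poisson(alpha / n) brand-new dishes
        new = rng.poisson(alpha / n)
        counts.extend([1] * new)
        row.extend([True] * new)
        rows.append(row)
    # pad earlier rows with zeros for dishes introduced later
    Z = np.zeros((num_customers, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(num_customers=8, alpha=2.0, seed=0)
print(Z)  # rows: nodes, columns: latent parents; 1 = edge present
```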
When asked about three-way interactions, which this model does not feature - in contrast to many deep graphical models studied at the moment - he argued that marginalizing out latent variables already induces covariances in the layer below, so there is no need to model these relations explicitly. Take that, Geoff Hinton ;)
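To make the marginalization argument concrete with a toy example of my own (not one from the talk): take a single latent variable $h$ with unit variance and two children $x_i = w_i h + \epsilon_i$ and $x_j = w_j h + \epsilon_j$ with independent noise. Integrating out $h$ gives

$$\operatorname{Cov}(x_i, x_j) = w_i w_j \operatorname{Var}(h) = w_i w_j,$$

so the two children are correlated in the marginal distribution even though no direct edge or higher-order interaction between them is modeled.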
At the same workshop, there was also a talk by Antonio Torralba.
He talked a little bit about his experience with Amazon's Mechanical Turk. See also the deneme blog. For Torralba, Mechanical Turk did not seem too promising after he tried to use it to label LabelMe. So instead, he created a different dataset that was annotated by a single labeler and is therefore much more consistently labeled.
It is called "Sun Database" and comes with segmentations and many classes. As far as I can see, it is not available yet but will be published soon.
One thing Torralba talked about is that some classes are a lot more common than others. He claimed that it is natural for class frequencies to follow a power law, and that one needs to use knowledge about frequent objects to learn about rare objects (which was actually the title of his talk).
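To illustrate what a power-law class distribution means in practice (toy numbers with a made-up exponent, not Torralba's statistics): the frequency of the k-th most common class falls off roughly as 1/k^s, so a handful of classes dominate the labels while most classes have very few examples.

```python
import numpy as np

# Toy Zipf-like class distribution: frequency of the k-th most common
# class is proportional to 1/k^s. The exponent s = 1.5 is made up.
s = 1.5
ranks = np.arange(1, 1001).astype(float)
freq = ranks ** (-s)
freq /= freq.sum()

print(f"top 10 classes cover {freq[:10].sum():.0%} of all labels")
print(f"bottom 500 classes cover {freq[500:].sum():.0%} of all labels")
```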
He showed some work in this direction: learning a tree of classifiers with weights that are partially shared between superclasses. This work, together with Ruslan Salakhutdinov, is as yet unpublished, but I am looking forward to it.
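Since the work is unpublished I can only guess at the details, but the flavor of "partially shared weights" is roughly this kind of construction (my own sketch, not their model): every class scores an image with its superclass weights plus a small class-specific residual, so a rare class borrows most of its classifier from its frequent siblings.

```python
import numpy as np

rng = np.random.default_rng(0)
num_features = 128

# Hypothetical two-level class tree; names are made up for illustration.
superclasses = {"vehicle": ["car", "truck", "unicycle"],
                "animal": ["dog", "cat", "okapi"]}

# One shared weight vector per superclass, one small residual per class.
w_super = {s: rng.normal(size=num_features) for s in superclasses}
w_class = {c: rng.normal(scale=0.1, size=num_features)
           for children in superclasses.values() for c in children}

def score(x, superclass, cls):
    """Score = shared superclass part + class-specific residual."""
    return x @ (w_super[superclass] + w_class[cls])

x = rng.normal(size=num_features)       # some image feature vector
print(score(x, "vehicle", "unicycle"))  # the rare class still inherits
                                        # the shared "vehicle" weights
```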
At the end of the workshop there was an interesting discussion about transfer learning, datasets and much more.
One thing that most of the speakers agreed upon was that the labeling task can be solved just by a massive amount of data. But that does not mean that visual reasoning can be solved this way, since there are infinitely many possible questions about a scene beyond "is there a car in this picture".
Let me finish this series of long and boring posts about my NIPS experience by quoting from the end of this discussion:
Josh Tenenbaum: "There's just too much agreement here."
Zoubin Ghahramani: "Yeah I don't know what's wrong with me. I must be tired."