NIPS 2010 - Thinking dynamically

Apart from the presentations and posters, there is another great thing about NIPS: you can discuss machine learning with great researchers in person.

One of the people I talked to quite a lot was Jascha Sohl-Dickstein. We discussed deep methods and training procedures at some length; he is an amazing person with a lot of energy and new ideas.
He recently wrote two papers that I quite liked: "Minimum Probability Flow Learning" and "An Unsupervised Algorithm for Learning Lie Group Transformations".
I like both of them for their quite unusual point of view. Jascha has a background in physics, and his perspective centers on understanding the dynamics of learning and of transformations.
I think "Minimum Probability Flow Learning" gives new insights into training probabilistic models, and as far as I know it is used quite successfully for training Ising models.
Neither work has been published yet, but I find both well worth reading, so I'd like to draw a little more attention to them. He was actually quite surprised that I was so familiar with his work, but I really do like it a lot. Both papers give clear and beautiful mathematical formulations of the underlying problems.
So if you are interested in nice math and probabilistic models, you should definitely read "Minimum Probability Flow Learning"; a small sketch of its core idea follows below. If you are interested in videos and image transformations, "An Unsupervised Algorithm for Learning Lie Group Transformations" is for you.
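
To give a flavour of why I find the MPF formulation so elegant: instead of maximizing likelihood (which needs the intractable partition function), it minimizes the probability flow that would leave the data states under a dynamics whose equilibrium is the model distribution, K = (eps/N) * sum over data states j and neighbouring non-data states i of exp((E_j - E_i) / 2). Below is a minimal sketch of this objective for an Ising model with single-bit-flip connectivity. This is my own illustration, not the authors' code, and it sums over all bit-flip neighbours rather than excluding neighbours that are themselves data points, which should matter little when the data is sparse in state space.

```python
import numpy as np

def ising_energy(X, J, b):
    # E(x) = -0.5 * x^T J x - b^T x for every row x of X, with x in {-1, +1}^d
    return -0.5 * np.einsum('nd,de,ne->n', X, J, X) - X @ b

def mpf_objective(X, J, b, eps=1.0):
    # MPF objective with single-bit-flip connectivity:
    #   K = (eps / N) * sum_{data j} sum_{flipped neighbours i} exp((E_j - E_i) / 2)
    # Minimizing K lowers the energy of the data relative to its neighbours;
    # the partition function never appears.
    n, d = X.shape
    E_data = ising_energy(X, J, b)
    K = 0.0
    for k in range(d):                     # visit each single-bit-flip neighbour
        X_flip = X.copy()
        X_flip[:, k] *= -1
        E_flip = ising_energy(X_flip, J, b)
        # simplification: neighbours that happen to be data states are counted too
        K += np.exp(0.5 * (E_data - E_flip)).sum()
    return eps * K / n

# toy usage: with J = 0 and b = 0 all energies are equal, so K = eps * d
rng = np.random.RandomState(0)
X = rng.choice([-1, 1], size=(100, 8))
J = np.zeros((8, 8))
b = np.zeros(8)
print(mpf_objective(X, J, b))  # prints 8.0
```

Since K is smooth in J and b, the parameters can then be fit with any gradient-based optimizer, which is exactly what makes this attractive for Ising models.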

Comments

  1. The minimum probability flow paper is an interesting idea, but I wonder about the utility of learning the parameters of an intractable model... even sampling could be intractable, so why would you care about knowing the parameters of such a model? (Unless you are a physicist and the value of the parameters is of interest in itself.)

  2. Are you talking about the Ising models? I think not only physicists are interested in those. Many people use them to learn about the structure of certain systems; I think they are used in biology quite a bit.
    And about intractable models: it depends on what you count as intractable. If you mean exact learning and inference, isn't every interesting model intractable? RBMs are intractable and many people work on them, and every latent variable model is intractable in this sense. But they are used a lot nonetheless, right?


