[CVML] Quotes

This is my favourite part of every conference and workshop: quotes and fun facts :)

There are some things researchers will never write in a paper but that they really like to tell you.
Also, many professors actually have a pretty good sense of humor (or at least one that is as nerdy as mine).
Please note that even though I put the quotes into quotation marks, they might not be completely accurate.
Most lecturers talk pretty fast and I am usually taking notes the old-school way, on paper...


Ponce:
About learned dictionaries and filters: "Dictionary elements don't have semantic meaning. People like to look at them, I don't know why."
About denoising using structured sparsity: "We don't know anything about image processing. The Finnish guys are way better. But the sparse
model still works better."

Lambert:
About 1-vs-all training for multi-class classification:
"Everyone is using that. But no one knows why it works."

Francis Bach (?):
About using L1 regularization as a prior for sparsity:
"Keep in mind that L1 regularization is a prior. If the prior is not true, then there is no hope."

Andrew Zisserman:
"The fully connected constellation model is doomed. DOOMED!!!"

Jitendra Malik:
On research in computer vision: "We work in a context where the signal-to-noise ratio is low. This is the nature of science."
On publications using the MNIST dataset: "Working on MNIST is a waste of time. There is no
point in trying to go from 0.4% error to 0.32%."
On gradient descent in convolutional neural networks: "This optimization is highly non-convex. But in the hand of a magician like Yann LeCun, it can work."
On tweaking parameters and features and using "clever" dictionaries and optimizations: "We can't go to the moon building larger and larger ladders." And we will have to live with the fact that new methods don't work as well in the beginning.
About finding "Gabor filter like" dictionaries: "I call this post-diction. We know that
V1 has Gabor filters so people are trying to find something that creates Garbor filters."

Vapnik (cited by Malik when introducing his work on poselets, referring to the use of skeletons to detect people): "When trying to solve a hard problem, don't try to solve an even harder problem as an intermediate step."

About weak/strong supervision: "People are trying to solve a hard problem with one hand tied behind their back. I don't play this macho game!"

Leon Bottou:
About writing gradient descent code: "Do not forget to check your gradients.
Even if you are sure they are correct, they are certainly wrong!" Also: check your gradients. (See the little gradient-check sketch after this block.)
About optimization code: "Yann LeCun uses the same code since 1998. Not the same method,
the same _CODE_. When you see any of his demos, it's still the same code."
On convolutional neural networks: "There's something weird here. I don't know what.
I find it surprising that we can use (?) these things." (Not sure if "use" or "train")
When asked about large-scale learning and the amount of private data on the internet:
"It's going to be interesting times. [...] There's going to be trouble."

Efros:
Talking about his non-parametric texture synthesis work, which relies only on data, and how he reacted to the success of his method: "I thought: This works kinda well. I'm sorry about it. - That was 20 years ago. Now I want to revise that."

Ivan Laptev:
While demonstrating his action recognition system on the Hollywood dataset:
"There is some kissing action going on here... it's nice to work with movies."
On mining movie scripts: "...  using a bag of REAL words ..." (as opposed to visual words)

Martial Hebert:
On structured prediction, showing a slide with only "f(x,y)" in the middle and nothing else: "This is my favourite slide. It should stay that way (but of course it doesn't)."

On Del Pero et al. 2011, "Sampling Bedrooms": "I'm only mentioning this paper because of the title. I'm very jealous of this paper's title. I should write more papers with titles like this."

Comments

  1. Oh, I like Malik's third quote: "On gradient descent in convolutional neural networks: 'This optimization is highly non-convex. But in the hand of a magician like Yann LeCun, it can work.'"

    One has to become a magician in order to optimize highly non-convex functions.. I kinda agree ;)

  2. I agree, too. But I feel it's less like doing actual magic than summoning some demon and hoping it will do your bidding rather than eat you alive ;)

    On a totally unrelated note: Hannes has some beautiful new results on segmentation using convolutional nets :)

