NIPS 2010 - Perceptual Bases for Rules of Thumb in Photography

The first invited talk today was Martin Banks talking about "Perceptual Bases for Rules of Thumb in Photography". It was a psychology talk; it does not directly have anything to do with machine learning or computer vision, but it was nevertheless very interesting - even more so since I am a hobby photographer.
The overall theme was how people perceive photographs and pictures on screens, and what the geometric and psychological sources of some well-known effects are.
In particular he focused on three topics:
  • Wide angle distortion
  • Depth compression and expansion
  • Depth of field effects
The first part, on wide-angle distortion, was about the effect that images captured with a wide-angle lens appear distorted. This is the reason why portrait and beauty photography usually uses a long focal length.
From a projective-geometry point of view it is easy to see why this happens - and it is geometrically correct. But we still perceive it as "wrong". The question is: why?
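To make the geometry a bit more concrete, here is a tiny back-of-the-envelope sketch (my own calculation, not from the talk): under an ideal rectilinear (pinhole) projection, a small sphere sitting at an angle theta off the optical axis is imaged as an ellipse that is stretched radially by roughly a factor of 1/cos(theta). On the axis there is no stretch; towards the edge of a wide-angle frame it becomes quite noticeable.

    import numpy as np

    # Radial stretch of a small off-axis sphere under rectilinear projection.
    # Standard pinhole-camera approximation, not code from the talk.
    for theta_deg in [0, 10, 20, 30, 40]:
        stretch = 1.0 / np.cos(np.radians(theta_deg))
        print(f"{theta_deg:2d} deg off-axis -> radial stretch ~ {stretch:.2f}x")

At 40 degrees off-axis (the corner of a fairly wide lens) the stretch is already around 30%, which matches the egg-shaped heads you see at the edges of wide-angle group shots.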
Usually when we look at a photograph, we are not at the center of projection of the image (the point from which the picture's perspective is geometrically correct) but view it from some other position, often at an angle. Still, it does not seem distorted to us.
Banks argues that this is because the brain adjusts for the different viewpoint and transforms the image back onto a plane. He did some experiments showing people slanted images, both through a pinhole and in plain view. The percept was different through the pinhole, when the subjects didn't know that they were viewing the image from an angle, but the perception was corrected when they saw the context.
From what I understand, Banks argued that when we adjust the viewpoint in our brain, the information about the focal length is lost and we perceive the image as distorted.
There is a recommendation in photography that you should always use at least a 50mm focal length for portraits. So where does that come from? The focal length directly determines the angle of view of an image. Banks showed empirically that the angle of view of a 50mm (actually 48mm) lens is exactly the angle at which people start to see the deformation of a circle in the world into an ellipse on the image plane. This would explain the recommendation.
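For reference, the angle of view follows directly from the focal length and the sensor size. The sketch below assumes a full-frame 36x24mm sensor (that assumption is mine, Banks may have used a different format); with it, a 48mm lens comes out at roughly 48 degrees diagonally.

    import numpy as np

    def angle_of_view(focal_length_mm, sensor_extent_mm):
        """Angle of view (degrees) across a given sensor extent for a rectilinear lens."""
        return np.degrees(2 * np.arctan(sensor_extent_mm / (2 * focal_length_mm)))

    # Assuming a full-frame 36x24mm sensor (diagonal ~43.3mm).
    diagonal = np.hypot(36, 24)
    for f in [24, 35, 48, 50, 85]:
        print(f"{f}mm lens -> diagonal angle of view ~ {angle_of_view(f, diagonal):.1f} deg")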

The second part was about depth compression. Banks showed some experiments in which he tested people's preferred viewing distance for a photograph. He expected it to be correlated with the focal length of the shot - so that we position ourselves at the center of projection. But what he found instead was that people adjusted the viewing distance according to the size of the image only.
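For context, the center of projection of a print sits at the focal length scaled up by the enlargement from sensor to print. Here is a small sketch of that relation; the full-frame sensor and the 20cm print width are assumptions of mine, just to get concrete numbers, and they show how strongly the "correct" distance would depend on focal length:

    def center_of_projection_distance_mm(focal_length_mm, sensor_width_mm, print_width_mm):
        # Viewing distance that puts the eye at the picture's center of projection:
        # the focal length times the enlargement factor from sensor to print.
        enlargement = print_width_mm / sensor_width_mm
        return focal_length_mm * enlargement

    # Assumed example: a 20cm-wide print from a full-frame (36mm-wide) sensor.
    for f in [24, 50, 100]:
        d = center_of_projection_distance_mm(f, 36, 200)
        print(f"{f}mm shot, 20cm print -> center of projection at ~ {d / 10:.0f} cm")

So a 24mm and a 100mm shot printed at the same size would call for very different viewing distances (roughly 13cm vs. 56cm), yet people apparently pick their distance based on the print size alone.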

The third part was about depth of field effects and their role in depth perception.
He showed some "tilt shift" images, where an artificial blur is introduced into a picture to make the scene look smaller.
The opposite effect is used in cinematography, where miniature scenes are (or maybe more accurately were) lit very strongly so that it is possible to capture them with a very small aperture (a high f-number). This gives a deep depth of field and very crisp images, and makes the model look big.
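A rough way to see why blur encodes scale: for a fixed f-number, the defocus blur of a distant background (relative to the frame) grows as the focus distance shrinks, so heavy blur reads as "close and tiny" and near-zero blur as "far and huge". Here is a sketch using the standard thin-lens approximation (the specific lens and distances are my own example, not from the talk):

    def background_blur_mm(focal_length_mm, f_number, focus_distance_mm):
        # Approximate on-sensor blur diameter of a point at infinity when the
        # lens is focused at focus_distance_mm (thin-lens model).
        aperture_mm = focal_length_mm / f_number
        return aperture_mm * focal_length_mm / (focus_distance_mm - focal_length_mm)

    # The same 50mm f/2.8 lens, focused close (a miniature) vs. far away (a real cityscape).
    for s_m in [0.5, 2, 50]:
        b = background_blur_mm(50, 2.8, s_m * 1000)
        print(f"focused at {s_m:>4} m -> background blur ~ {b:.3f} mm on the sensor")

Focused at half a metre the background blur is about 2mm on a 36mm-wide frame, i.e. huge; at 50m it is below the usual circle of confusion and effectively sharp. Adding blur in post fakes the first situation, while flooding a miniature with light and stopping down fakes the second.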
Banks went on to explain an experiment in which subjects were shown artificially blurred versions of stereo images. The images also had strong perspective cues. When asked which amount of blur looked natural, participants had no problem picking out the original image.
Even without the stereo information, the answer seemed clear to me from just looking at the images.

This was just a very rough description of the talk, which contained many nice images that I sadly don't have. I hope I could still share some of its insights and convince you that it is worth watching once it becomes available on videolectures.

Comments

  1. Interesting. So basically our brain can automatically, and without our conscious awareness, adjust what we see according to what we already know.

    How's the weather in Vancouver? It's horrible here in Helsinki.. it's been snowing for more than two weeks now..

    Cho

  2. Weather in Vancouver is ok. Kind of rainy. But at least not as freezing cold as last year.
    I'm gonna write some more about NIPS when I come back. There is some REALLY interesting stuff happening here.
    Andy

