The experimental Lytro camera records “light fields” which allow you to refocus images after you’ve shot them.

AllThingsD has a decent explanation of how this works:

Lytro’s camera works by positioning an array of tiny lenses between the main lens and the image sensor, with the microlenses measuring both the total amount of light coming in as well as its direction.

This suggests a major compromise in image quality, since the sensor’s pixels are divided between spatial and directional samples, so perhaps it’s the advent of ultra-high-resolution CCDs that is making this possible now. The simulations show immense promise.
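To make the idea concrete, here’s a minimal sketch of the “shift-and-add” refocusing that light-field data makes possible: each microlens direction yields a slightly different view of the scene, and averaging those views with different shifts places the synthetic focal plane at different depths. The array layout, the alpha parameter, and the integer-pixel shifts are illustrative assumptions, not Lytro’s actual pipeline.

```python
# A minimal sketch of "shift-and-add" synthetic refocusing -- the basic idea
# behind light-field refocusing. Shapes, names, and integer shifts are
# simplifying assumptions for illustration only.
import numpy as np

def refocus(light_field, alpha):
    """Refocus a 4D light field by shifting and averaging sub-aperture views.

    light_field : array of shape (U, V, H, W) -- one H x W image per
                  microlens direction (u, v).
    alpha       : refocus parameter; each view is shifted in proportion to
                  its offset from the central view, then all views are averaged.
    """
    U, V, H, W = light_field.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Integer shifts keep the sketch dependency-free; a real
            # implementation would interpolate sub-pixel shifts.
            dy = int(round(alpha * (u - u0)))
            dx = int(round(alpha * (v - v0)))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

if __name__ == "__main__":
    lf = np.random.rand(5, 5, 64, 64)   # placeholder light field
    near = refocus(lf, alpha=1.0)       # sweeping alpha moves the focal plane
    far = refocus(lf, alpha=-1.0)       # nearer or farther, after the shot
```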

For narrative media, this creates a fascinating opportunity…

Traditional photography imposes a single point of focus on the audience. As movie-goers we are used to that restriction, and it can even be an aesthetically pleasing effect.

For editors this adds yet another dimension to our concerns.

It used to be that we worried only about the two spatial dimensions (x, y) when tweaking framing.

Then there’s the third dimension, time (t), which is the main preoccupation of traditional editors, further complicated by time-stretching tools like Twixtor.

Then there’s digital colour manipulation – another three dimensions (R, G, B).

And now we can set focus in the edit – our third spatial dimension (z), and seventh overall.*

For interactive media, we as viewers don’t want to be guided like that. In fact, we want to tell the camera where to look. By combining Lytro-like image capture with eye tracking, interactive films will be able to adjust the focal point as we glance around the screen.
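As a purely hypothetical sketch of how that might work: track the viewer’s gaze, look up the scene depth under the gaze point, and map that depth to the refocus parameter used in the sketch above. The gaze coordinates, the depth map, and the depth-to-alpha mapping are all assumptions for illustration.

```python
# A hypothetical sketch of gaze-driven refocusing: sample the depth under the
# viewer's gaze and convert it to a refocus parameter. Every name and mapping
# here is illustrative, not an existing product's API.
import numpy as np

def alpha_for_gaze(depth_map, gaze_xy, depth_to_alpha):
    """Choose a refocus parameter from where the viewer is looking.

    depth_map      : (H, W) array of scene depth per pixel.
    gaze_xy        : (x, y) gaze position in pixel coordinates.
    depth_to_alpha : callable mapping a depth value to a refocus alpha.
    """
    x, y = gaze_xy
    h, w = depth_map.shape
    # Clamp the gaze point to the frame and sample a small neighbourhood
    # so a single noisy gaze sample doesn't make the focus jitter.
    x = int(np.clip(x, 0, w - 1))
    y = int(np.clip(y, 0, h - 1))
    patch = depth_map[max(0, y - 2):y + 3, max(0, x - 2):x + 3]
    return depth_to_alpha(float(np.median(patch)))

if __name__ == "__main__":
    depth = np.random.rand(1080, 1920) * 10.0          # placeholder depths (m)
    alpha = alpha_for_gaze(depth, gaze_xy=(960, 540),
                           depth_to_alpha=lambda d: 1.0 - d / 10.0)
```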

This also means more flexible 3D and an evolution that will take us beyond the current 150-year-old paradigms.

Regardless of Lytro’s fortunes, the theory behind it is unquestionably revolutionary.

——————————

* Editors who do compositing will tell you we’ve been working in the z-dimension for a long time.