The exciting future of photography

There’s a cool discussion over at Photocritic regarding the future of photography. Are revolutionary changes in the pipeline? Or will current concepts just evolve?

I think it’ll be a matter of increased data collection. In the same way that a RAW photo currently (kinda) stores an extra two stops of exposure data, I think future sensors will be able to capture far more than just the final image. Quickly taking a range of exposures wouldn’t be a problem with current technology, and some decent post-processing software would be able to even out any differences due to motion blur. Couple this with the plenoptic camera, which can refocus after the event, and you’ve got a quasi-3D recreation of the scene inside your computer, so you can compose your photo after the fact. Would this detract from the skill of taking photographs ‘in the field’? Possibly, in some respects. But so what?
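The bracket-and-merge half of this is already doable with off-the-shelf tools. Here’s a minimal sketch using OpenCV’s HDR pipeline, assuming three hypothetical bracketed JPEGs and their shutter speeds:

```python
import cv2
import numpy as np

# Hypothetical bracketed shots of the same scene at different shutter speeds.
paths = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
times = np.array([1/250, 1/60, 1/15], dtype=np.float32)  # exposure times in seconds

images = [cv2.imread(p) for p in paths]

# Nudge the frames into alignment to compensate for small hand movement between shots.
cv2.createAlignMTB().process(images, images)

# Merge the bracket into a single high-dynamic-range radiance map...
hdr = cv2.createMergeDebevec().process(images, times)

# ...then tone-map it back down to something an ordinary screen can display.
ldr = cv2.createTonemap(2.2).process(hdr)
cv2.imwrite("merged.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```

Handling real subject motion would obviously need something smarter than a global alignment step, but the principle is the same.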

It brings up a point I’ve been wondering about: is there any physical reason why current CCDs require roughly the same exposure times as film? Is it that they became commercially viable once the technology reached the level of film, or is there more to it? Can we expect future CCDs to be incredibly light-sensitive? Will grainy low-light photography become a thing of the past? Or is it a physics thing – we simply need a certain number of photons to resolve an image? My limited physics knowledge suggests that limit is still a way off, but I could be wrong. Could future CCDs capture the full dynamic range available to the eye?
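There is a hard floor here, and it’s photon statistics: light arrives as discrete photons, so even a perfect sensor sees Poisson ‘shot’ noise, and the best possible signal-to-noise ratio for N collected photons is √N. A back-of-envelope sketch:

```python
import math

# Shot noise: photon arrivals are Poisson-distributed, so for N photons the
# noise is sqrt(N) and the best achievable signal-to-noise ratio is sqrt(N).
for photons in (100, 1_000, 10_000, 100_000):
    snr = math.sqrt(photons)
    print(f"{photons:>7} photons per pixel  ->  SNR ~ {snr:6.1f}  (~{20 * math.log10(snr):.0f} dB)")
```

Roughly speaking, the grain in low-light shots is that statistical noise made visible; gathering more photons – bigger pixels, faster lenses, longer exposures – is the only way around it.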

A commenter brought up an interesting extension to this, something that I haven’t thought about for a while: what’s with film and CCDs being worse than the eye anyway? Admittedly the eye isn’t actually as good as we think – what we see as a high-dynamic-range visual field has as much to do with our visual system continually re-exposing and filling in the scene as with the raw sensitivity of the retina – but nevertheless it’s still superior to film or CCDs. I think. A modern CCD can handle, what, seven stops of exposure in a single image? Could this be increased? Hell, what’s to stop us eventually putting rod and cone cells onto a sensor? In an octopus-like manner, of course, with the photoreceptors facing the light rather than wired back-to-front; the human setup would be silly.
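For a sense of scale, each stop is a doubling of light, so dynamic range in stops translates directly into a contrast ratio:

```python
# Each stop doubles the light, so n stops of dynamic range is a 2**n : 1 contrast ratio.
for stops in (7, 10, 14, 20):
    print(f"{stops:>2} stops  ->  {2 ** stops:>9,} : 1")
```

Seven stops is about 128:1; estimates for the eye, once you let it adapt across a scene, run to twenty stops or more – roughly a million to one.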

Other commenters have ideas regarding photography as a social tool. Future photos will no doubt contain both time and GPS data. Flickr already supports searching by tag, place and time. A massive distributed network of such information would be a powerful tool against crime, and for finding wonderful things. Add video and sophisticated face-recognition software into the mix and things go mental. There’d be implications for privacy, as well as for how much coolness the brain can handle.
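The time-and-place half of this is already sitting inside most photo files as EXIF metadata. A minimal sketch of digging it out of a JPEG with the Python Imaging Library (the filename is made up):

```python
from PIL import Image, ExifTags

def photo_metadata(path):
    """Pull the capture time and raw GPS block (if present) out of a photo's EXIF data."""
    raw = Image.open(path)._getexif() or {}
    # Translate numeric EXIF tag IDs into readable names like 'DateTimeOriginal'.
    exif = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}
    return {
        "taken": exif.get("DateTimeOriginal"),  # e.g. '2007:06:30 14:21:05'
        "gps": exif.get("GPSInfo"),             # raw GPS IFD, still needs decoding into degrees
    }

print(photo_metadata("holiday_snap.jpg"))  # hypothetical file
```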

I don’t think any of this is way out there. Some of it is way closer than the horizon. I think it’s a very exciting time to be alive and into photography.