Computational photography, wow?

Dublin

I recently got the Night Sight upgrade on my Pixel 2 smartphone, just in time to give it a try in Dublin (it was only in Dublin that I actually noticed it).

As a Nikon D3200 digital SLR user, I’ve had a lot of respect for the lens, manual controls, etc… But the promise of computational photography is quite impressive.

Trinity College Dublin at night

Computational photography takes a swarm of data from images or image sensors and combines it algorithmically to produce a photo that would be impossible to capture with film photography or digital photography in its more conventional form. Image data can be assembled across time and space, producing super-real high-dynamic range (HDR) photos–or just ones that capture both light and dark areas well…

…But as much as computational photography has insinuated itself into all major smartphone models and some standalone digital cameras, we’re still at the beginning.
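
To give a flavour of what assembling image data across time actually means, here is a minimal Python sketch of the multi-frame idea behind night modes and HDR. It is only an illustration under big assumptions: the burst frames are already aligned, and the helper names are made up rather than anything from the Pixel’s real pipeline.

```python
# Toy illustration of the multi-frame idea behind night modes and HDR:
# merge a burst of noisy, underexposed frames into one cleaner, brighter image.
# Real pipelines also align frames, reject ghosts and tone-map far more
# carefully; this sketch assumes the frames are already aligned.
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames to cut noise (hypothetical helper)."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

def tone_map(image, gain=4.0):
    """Brighten the merged frame and compress highlights with a simple curve."""
    boosted = np.clip(image * gain / 255.0, 0.0, None)
    return np.clip(255.0 * boosted / (1.0 + boosted), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    true_scene = rng.uniform(0, 60, size=(480, 640))            # a dark scene
    burst = [np.clip(true_scene + rng.normal(0, 20, true_scene.shape), 0, 255)
             for _ in range(8)]                                  # 8 noisy exposures
    result = tone_map(merge_burst(burst))
    print(result.shape, result.dtype)                            # (480, 640) uint8
```

Real pipelines do far more (robust alignment, ghost rejection, learned denoising), but the averaging step is where the extra light and detail come from.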

The images are pretty great, but that’s only the start…

The coming developments will allow 3D object capture, video capture and analysis for virtual reality and augmented reality, better real-time AR interaction, and even selfies that resemble you more closely.

The idea of the smartphone capturing much more than just the light and sound is something very different from a DSLR camera, and I don’t just mean a Lytro light-field thing. Understanding what’s in the viewfinder, where it is, at what angle, what material it’s made of, etc. clearly puts them into a different category. To be fair, my DSLR doesn’t even capture the location, which is annoying when sorting out pictures later, for example finding the pictures from when I first went to Berlin in 1999.
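
On the location point, here is a quick sketch (again, just an illustration) of reading back the GPS position a smartphone writes into a JPEG’s EXIF data, using Pillow’s getexif API. It assumes a reasonably recent Pillow, and the filenames are made up; a DSLR shot taken without a GPS unit simply has no GPS block to read.

```python
# Read the GPS position a smartphone embeds in a JPEG's EXIF data.
# Filenames below are hypothetical; a non-geotagged DSLR photo returns None.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_coords(path):
    """Return (latitude, longitude) in decimal degrees, or None if untagged."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)          # 0x8825 is the GPSInfo IFD
    if not gps_ifd:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    def to_degrees(dms, ref):
        d, m, s = (float(x) for x in dms)
        value = d + m / 60.0 + s / 3600.0
        return -value if ref in ("S", "W") else value

    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(gps_coords("pixel2_trinity_college.jpg"))   # e.g. (53.344..., -6.254...)
print(gps_coords("nikon_d3200_berlin.jpg"))       # likely None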

There is a slight concern about using cloud-based computational power, but the Pixel devices have a special onboard chip (the Pixel Visual Core) providing local processing. This seems to be the way forward right now, although I’ve heard others are hoping to do it over a 5G connection to a server. I can half see why you’d do it, but as computational photography increases, the load on the servers will force companies to charge fees for server capacity. Expect extra charges added to your smartphone bill? That’s not including the cost of sending the data (though luckily those costs are dropping).

It’s a fascinating area, but I wonder how much of the actual data will be shared with the creator/user? How much will be addressable by applications/services? Will it be locked up or funnelled out to the manufacturer, for their benefit only? I also wonder if computational photography plays havoc with the notion of the original, untouched image?

Updated… It’s clear that although I’ve been talking about computational photography, it should really be called computational capture.