Computational photography is just the start

Tree scene with sunlight
Far Cry 5 / A Run in the Park

I found it interesting to read how virtual photography (taking photos in videogames) could be imaging’s next evolution. A while ago I mentioned how stunning computational photography was when using my Google Pixel 2’s Night Sight mode.

There’s a project BBC R&D has been working on for a while which fits directly into the frame of computational media. We have named it REB, or Render Engine Broadcasting. As with OBM (object-based media), there’s a lot of computation used in the production of media, but I think there are far more interesting research questions aimed at the user/client/audience side.

It’s clear computational media is going to be a big trend in the next few years (if not already). You may have heard about deepfakes in the news, and that’s just one end of the scale. Have a look through this Flickr group. It’s worth remembering that HDR (high dynamic range) is an early, widely accepted type of computational photography. I expect in-game/virtual photography is next, which is why I’ve shown in-game photography to make the point of where we go next.

Hellblade: Senua's Sacrifice / Up There

It’s clear that, just as we assume every picture we see has been photoshopped, we will have to assume all media has been modified, computed or even completely generated. Computational capture and machine vision/learning really are things we have to grapple with. Media literacy and tools to more easily identify computational media are what’s missing. But the computational genie is out of the bottle and can’t be put back.

There are many good things about computational media too, beyond sheer consumption.

While I cannot deny that my real world photography experience aids my virtual photography through the use of compositional techniques, directional lighting, depth of field, etc. there is nothing that you cannot learn through experience. In fact, virtual photography has also helped to develop my photography skills outside of games by enabling me to explore styles of imagery that I would not normally have engaged with. Naturally, my interest in detail still comes through but in the virtual world I have not only found a liking for portraiture that I simply don’t have with real humans, but can also conveniently experiment with otherwise impractical situations (where else can you photograph a superhero evading a rocket propelled grenade?) or capture profound emotions rarely exhibited openly in the real world!

Virtual photography has begun to uncover a huge wealth of artistic talent as people capture images of the games they love, in the way they interpret them; how you do it really is up to you.

It’s a new type of media, with a new sensibility and a new type of craft…

Of course it’s not all perfect.

https://twitter.com/iainthomson/status/1165755171923587072

Computational photography, wow?

Dublin

I recently got the Night Sight upgrade on my Pixel 2 smartphone, just as I got the chance to give it a try in Dublin (I only noticed it there).

As a Nikon D3200 digital SLR user, I’ve had a lot of respect for the lens, manual controls, etc… But the promise of computational photography is quite impressive.

Trinity College Dublin at night

Computational photography takes a swarm of data from images or image sensors and combines it algorithmically to produce a photo that would be impossible to capture with film photography or digital photography in its more conventional form. Image data can be assembled across time and space, producing super-real high-dynamic range (HDR) photos–or just ones that capture both light and dark areas well…

…But as much as computational photography has insinuated itself into all major smartphone models and some standalone digital cameras, we’re still at the beginning.

The images are pretty great, but that’s only the start…

The coming developments will allow 3D object capture, video capture and analysis for virtual reality and augmented reality, better real-time AR interaction, and even selfies that resemble you more closely.
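The frame-stacking idea described in the quote above can be sketched in a few lines. This is a toy illustration only, assuming perfectly aligned frames and simple Gaussian noise; real burst pipelines such as Night Sight also align, weight and tone-map the frames:

```python
import numpy as np

def merge_frames(frames):
    """Average an aligned burst of noisy frames to reduce noise."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a dim scene captured as a 16-frame noisy burst.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 20.0)   # the "true" low-light signal
burst = [scene + rng.normal(0, 5, scene.shape) for _ in range(16)]

merged = merge_frames(burst)
single_err = np.abs(burst[0] - scene).mean()
merged_err = np.abs(merged - scene).mean()
# Averaging 16 frames cuts the noise by roughly sqrt(16) = 4x.
```

Averaging is the simplest case; the same principle of combining many exposures across time underpins HDR merging and low-light modes.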

The idea of the smartphone capturing much more than just light and sound is something very different from a DSLR camera, and I don’t just mean a Lytro light-field thing. Understanding what’s in the viewfinder, where it is, at what angle, of what material, etc. clearly puts smartphones into a different category. To be fair, my DSLR doesn’t even capture the location, which is annoying when sorting out pictures later, for example finding the pictures from when I first went to Berlin in 1999.

There is a slight concern about using cloud-based computational power, but the Pixel devices have a special onboard chip providing local processing. This seems to be the way forward right now, although I hear others are hoping to do it on a server via a 5G connection. I can half see why, but as computational photography increases, the load on the servers will force companies to charge fees for capacity. Expect extra charges added to your smartphone bill? And that’s not including the cost of sending the data (though luckily those costs are dropping).

It’s a fascinating area, but I wonder how much of the actual data will be shared with the creator/user. How much will be addressable by applications/services? Will it be locked up or funnelled out to the manufacturer, for their benefit only? I also wonder if computational photography plays havoc with the notion of the original, untouched image.

Update… It’s clear that although I’ve been talking about computational photography, it should actually be called computational capture.