Computational photography is just the start

Tree scene with sunlight
Far Cry 5 / A Run in the Park

I found it interesting to read how virtual photography, taking photos in videogames, could be imaging's next evolution. A while ago I mentioned how stunning computational photography was when using my Google Pixel 2's Night Sight mode.

There's a project BBC R&D has been working on for a while which fits directly into the frame of computational media. We have named it REB, or Render Engine Broadcasting. As with OBM (object-based media), there's a lot of computational work in the production of media, but I think there are far more interesting research questions aimed at the user/client/audience side.

It's clear computational media is going to be a big trend in the next few years (if not right now). You may have heard about deepfakes in the news, and that's just one end of the scale. Have a look through this Flickr group. It's worth remembering that HDR (high dynamic range) is an early, widely accepted type of computational imaging. I expect in-game/virtual photography is next, which is why I've shown in-game photography to make the point about where we go next.

Hellblade: Senua's Sacrifice / Up There

It's clear that, just as we assume every picture we see has been photoshopped, we will have to assume all media has been modified, computed or even completely generated. Computational capture and machine vision/learning really are things we have to grapple with. Media literacy and tools to more easily identify computational media are what's missing. But the computational genie is out of the bottle and can't be put back.

There are also many good things about computational media, beyond the sheer consumption.

While I cannot deny that my real world photography experience aids my virtual photography through the use of compositional techniques, directional lighting, depth of field, etc. there is nothing that you cannot learn through experience. In fact, virtual photography has also helped to develop my photography skills outside of games by enabling me to explore styles of imagery that I would not normally have engaged with. Naturally, my interest in detail still comes through but in the virtual world I have not only found a liking for portraiture that I simply don’t have with real humans, but can also conveniently experiment with otherwise impractical situations (where else can you photograph a superhero evading a rocket propelled grenade?) or capture profound emotions rarely exhibited openly in the real world!

Virtual photography has begun to uncover a huge wealth of artistic talent as people capture images of the games they love, in the way they interpret them; how you do it really is up to you.

It's a new type of media, with a new sensibility and a new type of craft…

Of course, it's not all perfect.

https://twitter.com/iainthomson/status/1165755171923587072

A blast from the past: Persistence of Vision Raytracer

POV-Ray rendering of glasses

I was listening to FLOSS Weekly with the person who created POV-Ray (the Persistence of Vision Raytracer). It was amazing to listen to because, a long time ago, I used to run it on my old Atari ST. At the time I had no access to anything else, and frankly everything else was simply crap compared to POV-Ray's efforts. I believe there were only about four 3D rendering programs on the Atari 16-bit platform, and the ability to write images and animations using a simple text editor was insane, but ever so useful at the time. Much later I built my first PC, a 233MHz beast, and POV-Ray was one of the benchmark programs I used to prove the investment to myself. I could only dream how fast scenes would render on my current workstation and laptop.

In the podcast, the author of POV-Ray talks about how he made the software freeware and wrote a basic licence saying you're welcome to modify it, but if you do make a change, please send it back to the author. This was before the term open source existed and even before the web had taken hold, so POV-Ray was distributed on floppy discs, CDs and BBSes. It was written before licences like BSD, GPL and Apache were common, although POV-Ray 4 is going to be rewritten under the GPL 3 licence.

POV-Ray isn't dead either; they're starting to add some much-needed features like native multiprocessor support. In the past you would specify one part of the final image to render on one machine/CPU and another part on another machine/CPU. This may sound bizarre for a heavy-duty raytracing engine, but when you had a room full of computers, as we sometimes did at college, it meant we could run renders at sizes like 1600×1200 by splitting the picture into four 800×600 pieces, which then ran across four Pentium P133 machines.
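That poor-man's render farm can be sketched with POV-Ray's partial-render command-line options (+SC/+EC for start/end column, +SR/+ER for start/end row). The script below just prints the four commands you would run, one per machine; the scene file name and output names are placeholders:

```shell
#!/bin/sh
# Sketch: tile a 1600x1200 POV-Ray render into four 800x600 panes,
# one command per machine, using +SC/+EC (columns) and +SR/+ER (rows).
# "scene.pov" and the tile output names are hypothetical.
W=1600; H=1200
for tile in "1 800 1 600" "801 1600 1 600" "1 800 601 1200" "801 1600 601 1200"
do
  set -- $tile   # $1=start col, $2=end col, $3=start row, $4=end row
  echo "povray +Iscene.pov +W$W +H$H +SC$1 +EC$2 +SR$3 +ER$4 +Otile_$1_$3.png"
done
```

Each machine renders only its quarter of the full frame, and the four output tiles are stitched back together afterwards.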

The other thing I loved about POV-Ray was its realism. For years and years I argued that the results from 3D Studio Max, Lightwave and the like were poor compared to POV-Ray's. The main reason was that those applications used to scanline-render their results rather than raytrace them, which is also why POV-Ray took so long to render scenes like the one above. And for the hardcore, POV-Ray also had true radiosity support.

Actually writing POV-Ray scenes involves picturing the 3D space in your mind and then placing things within that space. We used to sketch things out on graph paper and then translate them into the C-like syntax. It sounds more difficult than it actually is, and before long you're up and running. I just wish I could find some of my old scenes. Oh, and the language is Turing-complete, with support for macros and loops, so much of the time you can program effects using maths and logic rather than placing everything by hand.
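To give a flavour of that C-like syntax, here's a minimal, illustrative scene file (not one of my old scenes, and all names and values are made up) that uses a #while loop to place a row of spheres rather than positioning each one by hand:

```pov
// Hypothetical POV-Ray scene: five spheres placed by a loop.
camera { location <0, 2, -8> look_at <0, 1, 0> }
light_source { <10, 20, -10> color rgb 1 }
plane { y, 0 pigment { checker rgb 1, rgb 0.2 } }

#declare I = 0;
#while (I < 5)
  sphere {
    <I * 2 - 4, 1, 0>, 0.8             // position computed from the loop index
    pigment { color rgb <I / 4, 0.2, 1 - I / 4> }  // colour fades across the row
    finish { phong 0.8 reflection 0.2 }
  }
  #declare I = I + 1;
#end
```

The whole scene is plain text, so it could be written in any editor, which is exactly what made it usable on the Atari ST.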
