Pixel6 magic eraser, pushed to the limit

I posted a quick picture on Mastodon of my Google Pixel 4, edited using my new Google Pixel 6's magic eraser feature.

Pixel 6 image

Here is the original shot, no edits, no filters, taken in my living room as I set up my Pixel 6.

This is the same picture after just quickly wiping my finger over the Chromebook at the top right of the frame.

I guess I could have tried removing the other objects, but I thought the reflection in my Pixel 4 would have looked very strange. The nice thing is I can go back and make that change at any time. So here is that picture.

Pixel 6 magic

If you hadn't seen the other pictures, you might think the reflection is from objects much further away, but knowing the original, it looks a bit strange.

magic erase looking strange

Finally, magic eraser can only go so far; you won't get away with this picture at all.

Regardless, it's super fast; it took longer for me to resize the photos (I reduced them by 5x) on my laptop than to use the tool. Computational photography has certainly stepped up a gear since my Pixel 2 days. I look forward to removing all those people who photobomb my photos.

Computational photography is just the start

Tree scene with sunlight
Far Cry 5 / A Run in the Park

I found it interesting to read how Virtual Photography: taking photos in videogames could be imaging's next evolution. A while ago I mentioned how stunning computational photography was when using my Google Pixel 2's night sight mode.

There's a project BBC R&D have been working on for a while which fits directly into the frame of computational media. We have named it REB, or Render Engine Broadcasting. As with OBM (object-based media), there's a lot of computational use in the production of media, but I think there's a ton of more interesting research questions aimed at the user/client/audience side.

It's clear computational media is going to be a big trend in the next few years (if not already). You may have heard about deepfakes in the news, and that's just one end of the scale. Have a look through this Flickr group. It's worth remembering that HDR (high dynamic range) is an early, widely accepted type of computational photography. I expect in-game/virtual photography is next, hence why I've shown in-game photography to make the point of where we go next.

Hellblade: Senua's Sacrifice / Up There

It's clear that, just as we assume every picture we see has been photoshopped, we will have to assume all media has been modified, computed or even completely generated. Computational capture and machine vision/learning really are things we have to grapple with. Media literacy and tools to more easily identify computational media are what is missing. But the computational genie is out of the bottle and can't be put back.

There are also many good things about computational media, beyond sheer consumption.

While I cannot deny that my real world photography experience aids my virtual photography through the use of compositional techniques, directional lighting, depth of field, etc. there is nothing that you cannot learn through experience. In fact, virtual photography has also helped to develop my photography skills outside of games by enabling me to explore styles of imagery that I would not normally have engaged with. Naturally, my interest in detail still comes through but in the virtual world I have not only found a liking for portraiture that I simply don’t have with real humans, but can also conveniently experiment with otherwise impractical situations (where else can you photograph a superhero evading a rocket propelled grenade?) or capture profound emotions rarely exhibited openly in the real world!

Virtual photography has begun to uncover a huge wealth of artistic talent as people capture images of the games they love, in the way they interpret them; how you do it really is up to you.

It's a new type of media, with a new sensibility and a new type of craft…

Of course it's not all perfect.

https://twitter.com/iainthomson/status/1165755171923587072