Pixel 6 magic eraser, pushed to the limit

I posted a quick picture on Mastodon of my Google Pixel 4, edited with the new magic eraser feature on my Google Pixel 6.

Pixel 6 image

Here is the original shot, no edits, no filters, taken in my living room as I set up my Pixel 6.

This is the same picture after just quickly wiping my finger over the Chromebook at the top right.

I guess I could have tried the other objects, but I thought the reflection in my Pixel 4 would have looked very strange. The nice thing is I can go back and make that change at any time. So here is that picture.

Pixel 6 magic

If you hadn’t seen the other pictures, you might think the reflection is from objects much further away, but knowing the facts, it looks a bit strange.

Magic eraser looking strange

Finally, the magic eraser can only go so far, and you won’t get away with this picture at all.

Regardless of everything, it’s super fast; it took longer for me to resize the photos (I reduced them down by 5x) on my laptop than to use the tool. Computational photography has certainly stepped up a gear since my Pixel 2 days. I look forward to removing all those people who photobomb my photos.
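(The resize itself is nothing fancy; if you wanted to script that kind of 5x downscale, a minimal Pillow sketch might look like this. The folder and file naming are made up for illustration.)

```python
from pathlib import Path
from PIL import Image

# Shrink every JPEG in a folder to a fifth of its original size.
# "pixel6-shots" is a hypothetical folder name; adjust to taste.
for path in Path("pixel6-shots").glob("*.jpg"):
    img = Image.open(path)
    img = img.resize((img.width // 5, img.height // 5), Image.LANCZOS)
    img.save(path.with_stem(path.stem + "-small"))
```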

Google silently puts a knife into the Pixel 4

The view of the red moon through my window
Shot on a Google Pixel 4 through my living room glass, nothing special

The Google Pixel 4a looks like a really good phone and reminds me of the Nexus 5X in price and style. I won’t lie, the battery size and onboard storage are certainly impressive compared to the Pixel 4.

I’m still impressed with the Pixel 4’s camera, and it’s still good for me so far. But I noticed it’s currently leaving me with 50% battery at the end of the day. That’s OK, but remember I’m not really going out much at the moment. No idea what it would be like when I’m out and about again.

It’s clear to me that although I like the Pixel range, I would go for something like a OnePlus phone next time around. I mean, look at the Pixel 4a vs the OnePlus Nord.

One decision I have made is that this time around I will most likely fit a new battery within the next 9 months. No idea why I didn’t do it for the Pixel 2.


Computational photography, wow?

Dublin

I recently got the Night Sight upgrade on my Pixel 2 smartphone, just as I got the chance to give it a try in Dublin (I only noticed it in Dublin).

As a Nikon D3200 digital SLR user, I’ve had a lot of respect for the lens, manual controls, etc… But the promise of computational photography is quite impressive.

Trinity College Dublin at night

Computational photography takes a swarm of data from images or image sensors and combines it algorithmically to produce a photo that would be impossible to capture with film photography or digital photography in its more conventional form. Image data can be assembled across time and space, producing super-real high-dynamic range (HDR) photos, or just ones that capture both light and dark areas well…

…But as much as computational photography has insinuated itself into all major smartphone models and some standalone digital cameras, we’re still at the beginning.
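To make the HDR idea concrete, here is a toy sketch of bracketed exposure fusion: weight each pixel by how well exposed it is in each frame, then average across the stack. This is just an illustration with numpy, not Google’s actual HDR+ pipeline, which also aligns, denoises, and tone-maps the frames.

```python
import numpy as np

def fuse_exposures(frames):
    """Toy exposure fusion: favour pixels close to mid-grey (well
    exposed) in each bracketed frame, then take the weighted
    average across the stack."""
    stack = np.stack([f.astype(np.float64) / 255.0 for f in frames])
    # Gaussian "well-exposedness" weight centred on mid-grey (0.5)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    fused = (weights * stack).sum(axis=0)
    return np.clip(fused * 255, 0, 255).astype(np.uint8)

# Three simulated exposures of the same tiny 8-bit greyscale scene
scene = np.random.rand(4, 4)
dark, mid, bright = (np.clip(scene * s, 0, 1) * 255 for s in (0.3, 1.0, 3.0))
print(fuse_exposures([dark, mid, bright]))
```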

The images are pretty great, but that’s only the start…

The coming developments will allow 3D object capture, video capture and analysis for virtual reality and augmented reality, better real-time AR interaction, and even selfies that resemble you more closely.

The idea of the smartphone capturing much more than just the light and sound is something very different from a DSLR camera, and I don’t just mean a Lytro light-field thing. Understanding what’s in the viewfinder, where it is, what angle, what material, etc. clearly puts them in a different category. To be fair, my DSLR doesn’t even capture the location, which is annoying when sorting out pictures later, for example finding the pictures from when I first went to Berlin in 1999.

There is a slight concern about using cloud-based computational power. But with the Pixel devices there is a special onboard chip (the Pixel Visual Core) providing local processing. This seems to be the way forward right now, although I hear others are hoping to do it over a 5G connection to a server. I can half see why you would do that, but as computational photography increases, the load on the servers will force companies to charge fees for server capacity. Expect extra charges added to your smartphone bill. That’s not including the cost of sending the data (though luckily those costs are dropping).

It’s a fascinating area, but I wonder how much of the actual data will be shared with the creator/user? How much will be addressable by applications/services? Will it be locked up or funnelled out to the manufacturer, for their benefit only? I also wonder if computational photography plays havoc with the notion of the original untouched image.

Updated… It’s clear that although I’ve been talking about computational photography, it should really be called computational capture.