Google silently puts a knife into the Pixel 4

The view of the red moon through my window
Shot on a Google Pixel 4 through my living-room glass, with nothing special

The Google Pixel 4a looks like a really good phone and reminds me of the Nexus 5X in price and style. I won’t lie, the battery size and onboard storage are certainly impressive compared to the Pixel 4.

I’m still impressed with the Pixel 4’s camera, and it’s still good enough for me so far. But I’ve noticed it currently leaves me with 50% battery at the end of the day. That’s OK, but remember I’m not really going out much at the moment. No idea what it would be like when I’m out and about again.

It’s clear to me that although I like the Pixel range, I would go for something like a OnePlus phone next time around. I mean, look at the Pixel 4a vs the OnePlus Nord.

One decision I have made is that this time around I will most likely fit a new battery within the next nine months. No idea why I didn’t do it for the Pixel 2.


Computational photography, wow?

Dublin

I recently got the Night Sight upgrade on my Pixel 2 smartphone, just in time to give it a try in Dublin (in fact, I only noticed it while in Dublin).

As a Nikon D3200 digital SLR user, I’ve had a lot of respect for the lens, manual controls, etc… But the promise of computational photography is quite impressive.

Trinity College Dublin at night

Computational photography takes a swarm of data from images or image sensors and combines it algorithmically to produce a photo that would be impossible to capture with film photography or digital photography in its more conventional form. Image data can be assembled across time and space, producing super-real high-dynamic range (HDR) photos–or just ones that capture both light and dark areas well…

…But as much as computational photography has insinuated itself into all major smartphone models and some standalone digital cameras, we’re still at the beginning.
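One of the simplest forms of this is burst stacking: averaging many short exposures of the same scene to cut sensor noise, which is roughly what night modes do before the fancier alignment and tone-mapping steps. Here is a minimal sketch in Python with NumPy; the frames are simulated, and real pipelines would also align frames and merge them more robustly:

```python
import numpy as np

def stack_frames(frames):
    """Average a burst of aligned frames to reduce noise.

    This is the core idea behind burst/night-mode photography;
    alignment and tone mapping are omitted for brevity.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulated burst: a flat grey scene plus independent per-frame sensor noise.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)
burst = [scene + rng.normal(0, 10, scene.shape) for _ in range(16)]

merged = stack_frames(burst)
print((burst[0] - scene).std())  # noise level of a single frame
print((merged - scene).std())    # noticeably lower after stacking
```

With 16 frames the noise drops by roughly a factor of four (the square root of the frame count), which is why a burst of quick, noisy exposures can beat one long one.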

The images are pretty great, but that’s only the start…

The coming developments will allow 3D object capture, video capture and analysis for virtual reality and augmented reality, better real-time AR interaction, and even selfies that resemble you more closely.

The idea of the smartphone capturing much more than just the light and sound is something very different from a DSLR camera, and I don’t just mean a Lytro light-field thing. Understanding what’s in the viewfinder, where it is, at what angle, of what material, etc., clearly puts smartphones into a different category. To be fair, my DSLR doesn’t even capture the location, which is annoying when sorting out pictures later, for example when finding the pictures from when I first went to Berlin in 1999.

There is a slight concern about using cloud-based computational power, but the Pixel devices have a special onboard chip providing local processing. This seems to be the way forward right now, although I’ve heard others are hoping to do it on a server over a 5G connection. I can half see why, but as computational photography grows, the load on the servers will force companies to charge fees for server capacity. Expect extra charges added to your smartphone bill? And that’s not including the cost of sending the data (though luckily that cost is dropping).

It’s a fascinating area, but I wonder how much of the actual data will be shared with the creator/user? How much will be addressable by applications/services? Will it be locked up or funnelled out to the manufacturer, for their benefit only? I also wonder if computational photography plays havoc with the notion of the original, untouched image.

Updated… It’s clear that although I’ve been talking about computational photography, it should actually be called computational capture.