The changes we need are not going to come from Apple & Google

Android 16 vs. iOS 26: Why Apple's redesign falls flat

It’s been all over the news recently.

Apple’s stunning ‘Liquid Glass’ design could change everything, and there’s Android’s Material 3 redesign too.

I have lots of thoughts about these user interface changes from a design and UI point of view (most of which has been said elsewhere). However, my biggest thought is about the underlying problems of our smartphones and our tired notions of how we use them (especially since finishing The Anxious Generation by Jonathan Haidt and The Chaos Machine by Max Fisher).

After all the hype and attention, I did find the book reasonable. There are parts I questioned and frowned at, but generally it’s not as groundbreaking as the press made out.

(comment on The Anxious Generation)

It strikes me that these redesigns are the proverbial pig with a touch of lipstick.

The whole way we use smartphones is broken. I’m not that excited about glassy or blobby elements; it feels like we’re not getting to the root of the issue. The abuse of user/owner data, the lack of user/owner agency and the mass surveillance of millions of people through their smartphones can’t go on as it has…

I’d love to see a new paradigm, in the same way both companies have tried to tackle the huge rise in smartphone thefts. It’s not like we don’t have the technology to provide advanced protections for user data; rather, each one (Apple & Google) benefits from access to the data. I know people will say, yes Google, but doesn’t Apple protect user data access? Likely they do, but being the gatekeeper to your data means they can also offer it to trusted parties.

However, this isn’t about that question. I’m questioning why so much work has gone into the UI and not into reconsidering the problems of how we use our phones.

What are Android App permissions, and how do devs implement them?
Although old, this still feels clunky and could be handled much better with tighter integration into the operating system.
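For a sense of what that clunkiness looks like from the developer side, here is a minimal Kotlin sketch of requesting a single runtime permission with the Activity Result API. It assumes the permission is also declared in the app manifest, and showCameraFeature/showDeniedMessage are placeholder functions I’ve made up for illustration:

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class CameraActivity : AppCompatActivity() {

    // Launcher for the system permission dialog; the callback only tells us
    // whether the owner tapped allow or deny.
    private val requestCamera =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) showCameraFeature() else showDeniedMessage()
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Check first, then ask: the blunt allow/deny moment being complained about.
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) ==
            PackageManager.PERMISSION_GRANTED
        ) {
            showCameraFeature()
        } else {
            requestCamera.launch(Manifest.permission.CAMERA)
        }
    }

    // Placeholder hooks, not part of any Android API.
    private fun showCameraFeature() { /* ... */ }
    private fun showDeniedMessage() { /* ... */ }
}
```

The owner gets one yes/no dialog, with no way to scope or negotiate what the app does with the camera afterwards, which is really the point of the complaint.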

Take scoped storage as an example. It limits an application to a certain space on the file system. iOS and now Android support this, but it’s a little clunky and almost encourages the owner of the phone to just accept everything (this old thread highlights the problem).
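To make the scoped storage point concrete, here is a minimal Kotlin sketch (my own example, not from the post): inside its sandbox an app can write without any dialog at all, while anything outside that sandbox pushes the developer towards MediaStore or the system document picker, which is where the clunkiness creeps in.

```kotlin
import android.content.Context
import java.io.File

// Under scoped storage an app writes freely only inside its own app-specific
// directories; they need no permission prompt, are invisible to other apps,
// and are removed when the app is uninstalled.
fun saveNote(context: Context, fileName: String, text: String): File {
    val dir = context.getExternalFilesDir(null) ?: context.filesDir
    return File(dir, fileName).apply { writeText(text) }
}
```

Anything shared, like the Downloads folder or the photo library, means extra ceremony for the developer and another accept/decline moment for the owner.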

Android recently put more emphasis on modes (basically profiles, which have been tried over and over again). It wouldn’t be difficult to tie modes to permissions too; the difference could be the user interface. I don’t have solid answers, but I think about the (rare) times my Pixel goes into power saving mode when the battery drops below 20%. There is a visual UI clue, but it also restricts background data use. I have heard of people leaving extreme power saving mode on permanently, for many reasons.
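Apps can already read some of these modes from the system, which is part of why tying modes to permissions feels plausible. A hedged Kotlin sketch, assuming API 24+ and using only framework calls that exist today (the “reduce background work” decision itself is my own framing):

```kotlin
import android.content.Context
import android.net.ConnectivityManager
import android.os.PowerManager

// True when the phone is signalling "back off": Battery Saver is on, or
// Data Saver is restricting background data for this app (API 24+).
fun shouldReduceBackgroundWork(context: Context): Boolean {
    val power = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    val connectivity =
        context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager

    val batterySaver = power.isPowerSaveMode
    val dataRestricted = connectivity.restrictBackgroundStatus ==
        ConnectivityManager.RESTRICT_BACKGROUND_STATUS_ENABLED

    return batterySaver || dataRestricted
}
```

There is no equivalent signal for “this owner wants a privacy-restricted mode”, which is roughly the gap I’m pointing at.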

Some of you might say, so what? This isn’t permissions and data, but ultimately it’s the combination which is important. It’s almost like their aim is just to shift more new phones, regardless of the result… Of course, some of you may say, hey Ian, what would you change and how? My answer is simply that I could rethink a bunch of things, and I’m sure some of their teams already have, but as usual it’s so low on the list because it doesn’t sell phones. Or maybe they are waiting for regulation to force them to make the change?

How to Disable Gemini in Gmail and Other Google Workspace Apps - Make Tech Easier

Reflecting on the AI/Gemini changes in Android and Google services: maybe I would like to use it for a limited scope of things and accept the results won’t be as great. But my only option is accept or decline. In 2025 this is bad and needs changing; heck, I’d love for designers to take up the challenge of making this all work seamlessly, with the ability to negotiate and change the scope at any time.

We really need to see Human Data Interaction replace Human Computer Interaction now, because the status quo has become unacceptable in my eyes. Worse still, it limits what’s possible and leads to an outcome which doesn’t empower the owners/users.

Imagine XBMC with Leap…

Ever since the Microsoft Kinect was hacked to work with non-Xbox machines, XBMC hackers have been modifying their setups to support gesture control. So popular was the idea of controlling media with gestures that even the BBC adopted it in the Xbox version of iPlayer. However, the limits of the Kinect were being discovered by the XBMC hackers.

After the first rush of controlling media using your whole body came the idea of using just your arm, then finally just the hand. But the Microsoft Kinect didn’t have the sensor density to support this. Now Leap Motion have brought out their own Kinect-style solution.

XBMC users should love the Leap Motion, especially with driver support for Windows, Mac and Linux.

It supports not only fingers but even pencils and pens too: all the things needed to really make the XBMC interface amazing.

Pictures Under Glass and nothing else

Hands Manipulate things

I can’t believe I’ve not blogged about this epic rant about The Future of Interaction Design.

Tony highlighted it to me and, for once, we were in agreement… The future of interaction design is not glass interfaces with pictures/icons/pictograms under them.

There are some great examples of why pictures under glass doesn’t work…

Okay then, how do we manipulate things? As it turns out, our fingers have an incredibly rich and expressive repertoire, and we improvise from it constantly without the slightest thought. In each of these pictures, pay attention to the positions of all the fingers, what’s applying pressure against what, and how the weight of the object is balanced:

Absolutely, the range of gestures, positions and repertoire fingers can perform is dazzling.

Remember the saying about tricks: the hand is quicker than the eye.

So what is the Future Of Interaction?

The most important thing to realize about the future is that it’s a choice. People choose which visions to pursue, people choose which research gets funded, people choose how they will spend their careers.

I could jump in with a rant about how certain companies are limiting choice but I won’t…

I believe that hands are our future…

Absolutely! Three things pointing at the early future of interaction…

  1. Printable circuits
  2. Conductive Fabric
  3. Squishy Circuits