Think it’s hard to adapt your content to mobile, tablet, and desktop? Just wait until you have to ask how this will also look on the smart TV. Or the refrigerator door. Or on the bathroom mirror.
They’re all coming…if they aren’t already here. It doesn’t take much imagination or deep reading of the tech press to know that in 2013 more and more devices will connect to the internet and become another way for people to consume it.
We’ll see the first versions of Google’s Project Glass in 2013. A set of smart glasses will put the internet on a user’s eyes for the first time. Reaction to early sneak peeks is a mix of mockery and amazement, mostly depending on your propensity for tech lust. We don’t know much about them, other than some tantalizing video, but Google is making them, so it’s a safe bet that Chrome For Your Eyes will be in there. And that means some news organization in 2013 is going to ask: “How does this look jammed right into a user’s eyeballs?”
Stop! Nieman Lab is forgetting something major! And I could argue they are still thinking in a publishing/broadcasting mindset.
Yes, the C word: Context…
Ironically, this is something Robert Scoble actually gets in his blog post, The coming automatic, freaky, contextual world and why we’re writing a book about it:
A TV guide that shows you stuff to watch. Automatically. Based on who you are. A contextual system that watches Gmail and Google Calendar and tells you stuff that it learns. A photo app that sends photos to each other automatically if you photograph them together. And then there’s the Google Glasses (AKA Project Glass) that will tell you stuff about your world before you knew you needed to know. There is a new toy coming this Christmas that will entertain your kids and change depending on the context they are in (it will know it’s a rainy day, for instance, and will change their behavior accordingly).
Context is what’s missing. In the mindset of pushing content around (broadcast and publishing) and into people’s faces, responsive design sounds like a good idea. As soon as you add context to the mix, it doesn’t sound so great. Actually, it sounds downright annoying, or even intrusive. I do understand it’s the best we’ve got right now, but as sensors become more common, we’ll finally be able to understand context and hopefully be able to build perceptive systems.
As we’ve already demonstrated, sensors don’t have to be cameras, gyroscopes, etc. The referrer, operating system, screen resolution, cookies, etc. are all bits of data which can (some maybe less than others) be used to understand context.
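To make the point concrete, here’s a minimal sketch of the idea: treating ordinary request data as “sensors.” The function name and the crude heuristics are purely hypothetical, not from any real framework; the point is only that signals like the user agent and referrer already carry contextual hints.

```python
# Hypothetical sketch: inferring rough context from HTTP request headers.
# The rules below are illustrative guesses, not a production heuristic.

def infer_context(headers: dict) -> dict:
    """Derive coarse context signals from request headers."""
    ua = headers.get("User-Agent", "").lower()
    referrer = headers.get("Referer", "").lower()

    # The user agent hints at the device in hand.
    if "iphone" in ua or "android" in ua or "mobile" in ua:
        device = "mobile"
    elif "ipad" in ua or "tablet" in ua:
        device = "tablet"
    else:
        device = "desktop"

    # The referrer hints at intent: arriving from a search engine suggests
    # task-driven browsing; from a social site, casual browsing.
    if "google." in referrer or "bing." in referrer:
        intent = "searching"
    elif "twitter." in referrer or "facebook." in referrer:
        intent = "social"
    else:
        intent = "direct"

    return {"device": device, "intent": intent}

print(infer_context({
    "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X)",
    "Referer": "https://twitter.com/somebody/status/1",
}))
# → {'device': 'mobile', 'intent': 'social'}
```

Even something this crude shows why context changes the design question: the same article arriving from a search query on a phone and from a social feed on a desktop probably deserves different treatment, not just a reflowed layout.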
I can come up with many scenarios where the responsive part gets in the way, unless you are also considering the context. In a few years’ time, we’ll look back at this period and laugh, wondering what the heck we were thinking…
I’m with Scoble on this one… Context and Content are the Queen and King.