Perceptive theme park rides?

Tony tweeted me about this thrill machine which uses body data to influence how the ride operates. The link comes from Mashable and I was able to trace it back to the original source.

“…while building this attraction I also wanted to change the usual one-sided relation – a situation where the body is overwhelmed by physical impressions but the machine itself remains indifferent, inattentive for what the body goes through. Neurotransmitter 3000 should therefore be more intimate, more reciprocal. That’s why I’ve developed a system to control the machine with biometric data. Using sensors, attached to the body of the passenger – measuring his heart rate, muscle tension, body temperature and orientation and gravity – the data is translated into variations in motion. And so, man and machine intensify their bond. They re-meet in a shared interspace, where human responsiveness becomes the input for a bionic conversation.”

https://danieldebruin.com/neurotransmitter-3000

It's a good idea but unfortunately couldn't work on a rollercoaster, which is my thing. Or could it? For example, everyone's hands up in the air means what? The ride goes faster? How on earth would that work? How meaningful would it be if you could actually do this?

It's one of the research questions we attempted to explore in the Living Room of the Future: how can you combine different people's personal data to construct an experience which is meaningful and not simply an average of it all?
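To make the rollercoaster question concrete, here is a deliberately crude, hypothetical sketch (nothing from the Neurotransmitter 3000 project; the names, thresholds and weighting are all invented) of one way everyone's body data could be folded into a single "global" change to the ride:

```typescript
// Hypothetical sketch only: fold every rider's biometrics into one ride-wide
// speed modifier. Thresholds and weights are invented for illustration.

interface RiderReading {
  heartRateBpm: number; // e.g. from a wrist sensor
  handsUp: boolean;     // e.g. inferred from a wrist accelerometer
}

// Returns a 0..1 modifier: 0 = take it easy, 1 = push the ride as hard as allowed.
function globalSpeedModifier(riders: RiderReading[]): number {
  if (riders.length === 0) return 0.5; // neutral default when there is no data
  const avgHeartRate =
    riders.reduce((sum, r) => sum + r.heartRateBpm, 0) / riders.length;
  const handsUpRatio = riders.filter(r => r.handsUp).length / riders.length;
  // Crude mapping: a calmer train plus more hands in the air pushes the ride harder.
  const calmness = Math.min(1, Math.max(0, (120 - avgHeartRate) / 60));
  return Math.min(1, Math.max(0, 0.5 * calmness + 0.5 * handsUpRatio));
}
```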

These global changes don't seem that meaningful or useful. Maybe it's about the micro changes, as mentioned previously.

Of course others have been working around this type of thing too.

Join us in exploring object-based media making tools


Like visual perceptive media? Like the concept of perceptive radio, or the JavaScript libraries we have put out in an open and public way? We want you to come on board and join us…!

We (BBC R&D) have been exploring the new reality of creating object-based media through a range of prototypes. I have been exploring the implicit uses of data and sensors to change the objects; or, as we started calling it a while ago, Perceptive Media.

The big issue is that realistically creating and authoring these new types of stories requires a lot of technical knowledge and doesn't easily sit in the standard content creation workflow, or does it? We want to bring together people in a workshop format to explore the potential of creating accessible tools for authors and producers, ultimately seeding a community of practice through open experimentation and learning from each other.

The core of the workshop will focus on the question…

Is it desirable and feasible for the community of technical developers and media explorers to build an open set of tools for use by storytellers and producers?

Against the backdrop of the Sheffield International Documentary Festival, the workshop on Monday 13th June will bring together interested parties; we are putting out a call for them to work with us, with the aim of understanding how to develop tools which can benefit storytellers, designers, producers and developers.

We are calling for people, universities, startups, hackers and companies with a serious interest in opening up this area to reach out and join us. Apply for a ticket and we will be in touch.

Programmatic media sounds a bit like Perceptive Media?

Kill Bill Advertising

I swear Tony sent me a tweet with a pointer to this piece titled Programmatic Beyond Advertising: A Not-So-Distant Future in CMF trends.

It's mainly about advertising, including a bit about the just-in-time advertising space which is coming about because of the lightning speed of data and the ability to replace advertising/content on the fly.

Heard it all before but then there was this part…

…what if programmatic could be used for content other than advertising?

If we extend this thinking (and our imagination) a little further to consider the possible emergence of a new distribution method for cultural or editorial content based on programmatic logic and methods, we could ask whether these new “programmatic” models could be applied to the automated distribution of film and television content based on audiences and their data.

Based on this logic, “programmatic content distribution” could be imagined as a flow in which the data collected from users would trigger an automated rights transaction and content delivery process between right-holders and broadcasters. The final result would be the broadcasting of content corresponding to the preferences of the targeted user.

Yes indeed, this is the start of Perceptive Media, if you haven't already guessed. It's always good to hear others make the same leaps in thinking, of course…

Perceptive media in Wired magazine

Programmatic media? I don't think that will fly as a term, I'm sorry to say. Although this description sounds more like responsive media than perceptive media.

It was at Make Do Share Warsaw that I first heard Lance Weiler talk about them in quite different contexts, and it did make sense. Phil has been grouping them together as contextual media, which works as a superset of both, although I worry about previous examples of contextual media clouding things.

The next part of the article I'm less interested in, but it's something I have thought about a tiny bit…

Moreover, it would be possible to monetize this video content by attaching superimposed or pre-roll ads to it, as commonly seen on video aggregation platforms.

This valuable collection of user data and preferences for viewing a movie or television show could be done on a voluntary basis; for example, users would simply answer a few questions on their mood, the type of movie or series, and the desired language and duration so that the platform can preselect and “program” content that meets their criteria.

But we know that the Web, which is very advanced in big data collection, is already capable of gathering this data using algorithms. Users’ actions on a given site—the keywords they search for, the links they click on, their daily search history—can indicate to the platforms what type of content they are likely to be interested in.

The problem they will get is the explicit nature of the input, I feel. Yes, it's easier on the web, but the person is leaning forward and interacting most of the time anyway. When you get into the living room it gets a little more tricky, and an implicit approach is better in my mind. Yes, it can get creepy, but it doesn't break the immersion, and in my mind that's very key.

The essence of the programmatic distribution mechanism would therefore be as a recommendation super-engine, more sophisticated than that currently found on various platforms.

Why is it everybody thinks of fancy recommendation engines? If this is the ambition of the industry, I feel we should be breaking into another dimension. Hopefully some of the things I'm responsible for will match that ambition/moon shot.

Is the future of user interface design actually perceptive?

Jason Silva, in his latest Shots of Awe, talks about the paradox of choice we all face with the advances in technology and increased choice. He also mentioned the Fast Company piece about the trend towards less choice, especially in user interface design.

Companies are catching on quickly. With the realization that data is much more valuable when used with other information, protocol is increasingly being adopted to ensure that data sharing is seamless. With the explosion of both data collection and unification, we’re creating an environment that, while not fully exposed, is at least open enough for information to be meaningfully aggregated.

Taken together in four steps—collection, unification, analysis, and implementation—we have an environment where information is working for you behind the scenes to do things automatically, all in the service of letting you focus on what’s most important to you in work and life.

I have concerns about this, along with my thoughts about who is writing the software and what their opinion is.

What Jason and others are talking about is contextual design, or as I prefer, perceptive design (along with perceptive media), as context only explains half of the solution. Frankly, anticipatory design sounds like when I first talked about intrusive media; it will never find mindshare with a name like that!

I think of Apple products as anticipatory and antihacker. I remember the blog I wrote when I saw Aral talk about user experience at Thinking Digital in 2013.

Perceptive design needs to empower people with chances and experiences for mastery, not enslave them and ultimately make them feel trapped, lost and cut off from others.

Variations not versions

https://twitter.com/martynkelly/status/624266599000838150

It was Si Lumb who tweeted me about Pixar’s Inside Out contextual visuals.

Now I know this isn't anything new, I mean films have had regional differences for a long while, but it's good to see it discussed openly and it was interesting to read about how (we think) they do it.

It’s interesting to note that the bottom five entries of the list, starting with “Thai Food,” remain consistent throughout (maybe Disney/Marvel Studios’ digital wizards couldn’t replace the stuff that Chris Evans’ hand passed over), but the top items change a lot.

Which leads me to think it's all done in post-production using things like Impossible Software?

Post-producing this stuff is a mistake in my mind, but then again I'm working on the future of this kind of thing with Perceptive Media. I also imagine the writer and director had no time to think about variations for different countries, or weren't paid enough?

Rather than write up my thoughts on how to do this with digital cinema (isn't this part of the promise of digital cinema? Plus I'm writing a paper with Anna Frew about this), I thought it was about time I wrote something about the project I'm currently working on.

Visual Perceptive Media

Visual Perceptive Media is a short film which changes based on the person who is watching it. It uses data from a phone application to build a profile of the user via their music collection and some basic questions. That data is then used to inform which variations should be applied to the media when watched.

The variations are applied in real time and include different music, different colour grading, different video assets, effects and much more. We're using the Web Audio API, WebGL and other open web technologies.
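To give a flavour of what a real-time variation can look like under the hood, here's a minimal sketch using the Web Audio API to pick and fade in a music stem based on a profile. The profile shape and asset paths are my own invention for illustration, not the actual Visual Perceptive Media code:

```typescript
// Minimal sketch: choose a music stem from a (hypothetical) viewer profile and
// fade it in with the Web Audio API rather than hard-cutting.

interface ViewerProfile {
  preferredGenre: "ambient" | "electronic" | "orchestral";
}

async function playMusicVariation(profile: ViewerProfile): Promise<void> {
  const ctx = new AudioContext();

  // Hypothetical asset path; in practice the stems would come from the media object store.
  const response = await fetch(`/stems/${profile.preferredGenre}.ogg`);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  // Two-second fade-in so the variation is felt rather than noticed.
  const gain = ctx.createGain();
  gain.gain.setValueAtTime(0, ctx.currentTime);
  gain.gain.linearRampToValueAtTime(1, ctx.currentTime + 2);

  source.connect(gain).connect(ctx.destination);
  source.start();
}
```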

What makes this different or unique…?

  • We had buy-in from the scriptwriter and director (Julius Amedume was both, and amazing) right from the very start, which makes a massive difference. The scripts were written with all this in mind.
  • It was shot and edited with its intended purpose of real-time variations in mind.
  • Most things we (BBC R&D) have done in the responsive/perceptive area have been audio-based, and this I would say is a bit of a moonshot moment, like Breaking Out 3 years ago! Just what I feel the BBC should be doing.
  • Keeping with the core principle of Perceptive Media, the app, which Manchester-based startup Percepiv (previously Moment.us; I wonder if working with us had a hand in the name change?) created using their own very related technology, mainly uses implicit data to build the profile. You can check out music+personality on your own Android and iPhone now.

It's going to be very cool, and I believe the technology has gotten to the point where we can do this so seamlessly that people won't even know or realise (this is something we will be testing in our lab). As Brian McHarg says, there's going to be some interesting water cooler conversations, but the slight variations are going to be even more subtle and interesting.

This is no branching narrative

I have been using the word variations throughout this post because I really want us to get away from the notion of edits or versions. I recently had the joy of going to Learn Do Share Warsaw, and I was thinking about how to explain our thinking on the Visual Perceptive Media project. How do you explain something which has 2 film genres, 6 established endings, 20+ music genres and an endless number of lengths and effects?

This certainly isn't a branching narrative, and the idea of a branching narrative is certainly not apt here. If it were, it would have upwards of 240 versions, not including any of the more subtle effects to increase your viewing enjoyment. I consider them variations, and the language works when you consider the Photoshop Variations tool. This was very handy when talking to others not so familiar with Perceptive Media. But it's only a step, and it makes you consider there might be editions…
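For what it's worth, the 240 is just the choices above multiplied out (lengths and effects are left out because they are effectively unbounded):

```typescript
// Rough arithmetic behind "upwards of 240 versions" if each combination were a separate edit.
const filmGenres = 2;
const endings = 6;
const musicGenres = 20;

console.log(filmGenres * endings * musicGenres); // 240, before lengths or subtle effects
```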

I was talking to my manager Phil about it before heading to Warsaw and came up with something closer to the tesseract/hypercube in Interstellar (if you've not seen it, spoiler alert!)

Unlimited Variations

Unlimited isn't quite right, but the notion of time and variations which intersect is much closer to the idea. I said to Si Lumb that maybe the way to show this would be in VR, as I certainly can't visualise it easily.

When it's up and running I'd love people to have a go and give us some serious feedback.

On a loosely related subject, Tony Churnside also tweeted me about Perceptive Media breaking into the advertising industry.

Perceptive advertising is coming…?

Not too much H2O

Google wants to bring TV ads into the 21st century. The company has quietly announced a new local advertising service for Google Fiber that will make TV ads behave a lot more like internet ads. Using data from its set-top-boxes, Google (and advertisers) will know precisely how many times a particular local ad has been watched in homes with Google Fiber service. That might not sound like a big deal, but the industry-standard Nielsen ratings simply don’t offer that kind of information. Like on the web, Google will only charge for the number of views an ad receives.

We all knew it was coming, but I always wondered why Google and the other data-driven companies hadn't really done anything about the massive opportunity of personalised marketing.

It’s not yet clear precisely how the system will work, but, similar to Google’s cornerstone AdWords business, algorithms might determine the best time to show you a certain ad. For instance, if you’re watching the news before flipping over to the football game, the system might determine that you should be served a different ad during halftime than your buddy who switched over to the game from Pawn Stars. Google says it will even be able to swap out ads on DVR’d programs, so you won’t be served an old or irrelevant advertisement if you watch a program a week after it originally aired. Fiber customers will have an option to disable ads based on viewing history

But that is just the start. There is still the notion that adverts are solid pieces of media which must be played from start to end. This is a mistake, which will break down over time. Context is king, yes, but there is a big question about how personal you should get.

Something Doc Searls talks a lot about… and cue the Uncanny Valley graph

Uncanny valley

I am worried that in the rush to deliver context-sensitive advertising and marketing, too much will fall into the uncanny valley space; so much that it will ruin the great uses of data and context like Perceptive Media. I always said it was about little friendly touches, not a sledgehammer to the face or other senses…

Perceptive Radio on BBC Radio 4

Official Perceptive Radio photo

It finally happened… Perceptive Media, and more specifically Perceptive Radio, got a mention on BBC Radio 4's You and Yours today. Now to be fair this isn't the first time it's been mentioned on the BBC, but having futurebroadcasts.com mentioned live on air should increase the sample size for feedback, which is critical for our research into Perceptive Media.

In usual style I made an archived version on archive.org, although to be fair You and Yours stays on iPlayer for about a year at a time.

Perceptive music and beyond

Pet Shop Boys at the Brits 2009

Media relies on the ability to engineer people's emotions. That can sound pretty bad, but all media does it, from romantic comedies made for cinema to the old classics from Shakespeare. The effect of media, and ultimately storytelling, has always fascinated me and I'm sure it's the same for most people. It's hardwired into us, as Jason Silva puts it.

The ability to engineer someone's emotions is interesting from a story point of view. However, if you add broadcast, you can do this to a nation or the whole world. But beyond the roughly 10% of any audience who are highly suggestible, how do you reach the others?

A 600,000-person study Facebook and Cornell University did a while back, which only recently came to light, might have a clue about how. However, there has been major push-back on the study for ethical reasons.

Facebook’s controversial study that manipulated users’ newsfeeds was not pre-approved by Cornell University’s ethics board, and Facebook may not have had “implied” user permission to conduct the study as researchers previously claimed.

Starting from a different place is Moment.us. (Little disclaimer to say I may be working with this Manchester-based startup in the near future, but only because their technology is mind-blowing.)

Moment.us tracks and follows the user's media habits. It watches as you choose songs (a bit like scrobbling apps such as Last.fm), noting when you pick them and recording the context: like certain types of song when you're going for a ride to work on a sunny day.
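To make that concrete, here's a purely hypothetical sketch of the kind of record such a service might keep; the field names are mine, not Moment.us's actual schema:

```typescript
// Hypothetical shape of a context-tagged listen: the song choice plus the
// situation it was chosen in.

interface ContextualListen {
  trackId: string;
  chosenAt: Date;
  activity: "commuting" | "working" | "relaxing" | "unknown";
  weather: "sunny" | "rainy" | "unknown";
}

function recordListen(
  trackId: string,
  activity: ContextualListen["activity"],
  weather: ContextualListen["weather"]
): ContextualListen {
  return { trackId, chosenAt: new Date(), activity, weather };
}

// e.g. recordListen("upbeat-track-42", "commuting", "sunny")
```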

Our proprietary algorithm, contextual database, analytics, understanding of and expertise in media, technology and user behaviour. Highly relevant, hyper-personal, socially integrated, context driven mobile experiences for consumers and unrivalled contextual consumer data for commercial organisations.

A while ago we pitched a project loosely called In Tune at the BBC Radio One Connected Studio, which we felt was very credible, but unfortunately the judges disagreed. Maybe it was the way we pitched it, but there was a lot of doubt that we had the data to do what we were planning to do.

I have seen the data points first-hand and been amazed at what patterns of activity our music listening can reveal about us. Imagine what you could do if you had access to that data and could engineer the music, and therefore the experience.

Interestingly Google is getting in on the idea as they recently bought Songza.

Storytelling through different types of frames

As part of my investigations into Perceptive Media, my colleagues and I are deconstructing storytelling down to its most logical parts. Part of this is understanding the history of storytelling and aspects of it which are outside the mainstream consciousness.

The other day I spent extra long in the shower listening to NPR's TED Radio Hour, as it was all about stories.

In this hour, TED speakers explore the art of storytelling — and how good stories have the power to transform our perceptions of the world.

The one which struck a chord with me was Chimamanda Adichie’s TED talk on the dangers of the single story. Chimamanda gives a great example to start.

So I grew up in a small university town in Nigeria, and started reading quite early. And I read a lot of British children’s books, which was not unusual. This was the norm for children like me. And so when I started to write, I was writing exactly those stories. All my characters were white and blue-eyed. They played in the snow, they ate apples, and they talked a lot about the weather – how lovely it was that the sun had come out. Now this, despite the fact that I lived in Nigeria, I had never been outside Nigeria. We didn’t have snow, we ate mangoes, and we never talked about the weather because there was no need to. My characters also drank a lot of ginger beer, never mind that I had no idea what ginger beer was. And for many years afterwards, I would have a desperate desire to taste ginger beer.

In Chimamanda’s own words

What this demonstrates, I think, is how impressionable and vulnerable we are in the face of a story, particularly as children. Because all I had read were books in which characters were foreign, I had become convinced that books, by their very nature, had to have foreigners in them and had to be about things with which I could not personally identify.

Stories really are that powerful. And having listened to her talk and the other TED talks on the show, I conclude that mass publishing/broadcasting is partly to blame for this.

Of course, in my usual way, I wonder whether Perceptive Media could or would make this situation better? I believe so, but how?

In this case, personalisation could be a good thing. Yes, we have to be wary of echo chambers and filter bubbles, but on the other hand a well-written story is adaptable to almost any culture. It's the inflexibility of the medium which is causing African women to grow up thinking white, blue-eyed, ginger-beer-drinking kids are part and parcel of the medium. Yes, you can point the finger at globalisation, but it's deeper than that. It's inherent to the medium of publishing and broadcasting… in my honest opinion.

If Perceptive Media can remove or even dislodge the dangers of the single story, I would be very happy. As Chimamanda finishes her talk saying…

Stories matter. Many stories matter. Stories have been used to dispossess and to malign, but stories can also be used to empower and to humanize. Stories can break the dignity of a people, but stories can also repair that broken dignity.

BBC Radio 4 Character Invasion day with Perceptive Futures

BBC Radio 4 are putting on a number of events on Saturday 29th March under the banner of Character Invasion Day.

Character Invasion is a celebration of character taking place over the course of one day – Saturday 29 March 2014. The day will have character at its heart combining an on-air exploration of the importance of character on BBC Radio 4, with a day of public events at all BBC sites which produce Radio Drama.

BBC R&D are involved and focusing on the idea of characters in the future. What are the possibilities for characters in a future which looks more perceptive? Of course we're not alone; there will be some other key people from across the industry debating the question too.

At this session, you’ll hear from a panel of fantastic guests including Adrian Hon (CEO, Six to Start and Technology writer for the Telegraph), Julius Amedume (film director, writer and producer), Sarah Glenister (author of Perceptive Media’s Breaking Out), Henry Swindell (Development Producer for BBC Writersroom) and Anna Frew (PhD student studying the book and narrative in the new era of the internet).

With a line-up like that, you know you're going to get some great debate, plus there might be a chance to see/hear the Perceptive Radio in action.

The sign-up process is a little weird: you need to register by Thursday 13th March and then you will be notified later whether you got a ticket or not. The event takes place at MediaCity, Salford Quays.

Hope to see you there… it's going to be a blast.

Built-in filter and algorithm failure

I enjoyed Jon Udell’s thoughts on Filter Failure.

The problem isn’t information overload, Clay Shirky famously said, it’s filter failure. Lately, though, I’m more worried about filter success. Increasingly my filters are being defined for me by systems that watch my behavior and suggest More Like This. More things to read, people to follow, songs to hear. These filters do a great job of hiding things that are dissimilar and surprising. But that’s the very definition of information! Formally it’s the one thing that’s not like the others, the one that surprises you.

One of the questions people have when they think about Perceptive Media is the filter bubble.

filter bubble is a result state in which a website algorithm selectively guesses what information a user would like to see based on information about the user (such as location, past click behaviour and search history) and, as a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. Prime examples are Google‘s personalised search results and Facebook‘s personalised news stream. The term was coined by internet activist Eli Pariser in his book by the same name; according to Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble.

Whether the filter bubble is real or not is still being heavily debated, but the idea of filters which get things wrong to add a level of serendipity sounds good. I do wonder, though, if people will be happy with a level of fuzziness in the algorithms they have become dependent on.

I’m always on the lookout for ways to defeat the filters and see things through lenses other than my own. On Facebook, for example, I stay connected to people with whom I profoundly disagree. As a tourist of other people’s echo chambers I gain perspective on my native echo chamber. Facebook doesn’t discourage this tourism, but it doesn’t actively encourage it either.

The way Jon Udell is defeating the filters, he retains some kind of control. It's a nice way to get a balance, but as someone who only follows 200-ish people on Twitter and doesn't look at Facebook much, I actively like to remove the noise from my bubble.

As I think back on the evolution of social media I recall a few moments when my filters did “fail” in ways that delivered the kinds of surprises I value. Napster was the first. When you found a tune on Napster you could also explore the library of the person who shared that tune. That person had no idea who I was or what I’d like. By way of a tune we randomly shared in common I found many delightful surprises. I don’t have that experience on Pandora today.

Likewise the early blogosphere. I built my echo chamber there by following people whose lenses on the world complemented mine. For us the common thread was Net tech. But anything could and did appear in the feeds we shared directly with one another. Again there were many delightful surprises.

Oh yes, I remember spending hours in Easy Everything internet cafes after work or when going out, checking out users' libraries, not really recognising the names and listening to see if I liked them. Jon may not admit it, but I found the darknet provides some very interesting parallels with this. Looking through what else someone has shared can be a real delight when you strike upon something unheard of.

And likewise the blogosphere can lead you down some interesting paths. Take my blog for example: some people read it because of my interest in technology, but the next post may be something to do with dating or life experience.

I do want some filter failure, but I want to be in control of when, really… and I think that's the point Jon is getting at…

I want my filters to fail, and I want dials that control the degrees and kinds of failures.

Where that statement leaves the concept of pure Perceptive Media, who knows…? But it's certainly something I've been considering for a long while.

Reminds me of that old saying… it's not a bug, it's a feature.

Perceptive learning resources

Future of StoryTelling

For the last few Wednesdays I have been watching the Future of StoryTelling hangouts online. I first heard about them from Matt Locke and Frank Rose last year when I gatecrashed a planned hangout with Perceptive Radio.

The Future of StoryTelling speaker Hangout series continues on Wednesday, January 15th, with a discussion about interactive gaming, and how great entertainment can transport you from your daily life and immerse you in another world.

You can watch the whole thing here on YouTube, and last week's with Google Creative Lab's Robert Wong. This week's includes my question, which came from my noticing that interaction and narrative keep getting thrown around together when they are quite different things.

The guest this week was Microsoft's Shannon Loftis, General Manager at Xbox Entertainment Studios. She said a lot of things I agreed with, but her switching narrative for interactive made me pause and think about the origins of Perceptive Media.

I'm not going to say games and interactive experiences are not storytelling; I would be very wrong. But what I'm surprised at is that Microsoft have this amazing device with cutting-edge sensors, and they sound like they are doing some perception, yet they are only using it for games? Shannon even talks about the golden age of television, then slides off into games again.

Real shame…

Anyway, there was a question asking what this all can mean for children. Most of the guests gave answers which I couldn't disagree with, but Charles Melcher (founder of Future of StoryTelling) jumped in with something quite profound.

I clipped it and put it on Archive.org, but it's something I've been thinking about since the early days of Perceptive Media.

The beauty of media which adapts, responds or, as I prefer, perceives the audience and the context is that it can unfold one way for one person and another way for someone else. Like Charles, I'm dyslexic and sometimes just can't get my head around learning resources which are written for the majority of people.

I understand why it's been that way. Creating multiple versions of a learning resource is going to be a bad idea from a resourcing point of view. But that only applies if you build your resources in a solid, non-flexible way (like a blob); then you're going to run into the same problem described. However, if you have something more fluid (generative) or object-based, you can change aspects on the fly.

Simple example: a book (any book) vs an ereader (like a Kindle). I'm sure I've talked about this before, but line length is a common issue for people who are dyslexic. We tend to lose which line we're on for a split second.

I can reshape the line lengths to make the text more readable for myself (that's interactive). An ereader with sensors could follow my eye patterns and reshape the line lengths and fonts to give me the best reading experience (now that's perceptive). This all works because the text is digital and therefore an object which can be manipulated.
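As a rough illustration of that interactive-versus-perceptive distinction (the reading profile, and the idea of deriving it from eye tracking, are entirely invented here), the reshaping itself is trivial once the text is an object:

```typescript
// Sketch only: reflow digital text from a reading profile. In the interactive case
// the reader sets the profile; in the perceptive case sensors (e.g. eye tracking)
// would infer it. The profile fields are invented for this example.

interface ReadingProfile {
  losesLineOften: boolean;   // e.g. inferred from eye-tracking regressions
  prefersLargeType: boolean;
}

function applyLayout(el: HTMLElement, profile: ReadingProfile): void {
  // Shorter measures are easier to track back from: a common dyslexia-friendly tweak.
  el.style.maxWidth = profile.losesLineOften ? "45ch" : "70ch";
  el.style.fontSize = profile.prefersLargeType ? "1.25rem" : "1rem";
  el.style.lineHeight = "1.6";
}
```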

Back to Charles: a resource which can be manipulated by a person is good, but one which can be manipulated by a process of data and sensors is even better (if they are working to aid you). Combining/aggregating resources together gets you to a position where you can weave a story together. I won't bore you with my campfire == perceptive media and this-is-what-humans-do thoughts, but I do feel this is the future of storytelling. Charles's vision is achievable and it's something I'd love to talk to BBC Learning about in more depth.

I'll be honest and say not only has this one got me writing, but I also started writing after hearing Robert Wong talk last week about leadership and inspiring people.

Let Her… talk to you

Her.

A lonely writer develops an unlikely relationship with his newly purchased operating system that’s designed to meet his every need.

This is a really good film. Some parts are funny and some parts are tragic. But this isn't a review of a really good film, rather a look at the technology in the film Her. There might be some mild spoilers, and I would recommend not reading until you've seen it in full.

When I first heard about Her, I thought, oh no, here comes another S1m0ne. Don't get me wrong, S1m0ne is OK, but it gets a little silly in parts. Her, on the other hand, is smart, and although it heads towards the obvious, it pulls back and finds a new, more interesting path.

Adrian sent me a link to Wired's piece about the UI design in Her.

A few weeks into the making of Her, Spike Jonze’s new flick about romance in the age of artificial intelligence, the director had something of a breakthrough. After poring over the work of Ray Kurzweil and other futurists trying to figure out how, exactly, his artificially intelligent female lead should operate, Jonze arrived at a critical insight: Her, he realised, isn’t a movie about technology. It’s a movie about people. With that, the film took shape. Sure, it takes place in the future, but what it’s really concerned with are human relationships, as fragile and complicated as they’ve been from the start.

The film is certainly about people and our relationships in the age of artificial intelligence. It reminds me very much of the book which Imran gifted me, which I've still not completely read: Love in the Age of Algorithms.

But what's really interesting is the simplicity of the technology. Pretty much every interaction is with voice. There's little interaction with screens, although there are giant screens in some of the shots. Even the camera which the main character uses looks underwhelmingly simple. I can only assume that in the near future we have started to solve the power/battery problems of today.

“We decided that the movie wasn’t about technology, or if it was, that the technology should be invisible,” he says. “And not invisible like a piece of glass.” Technology hasn’t disappeared, in other words. It’s dissolved into everyday life.

Here’s another way of putting it. It’s not just that Her, the movie, is focused on people. It also shows us a future where technology is more people-centric. The world Her shows us is one where the technology has receded, or one where we’ve let it recede. It’s a world where the pendulum has swung back the other direction, where a new generation of designers and consumers have accepted that technology isn’t an end in itself-that it’s the real world we’re supposed to be connecting to.

I think Wired is right; the movie is a total U-turn on the likes of Minority Report and Blade Runner. There is a great scene where our main character is lying on the grass in a field. He's talking to the AI like she is lying right next to him, and the cinematography actually implies this through the camera angle.

The technology is there, but it feels like that Internet of Things dream: the technology is embedded everywhere. Not the Google Glass-style future; something much closer to ubiquitous computing…

All of these things contribute to a compelling, cohesive vision of the future — one that’s dramatically different from what we usually see in these types of movies. You could say that Her is, in fact, a counterpoint to that prevailing vision of the future — the anti-Minority Report. Imagining its world wasn’t about heaping new technology on society as we know it today. It was looking at those places where technology could fade into the background, integrate more seamlessly.

After that, Wired goes into depth about the user interface being vocal and how it's a perfect fit for cinema. I don't disagree, but it's only one of many types of user interface which could be available. I do agree it's a nice departure from the touch interfaces seen in most films.

But the AI isn't simply voice alone (that has been done many times in cinema too); it's context-sensitive, it's perceptive! This is what brings the sense of magic to the exchanges. The AI seems like she is there, talking and taking it all in. All those subtle gestures, human expressions and so on are taken into account, making the AI seem very human.

…we’re already making progress down this path. In something as simple as a responsive web layout or iOS 7′s “Do Not Disturb” feature, we’re starting to see designs that are more perceptive about the real world context surrounding them-where or how or when they’re being used. Google Now and other types of predictive software are ushering in a new era of more personalised, more intelligent apps.

Arthur C. Clarke said…

Any sufficiently advanced technology is indistinguishable from magic.

Her does have a magic quality. It's not the best film I've seen this year, but it's one which I do think will start a trend of showcasing different user interfaces in movies, instead of defaulting to the usual push/pull/touch interfaces.

It's well worth watching and enjoying; just don't think about S1m0ne beforehand.

The perceptive media moon shot

Again and again I hear people ask the question: what is the moon shot?

Usually it's related to a project idea, and they are asking for the trajectory target.

Well, with the lens on Perceptive Media, which I believe is the future of storytelling: the kind of storytelling which touches and engages at such a deep level. Camus said life should be lived to the edge of tears (I assume happy and sad).

The moon shot is the lucid dream

“Immersive works of art or entertainment are increasingly not content to simply produce a new range of sensations. Instead, they often function as portals into ‘other worlds’.” – Erik Davis

Context is queen?

I wanted my grandmother's poker face...

I'm hearing a lot of talk about how 2013 is the year responsive design starts to get weird… or rather how it's going to be all about responsive design (what happened to adaptive design, who knows).

Think it’s hard to adapt your content to mobile, tablet, and desktop? Just wait until you have to ask how this will also look on the smart TV. Or the refrigerator door. Or on the bathroom mirror.

Or on a user’s eye.

They’re all coming…if they aren’t already here. It doesn’t take much imagination or deep reading of the tech press to know that in 2013 more and more devices will connect to the internet and become another way for people to consume internets.

We’ll see the first versions of Google’s Project Glass in 2013. A set of smart glasses will put the internet on a user’s eyes for the first time. Reaction to early sneak peeks is a mix of mockery and amazement, mostly depending on your propensity for tech lust. We don’t know much about them, other than some tantalizing video, but Google is making them, so it’s a safe bet that Chrome For Your Eyes will be in there. And that means some news organization in 2013 is going to ask: “How does this look jammed right into a user’s eyeballs?”

Stop! Nieman Lab is forgetting something major! And I could argue they are still thinking in a publishing/broadcasting mindset.

Yes the C word, Context…

Ironically this is something Robert Scoble actually gets in his blog post, The coming automatic, freaky, contextual world and why we’re writing a book about it.

A TV guide that shows you stuff to watch. Automatically. Based on who you are. A contextual system that watches Gmail and Google Calendar and tells you stuff that it learns. A photo app that sends photos to each other automatically if you photograph them together. And then there’s the Google Glasses (AKA Project Glass) that will tell you stuff about your world before you knew you needed to know. There is a new toy coming this Christmas that will entertain your kids and change depending on the context they are in (it will know it’s a rainy day, for instance, and will change their behavior accordingly)

Context is what's missing. In the mindset of pushing content around (broadcast and publishing) and into people's faces, responsive design sounds like a good idea. As soon as you add context to the mix, it doesn't sound so great. Actually, it sounds downright annoying, or even intrusive. I do understand it's the best we've got right now, but as sensors become more common, we'll finally be able to understand context and hopefully be able to build perceptive systems.

We have already demonstrated that sensors don't have to be cameras, gyroscopes, etc. The referrer, operating system, screen resolution, cookies and so on are all bits of data which can (some perhaps less than others) be used to understand the context.
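As a small sketch of what I mean (field names are mine, and how much weight each signal should carry is the hard and contested part), here's the kind of context a browser already exposes without a single camera or gyroscope:

```typescript
// Sketch: "sensors" that are just data the browser already has.

interface BrowsingContext {
  referrer: string;      // where the visitor came from
  userAgent: string;     // rough proxy for device and operating system
  screenWidth: number;
  screenHeight: number;
  hourOfDay: number;     // crude time-of-day signal
}

function readContext(): BrowsingContext {
  return {
    referrer: document.referrer,
    userAgent: navigator.userAgent,
    screenWidth: window.screen.width,
    screenHeight: window.screen.height,
    hourOfDay: new Date().getHours(),
  };
}
```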

I can come up with many scenarios where the responsive part gets in the way unless you are also considering the context. In a few years' time, we'll look back at this period and laugh, wondering what the heck we were thinking…

I’m with Scoble on this one… Context and Content are the Queen and King.