Variations not versions

https://twitter.com/martynkelly/status/624266599000838150

It was Si Lumb who tweeted me about Pixar’s Inside Out contextual visuals.

Now I know this isn’t anything new, I mean films have had region differences for a long while, but it’s good to see it discussed openly and it was interesting to read about how (we think) they do it.

It’s interesting to note that the bottom five entries of the list, starting with “Thai Food,” remain consistent throughout (maybe Disney/Marvel Studios’ digital wizards couldn’t replace the stuff that Chris Evans’ hand passed over), but the top items change a lot.

Which leads me to think it’s all done in post-production using things like Impossible Software?

Post-producing this stuff is a mistake in my mind, but then again I’m working on the future of this kind of thing with Perceptive Media. I also imagine the writer and director had no time to think about variations for different countries, or wasn’t paid enough?

Rather than write up my thoughts on how to do this with digital cinema (isn’t this part of the promise of digital cinema?), especially as I’m writing a paper with Anna Frew about this, I thought it was about time I wrote something about the project I’m currently working on.

Visual Perceptive Media

Visual Perceptive Media is a short film which changes based on the person who is watching it. It uses data from a phone application, which builds a profile of the user via their music collection and some basic questions. That data is then used to inform which variations to apply to the media when watched.

The variations are applied in real time and include different music, different colour grading, different video assets and effects, and much more. We’re using the Web Audio API, WebGL and other open web technologies.
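As a rough sketch of the idea (the names and mappings here are mine for illustration, not the project’s actual code): a profile built from music tastes and a few questions could be mapped to a set of variation parameters which the playback engine applies in real time.

```javascript
// Hypothetical sketch (my names, not the project's): map a listener profile,
// built from music tastes and a few basic questions, to variation parameters
// a playback engine could apply in real time.
function chooseVariations(profile) {
  return {
    musicGenre: profile.topGenre,                          // swap the score
    colourGrade: profile.energy > 0.5 ? "vivid" : "muted", // grade the picture
    pacing: profile.energy > 0.5 ? "fast" : "slow"         // choose alternate cuts
  };
}

const v = chooseVariations({ topGenre: "jazz", energy: 0.3 });
// v = { musicGenre: "jazz", colourGrade: "muted", pacing: "slow" }
```

The point of the sketch is that the listener never picks anything explicitly; the profile drives every choice.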

What makes this different or unique…?

  • We had buy-in from the script writer and director (Julius Amedume was both, and amazing) right from the very start, which makes a massive difference. The scripts were written with all this in mind.
  • It was shot and edited with its intended purpose of making real-time variations.
  • Most things we (BBC R&D) have done in the responsive/perceptive area have been audio based, and I would say this is a bit of a moonshot moment, like Breaking Out 3 years ago! Just what I feel the BBC should be doing.
  • Keeping with the core principle of Perceptive Media, the app, which Manchester-based startup Percepiv (formerly moment.us; I wonder if working with us had a hand in the name change?) created using their own closely related technology, mainly uses implicit data to build the profile. You can check out music+personality on your own Android or iPhone now.

It’s going to be very cool, and I believe the technology has gotten to the point where we can do this so seamlessly that people won’t even know or realise (this is something we will be testing in our lab). As Brian McHarg says, there are going to be some interesting water cooler conversations, but the slight variations are going to be even more subtle and interesting.

This is no branching narrative

I have been using the word variations throughout this post because I really want us to get away from the notion of edits or versions. I recently had the joy of going to Learn Do Share Warsaw, and was thinking about how to explain our thinking behind the Visual Perceptive Media project. How do you explain a film which has 2 genres, 6 established endings, 20+ music genres and an endless number of lengths and effects?

This certainly isn’t a branching narrative, and the idea of a branching narrative is certainly not apt here. If this was a branching narrative, it would have upwards of 240 versions, not including any of the more subtle effects to increase your viewing enjoyment. I considered them as variations, and the language works when you consider the Photoshop Variations tool. This was very handy when talking to others not so familiar with perceptive media. But it’s only a step, and makes you consider there might be editions…
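The arithmetic behind that 240 figure is just the discrete choices multiplied together, before you even count the continuous variations:

```javascript
// Back-of-envelope count of discrete "versions": the film genres, endings
// and music genres described above multiply out, before counting the
// continuous variations (lengths, effects) that make the space endless.
const filmGenres = 2;
const endings = 6;
const musicGenres = 20;

const discreteVersions = filmGenres * endings * musicGenres;
console.log(discreteVersions); // 240
```

Which is exactly why a branching-tree diagram stops being a useful mental model here.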

I was talking to my manager Phil about it before heading to Warsaw and came up with something closer to the tesseract/hypercube in Interstellar (if you’ve not seen it, spoiler alert!).

Unlimited Variations

Unlimited isn’t quite right, but the notion of time and variations which intersect is much closer to the idea. I said to Si Lumb that maybe the way to show this would be in VR, as I certainly can’t visualise it easily.

When it’s up and running I’d love people to have a go, and to get some serious feedback.

On a loosely related subject, Tony Churnside also tweeted me about Perceptive Media breaking into the advertising industry.

Perceptive Media in Wired UK’s Top Tech for 2013

Perceptive Media in Wired Magazine

Someone from the BBC’s Future Media PR pointed out to me that I was in the latest issue of Wired UK. The whole thing isn’t online yet, but I’ve made a manual copy (thanks to Laura Sharpe for buying the iPad version on my behalf)… till it’s up online.

Advertising Displays, Television and consoles are hooking up with recognition software to second-guess our hidden desires. By Ed White

Televisions, computers and retail displays are increasingly watching us as much as we’re watching them. They are likely to be the catalyst for a shift from mass to personalised media. Broadcasters, game developers and tech companies have long dreamt of knowing who’s watching, and then making content relevant to each viewer.

Cheap cameras and sensors are making “perceptive media” a reality. First was Microsoft, whose Xbox gaming peripheral Kinect, launched in 2010, has put a perceptive-media device into more than 18 million homes worldwide. By linking people to their Xbox Live identity using facial recognition, it has made the gaming experience more tailored. But perceptive media is wider than gaming. Over two years, Japan Railways’ East Japan Water Business has installed about 500 intelligent vending machines that recognise customers’ age and gender via sensors and suggest drinks accordingly. Intel’s Audience Impression Metrics suite (Aim) uses data captured by cameras on displays in shops to suggest products. Kraft and Adidas are early adopters. The software will also monitor responses to improve brands’ marketing.

But the real winner will be the entertainment industry. Samsung and Lenovo announced at the 2012 Consumer Electronics Show that their new TVs will recognise a viewer by using a camera incorporated into the set, and bring up their favourite programmes; Intel is working on a set-top box with similar capabilities. Face-tracking software is also making our screens more intuitive. Japanese broadcaster NHK is experimenting with emotion-recognition software which can suggest, say, a more exciting TV show if it detects boredom. But where perceptive media gets really exciting is in using viewer data to change narratives in real time. US-based video game company Valve Software is experimenting with biofeedback systems, measuring physiological signals such as heart rate and pupil dilation in players of Portal 2 and Alien Swarm. If the zombies aren’t making you sweat, the AI director can send in more. And television may follow, believes Ian Forrester, senior producer at BBC R&D. Sensors in your TV would pick up who’s in the room and subtly change the programme’s details, live: for example, the soundtrack could be based on your Spotify favourites.

If that sounds Big Brother-ish, that’s because it is. Perceptive media’s biggest hurdle will be privacy. But advocates such as Daniel Stein, founder and CEO of San Francisco-based digital agency EVB, say that if brands can prove the value of data sharing, they’ll win people over. Here’s looking at you.

Ed White is a senior writer and consultant at Contagious Communications, a London-based marketing consultancy.

 


The making of Perceptive Media’s Breaking Out

I have been talking about Perceptive Media to many, many people. Some get it, some don’t… Every time I try to explain it, I use my perception to work out what method would work for them to understand it. When I did the talk at Canvas Conf way back in September I wanted to go into real depth about what we had done, but I had to explain the concept, which takes a long while.

However, now we’ve got enough feedback, it’s time to reveal what we did to make it work. There’s a blog post coming soon on the BBC R&D blog, but till then… Happyworm have done an excellent blog post explaining the whole thing in serious detail, including how to reveal the secret easter egg/control panel!

How to open the easter egg

To open the easter egg, Breaking Out must have finished loading; then click under the last 2 of the copyright 2012 on the bottom right. You’ll then have access to the control panel.

The easter egg really unlocks the power of Perceptive Media like never before.

Everything is controllable and the number of options is insane, but all possible with the power of object-based audio (the driving force behind perceptive media).

Breaking Out Control Panel

Practically, just changing the fade between foreground and background objects can be a massive accessibility aid for those hard of hearing or in a noisy environment like a car. Tony Churnside is working on the advantages of object-based audio, so I won’t even try coming to conclusions on what’s possible, but let’s just say the whole turning-your-sound-system-up-and-down-to-hear-the-dialogue problem could be removed with Perceptive Media. Because of course perceptive media isn’t just the objects and delivering the objects; it’s also the feedback and sensor mechanisms.
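To make the foreground/background fade concrete, here is an illustrative sketch (my own, not the Breaking Out code) of an equal-power crossfade between dialogue and ambience objects:

```javascript
// Illustrative sketch (mine, not the Breaking Out code): an equal-power
// crossfade between foreground (dialogue) and background (ambience) objects.
// mix = 1 fully favours dialogue; mix = 0 fully favours the background bed.
function crossfadeGains(mix) {
  return {
    foreground: Math.sin((mix * Math.PI) / 2), // dialogue gain
    background: Math.cos((mix * Math.PI) / 2)  // ambience gain
  };
}

// In a browser these values would drive two GainNodes, e.g.
//   dialogueGain.gain.value = crossfadeGains(0.8).foreground;
// letting a hard-of-hearing listener push dialogue forward without
// touching the overall volume.
```

The equal-power curve keeps perceived loudness roughly constant across the fade, which is why it is the usual choice over a straight linear crossfade.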

Mark Panaghiston writes in conclusion…

The Web Audio API satisfied the goals of the project very well, allowing the entire production to be assembled in the client browser. This enabled control over the track timing, volume and environment acoustics by the client. From an editing point of view, this allowed the default values to be chosen easily by the editor and have them apply seamlessly to the entire production, similar to when working in the studio.

The Web Audio API was amazing… and we timed it just about right. At the start of the year, it would not have worked in any browser except Chrome. But every few months we saw other browsers catch up on the Web Audio API front, and I’m happy to say the experiment kind of works on Firefox and Opera.

One of the most complicated parts of the project was arranging the asset timelines into their absolute timings. We wanted the input system to be relative since that is a natural way to do things, “Play B after A”, rather than, “Play A at 15.2 seconds and B at 21.4 seconds.” However, once the numbers were crunched, the noteOn method would easily queue up the sounds in the future.
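The relative-to-absolute conversion described above can be sketched like this (identifiers are mine, not Happyworm’s): each cue names an asset, what it plays after (with "start" marking the timeline origin), and how long it lasts.

```javascript
// Sketch of resolving relative cues ("play B after A") into absolute start
// times; identifiers are mine, not Happyworm's.
function resolveTimeline(cues) {
  const endTimes = { start: 0 };   // when each asset finishes
  const startTimes = {};           // absolute start time per asset
  for (const cue of cues) {
    const t = endTimes[cue.after]; // start when the predecessor ends
    startTimes[cue.id] = t;
    endTimes[cue.id] = t + cue.duration;
  }
  return startTimes;
}

const times = resolveTimeline([
  { id: "A", after: "start", duration: 15.2 },
  { id: "B", after: "A", duration: 6.2 }
]);
// times.A === 0, times.B === 15.2 — values that could then be queued with
// noteOn (start() in today's Web Audio API).
```

This assumes cues are listed after their predecessors; a fuller version would topologically sort them first.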

The main deficiency we found with the Web Audio API was that there were no events that we could use to know when, for example, a sound started playing. We believe this is in part due to it being known when that event would occur, since we did tell it to noteOn in 180 seconds time, but it would be nice to have an event occur when it started and maybe when its buffer emptied too. Since we wanted some artwork to display relative to the storyline, we had to use timeouts to generate these events. They did seem to work fine for the most part, but having hundreds of timeouts waiting to happen is generally not a good thing.
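My reconstruction of that timeout workaround: since every absolute start time is known in advance, a matching setTimeout can fire the artwork change when the sound should begin, because the Web Audio API of the day offered no “sound started” event.

```javascript
// Reconstruction of the timeout workaround (my code, not Happyworm's).
// Milliseconds from "now" on the audio clock to a cue's start time.
function cueDelayMs(startTime, currentTime) {
  return Math.max(0, (startTime - currentTime) * 1000);
}

// One timer per cue; the returned handles allow clearTimeout on stop or
// seek, easing the "hundreds of timeouts" problem the quote mentions.
function scheduleCueCallbacks(cues, currentTime, onCue) {
  return cues.map(cue =>
    setTimeout(() => onCue(cue.id), cueDelayMs(cue.startTime, currentTime))
  );
}
```

For what it’s worth, the modern API’s `AudioBufferSourceNode` does fire an `ended` event, but there is still no “started” one, so this pattern survives today.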

Yes, ideally we would want to be able to turn a written script into a JavaScript file complete with timings. It’s something which would make perceptive media a lot more accessible to narrative writers.

And finally, the geo-location information was somewhat limited. We had to make it specific to the UK simply because online services were either expensive or heavily biased towards sponsored companies. For example, ask for the local attractions and get back a bunch of fast food restaurants. In practice, though, you’d need to pay for a service such as this, and this project did not have the budget.

Yes, that was one of the limiting factors, and it came down to cost. Because of that we couldn’t shout about it from the rooftops to the world. However, the next experiment/prototype will be usable worldwide, just so we can talk about perceptive media on a global stage if needed.

As Harriet said, “OK, I can do this.” And we did!

Yes we did! And we proved Perceptive Media can work, and what a fine achievement it is! This is why I can’t shut up about Perceptive Media. Whenever we talk about the clash of interactivity and narrative I can’t help but pipe up about Perceptive Media, and why not? It could be the next big thing, and I have to thank James Barrett for coming up with the name after I had originally called it the less friendly Intrusive Media.

Not only that, but it also proved that things off the work plan in R&D can be as valid as things on it. And finally, the ideology of looking at what’s happening on the darknet, understanding it and thinking about how it can scale has also been proven…

I love my job and love what I do…

Happyworm were a joy to work with, and the final prototype was not only amazing but they also believed in the ideals of open-sourcing the code so others can learn, understand and improve on it. You should download Perceptive Media from GitHub and have a play if you’ve not done so yet… what are you waiting for?

Experience Perceptive Media yourself

Starting bird

Following up from my posts (here) (here) (here and also now here) about Perceptive Media… I’m very proud to announce Breaking Out, our BBC R&D experiment into new editorial formats.

The prototype performs best on Chrome using the new Web Audio API, but does work in Firefox, Opera and Safari through a fallback solution (this will eat your bandwidth, as it uses WAVs rather than compressed audio like Ogg Vorbis or MP3). I would suggest keeping memory- and CPU-intensive applications shut while running the demo, because there are some serious calculations happening client side.

But most of all I’d urge everyone to leave feedback, no matter how bad or good it is… Then share it around for other people to hear and experience.

Massive thanks to everyone involved in the project…

Writer: Sarah Glenister
Harriet: Maeve Larkin
Lift operator: Anthony Churnside
Producers: Anthony Churnside and Ian Forrester
Media Engine code: Happy Worm
jPlayer audio engine: Happy Worm
Website code: Yameen Rasul and Matthew Brooks
Illustrations: Angie Chan
With special thanks: Sharon Sephton, Henry R Swindell, Maxine Glancy, Usman Mullan, Elizabeth Valentine and the BBC Writers Room

I’m also happy to say we will be making the Media Engine code available under the Apache License for all you guys who want to hack around with the concept yourself.

Actually there might be some easter eggs in the audio drama to find for those not interested in getting all dirty in the actual code.

A Perceptive on storytelling

As most of you know, BBC R&D have a demo of Perceptive Media which we’ve shown in a few places, including at the EBU in Copenhagen. It’s been a hidden gem for a long while and it’s been amazing to see what people have had to say about the concept of perceptive media. I specially liked the two Brits sitting on the sofa talking about it.

We’re really hoping as many people as possible will enjoy it and give their honest feedback to us (good and bad). But it’s not just the individual feedback we would like to research; it’s the interconnected stories of how people tell others about it and how they explain it to each other…

How memes spread has always been high on my list of loves and, to be honest, should be high on the BBC’s research lists (if it’s not already?). In actual fact there is something about how memes spread and attribution which I think is very interesting and could be a new business model for the future.

Anyway… expect much more about Perceptive Media on the BBC R&D blog this month. In actual fact, if you want to be first to hear it and respond directly to the people behind it, like myself, the script writer, actress, coders, etc… then you should make your way to the next Social Media Manchester.

I was reading about the domino effect on my Kindle via Instapaper the other day on the London Tube, prompted after reading this tearjerker story. This bit really got stuck in my throat, further proving that I’m just a sucker and a massive romantic…

At the end was this bit…

Here’s the power of a story: someone hands me one, like a gift (I imagine it wrapped in shiny paper with the bow, the handmade letterpress card, the whole nine yards), and in that gift, I find parts of myself that have been missing, parts of our world that I never imagined, and aspects of this life that I’m challenged to further examine. Then—and this is the important part, the money shot, if you will—I take that gift and share it. In my own writing, sure, but the kind of sharing I’m talking about here is the domino effect: how I hear/watch/read a story, and then tell everybody and their mother about it, and then they tell everybody and their mother, and somewhere in that long line of people is someone who, at this exact point in their life, needed its message more than we’ll ever know.

The power of a story indeed…

You could look at this as an example of why Perceptive Media isn’t going to work, but actually I disagree. Someone out there has written a story which perfectly suits the medium; they just don’t know it yet.

Perceptive Media presentation at the EBU, Copenhagen

Copenhagen

I really enjoyed my time in Copenhagen… It kind of reminded me of a combination of Berlin, Amsterdam and Stockholm. I had wished I had more time there for many reasons.

Copenhagen Architecture

So what was I doing there? Well, Tony Churnside and I were asked, a while after that presentation at SMC_MCR, by the European Broadcasting Union (they run the Eurovision Song Contest, I’ve heard) if we would offer a unique look at a possible future for broadcasting. Originally we said no because the idea wasn’t fully formed (hence the early thinking), but it became clear we might have a demo which we could maybe show. That demo of course is still under wraps and we hope to reveal it to the world soon enough (keep an eye on the BBC R&D Blog for more details). It was well received and it certainly got people thinking, talking and wanting much more. And yes, it is Perceptive Media.

EBU campfire reference

On top of doing the presentation and heading up a question and answer session with Tony, we got a good chance to see the rest of the summit and speak to many TV-related people. It’s amazing what our European public broadcaster friends are doing. Thanks to the always busy but super smart Nicoletta Iacobacci from the EBU (who also uses troublemaker in her job title), who invited us and made us feel very comfortable. And of course the amazing Mia Munck Bruns, who I had the joy of sharing my love for good cocktails with on the last night.

Cocktails in Copenhagen

We also got to see parts of Copenhagen, but we were mainly in Ørestad. I could only see one of the mountain dwellings, as talked about in the Channel 4 documentary recently, from afar, but it looked very impressive. And to be honest the architecture and design effort in Copenhagen was amazing… It was like walking through Stockholm or the pages of Inhabitat.

When we first arrived (our flight was 2 hours delayed) I had a massive head cold and couldn’t hear out of my right ear, due to not flying well. But we walked straight into a session involving media studies students and TV producers. It was run by Nicoletta and reminded me of when BBC Backstage invited the people behind UK Nova in to meet the BBC. The students explained their media habits and the TV producers tried to make sense of it all.

The thing which shocked me was the lack of Twitter usage in Denmark. The students talked about using Facebook the way we use Twitter. Google Plus never really came up at all. The 2/4 screens meme came up time and time again. And a few of the TV producers started getting irate about the students treating TV like radio, or as I prefer, wallpaper.

They couldn’t understand why they’d have the screen on if they’re not watching it. A little disagreement broke out, with producers saying they should be watching what they broadcast. Well, what followed were some icy words on both sides… As usual, as I’ve heard all my life, the students’ media habits were dismissed as those of early adopters. I asked if any of them created their own media rather than just consuming and sharing. Very few did (maybe one), further indicating they’re not early adopters but just the norm.

A fantastic session all round.

During the trip the EBU treated us to a series of lovely dinners, including one at an amazing opera hall.

Opera House

Unfortunately the way to the opera hall was via boat, which isn’t exactly great for me. But I made it, even with Tony’s teasing…

The rest of the conference was dominated by TV, as you’d expect, and there were some really interesting things from other European public broadcasters, including Äkta människor, or Real Humans.

In a parallel present the artificial human has come into its own. Robots no longer have anything robot-like about them. New technology and advancements in the field of science have made it possible to manufacture a product – a kind of mechanized servant – that is so similar to a real human that it can often be considered a perfectly good substitute. The Human Robot (hubot) has also given rise to new problems and dilemmas. Thorny legal questions have increasingly started to occupy people’s minds and are still waiting to be answered: Who is responsible for the actions of a hubot? Do hubots have some form of “hubot rights”? Should they be paid for their work? As an ever growing number of people form relationships with hubots, the boundaries between human and machine become blurred. When humans make copies of themselves, which are so close to the real thing that they form emotional bonds, the question arises – What does it really mean to be ‘human’?

Looks like one to watch for sure…

Great to experience Copenhagen and see that crazy bridge/tunnel to Sweden from the plane.