Perceptive media meets the visual

[Image: Visual Perceptive Media real-time grading]
Changing the colour grade

The Next Web broke the story after seeing a tweet from BBC R&D on Thursday, but others have followed.

So what is this Visual Perceptive Media thing?

Imagine a world where the narrative, background music, colour grading and general feel of a drama are shaped in real time to suit your personality. This is called Visual Perceptive Media, and we are making it now in our lab in MediaCityUK.

The ability to customise or even personalise media (video in this case) in a browser, using no special back-end technology or delivery mechanism, is fascinating. It's all JavaScript, client-side technologies and standard HTTP in a modern web browser. Because of this it's open, not proprietary, and I believe scalable (the way it should be). It also means that when we do make it public, the largest possible audience can experience it, fitting with the BBC's public purpose.

More details of the project will emerge soon, but I wanted to make certain things clear.

The project isn't a one-off; it's one of a line of projects around our object-based media ambitions. Some others were used at Edinburgh this summer, and IP Studio is a big part of it. There have even been some projects very similar to Visual Perceptive Media, including Forecaster.

Perceptive Media (implicit) has always been about audience experiences and sits as an alternative to responsive media (explicit), alongside Breaking Out and the Perceptive Radio. All are new experiences we have been building, underpinned by IP technology and a rethinking of our notion of media as a solid, monolithic block.

Lego Bricks

You can already see this happening with the Stems movement in music. But while audio manipulation on the open web is relatively easy via the Web Audio API, there's no real unified equivalent for video. SMIL was meant to be that, but it got sidelined as HTML5 pushed capabilities into browsers rather than media players.
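To make the stems idea concrete, here's a minimal sketch of layering two stems client-side with the Web Audio API. The file names and gain levels are invented for illustration, but the API calls are standard:

```javascript
// Minimal sketch: layer two music stems client-side with the Web Audio API.
// The stem URLs and gain levels are invented for illustration.
const audioCtx = new AudioContext();

async function loadStem(url) {
  const response = await fetch(url);
  return audioCtx.decodeAudioData(await response.arrayBuffer());
}

async function playMix() {
  const stems = await Promise.all([
    loadStem('stems/drums.ogg'),
    loadStem('stems/strings.ogg'),
  ]);
  const levels = [0.9, 0.4]; // a listener profile could set these weights
  stems.forEach((buffer, i) => {
    const source = audioCtx.createBufferSource();
    const gain = audioCtx.createGain();
    source.buffer = buffer;
    gain.gain.value = levels[i];
    source.connect(gain).connect(audioCtx.destination);
    source.start(); // all stems start together, mixed in real time
  });
}
```

Because each stem stays a separate node in the graph, the mix can be reweighted per listener without re-encoding anything. There is no equivalent node graph for video in the browser, which is the gap we kept running into.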

We have been working in this area and looked at many options, including Popcorn.js. In the end we started creating a video compositor library of our own, and recently open sourced it. Without that library, the project would still be stuck in our lab.
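For flavour, here is a hedged sketch of the kind of client-side compositing involved: drawing video frames onto a canvas and applying an effect per frame. This is my own illustration of the general technique, not the API of the library we open sourced:

```javascript
// Illustrative only: per-frame compositing of a video element onto a canvas.
// This is the general technique, not the open-sourced library's API.
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const gfx = canvas.getContext('2d');

function drawFrame() {
  // A crude "grade": boost saturation and contrast before drawing the frame.
  gfx.filter = 'saturate(140%) contrast(110%)';
  gfx.drawImage(video, 0, 0, canvas.width, canvas.height);
  if (!video.paused) requestAnimationFrame(drawFrame);
}
video.addEventListener('play', drawFrame);
```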

There has been some criticism about the personality side of things.

Data ethics is something we have been thinking and talking about a lot. Earlier this year we created a microsite summing up some of our thoughts and airing the opinions of some industry experts. The question of the filter bubble was raised by many, but we didn't include it in the short documentaries; maybe now would be a good time to dig them out.

But before I dive into the deep end, it's important to say we are using personality simply as a proxy for changing things. It could have been anything; someone even suggested we could have used shoe size. We chose personality after meeting Percepiv a long while ago and being impressed by their technology.

The next thing was to connect the data to changeable aspects of a film. Film makers are very good at this, and working with Julius Amedume (film director and writer) we explored the links between personality and effect. Colour grade and music, along with shot choices, were the ones we felt were most achievable.
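As a purely hypothetical sketch of that kind of link (the trait names are standard Big Five dimensions, but these particular mappings and thresholds are invented, not the ones we settled on with Julius):

```javascript
// Hypothetical mapping from personality traits to film variables.
// Thresholds and choices are invented for illustration.
const rules = [
  { trait: 'extraversion', above: 0.6, grade: 'warm, high contrast', music: 'upbeat' },
  { trait: 'openness',     above: 0.7, grade: 'cool, desaturated',   music: 'ambient' },
];

function chooseVariations(traits) {
  // First matching rule wins; otherwise fall back to a neutral default.
  const rule = rules.find(r => traits[r.trait] > r.above);
  return rule ? { grade: rule.grade, music: rule.music }
              : { grade: 'neutral', music: 'default score' };
}

console.log(chooseVariations({ extraversion: 0.8, openness: 0.4 }));
// → { grade: 'warm, high contrast', music: 'upbeat' }
```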

There's a lot more I could say, most of which was said at the This Way Up conference panel: The Film is Not Enough.

On the day before (Wednesday) we did our first somewhat public but secretive closed-door reveal of a very early preview of Visual Perceptive Media, with 16 industry people. It was originally meant to be a smaller number, but demand was such that we increased both the headcount and the machines needed to view it. The technical challenges did cause problems, but with the help of Anna from AND Festival, myself and Andy from R&D got some good feedback. We are still crunching the feedback, but I expect the frank discussions will be the most enlightening.

The panel discussion on Thursday was great. I gave the following presentation after Gabby asked me to give more context to the video here. I was my usual firestarter self and maybe caused people to think quite a bit. The trend towards events around film is welcome, and there are some great people doing amazing things, but I was questioning film itself. We should demand more from the medium of film…

Some of the feedback afterwards was quite amazing. I had everything from "This will not work!" (I spent 15 productive minutes talking with one person about that) to in-depth questioning of what we have done so far and how, which revealed nothing we weren't willing to share.

I had a good chuckle at this tweet and must remember to bring it up at my next appraisal.

I generally don't want to say too much because the research should speak for itself, but it's certainly got people thinking and talking, and hopefully more of the BBC R&D projects around object-based media will start to complete the picture of what's possible and show the incredible value the BBC brings to the UK.

https://twitter.com/AndyRae_/status/672436090389794816

Variations not versions

https://twitter.com/martynkelly/status/624266599000838150

It was Si Lumb who tweeted me about Pixar’s Inside Out contextual visuals.

Now I know this isn't anything new (films have had regional differences for a long while), but it's good to see it discussed openly, and it was interesting to read about how (we think) they do it.

It’s interesting to note that the bottom five entries of the list, starting with “Thai Food,” remain consistent throughout (maybe Disney/Marvel Studios’ digital wizards couldn’t replace the stuff that Chris Evans’ hand passed over), but the top items change a lot.

Which leads me to think it's all done in post-production using things like Impossible Software?

Post-producing this stuff is a mistake in my mind, but then again I'm working on the future of this kind of thing with Perceptive Media. I also imagine the writer and director had no time to think about variations for different countries, or weren't paid enough?

Rather than write up my thoughts on how to do this with digital cinema (isn't this part of the promise of digital cinema? Besides, I'm writing a paper with Anna Frew about it), I thought it was about time I wrote something about the project I'm currently working on.

Visual Perceptive Media

Visual Perceptive Media is a short film which changes based on the person watching it. A phone application builds a profile of the user from their music collection and some basic questions, and that profile then informs which variations are applied to the media when it's watched.
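Something like this, to sketch the hand-off (the field names are invented; the real app's schema isn't public):

```javascript
// Hypothetical shape of the profile the phone app hands to the player.
const viewerProfile = {
  traits: { openness: 0.72, extraversion: 0.35 }, // questions + music analysis
  topGenres: ['ambient', 'electronica'],          // from the music collection
};
// Before playback starts, the player resolves this into concrete choices:
// which score, which grade, which alternative shots and effects to schedule.
```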

The variations are applied in real time and include different music, different colour grading, different video assets and effects, and much more. We're using the Web Audio API, WebGL and other open web technologies.
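To give a feel for the grading side, here is a hedged sketch of a WebGL fragment shader that warms or cools each frame by a per-viewer uniform. It's illustrative, not the project's actual shader:

```javascript
// Illustrative fragment shader for real-time grading; each video frame is
// uploaded as a texture and drawn through it, so the grade never touches
// the source file. Not the project's actual shader.
const fragmentShaderSource = `
  precision mediump float;
  uniform sampler2D uFrame; // current video frame as a texture
  uniform float uWarmth;    // -1.0 (cool) .. 1.0 (warm), set from the profile
  varying vec2 vTexCoord;
  void main() {
    vec4 colour = texture2D(uFrame, vTexCoord);
    colour.r += 0.1 * uWarmth; // warmer: push reds up...
    colour.b -= 0.1 * uWarmth; // ...and blues down
    gl_FragColor = colour;
  }
`;
```

Because the grade is just a uniform on the draw call, it can change continuously during playback without any re-encoding or alternative file downloads.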

What makes this different or unique…?

  • We had buy-in from the script writer and director (Julius Amedume was both, and amazing) right from the very start, which makes a massive difference. The scripts were written with all this in mind.
  • It was shot and edited with its intended purpose of real-time variation in mind.
  • Most things we (BBC R&D) have done in the responsive/perceptive area have been audio based, and this I would say is a bit of a moonshot moment, like Breaking Out three years ago! Just what I feel the BBC should be doing.
  • Keeping with the core principle of Perceptive Media, the app created by Manchester-based startup Percepiv (formerly Moment.us; I wonder if working with us had a hand in the name change?) uses their own closely related technology and mainly implicit data to build the profile. You can check out music+personality on your own Android or iPhone now.

It's going to be very cool, and I believe the technology has gotten to the point where we can do this so seamlessly that people won't even know or realise (this is something we will be testing in our lab). As Brian McHarg says, there are going to be some interesting water cooler conversations, but the slight variations are going to be even more subtle and interesting.

This is no branching narrative

I have been using the word variations throughout this post because I really want us to get away from the notion of edits or versions. I recently had the joy of going to Learn Do Share Warsaw, and was thinking about how to explain the thinking behind the Visual Perceptive Media project. How do you explain a project which has 2 film genres, 6 established endings, 20+ music genres and an endless number of lengths and effects?

This certainly isn't a branching narrative, and the idea of a branching narrative is not apt here. If it were one, it would have upwards of 240 versions, not including any of the more subtle effects to increase your viewing enjoyment. I think of them as variations, and the language works when you consider Photoshop's Variations tool. That was very handy when talking to others not so familiar with Perceptive Media. But it's only a step, and it makes you consider there might be editions…
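The back-of-envelope count is simple: the discrete choices multiply, and the continuous parameters push it beyond counting altogether.

```javascript
// Where "upwards of 240" comes from: discrete choices multiply.
const genres = 2, endings = 6, musicGenres = 20;
console.log(genres * endings * musicGenres); // 240 distinct cuts already
// Continuous parameters (grade strength, length, effect mixes) make the
// real space of variations effectively unbounded, so "versions" breaks down.
```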

I was talking to my manager Phil about it before heading to Warsaw, and we came up with something closer to the tesseract/hypercube in Interstellar (if you've not seen it, spoiler alert!).

Unlimited Variations

Unlimited isn't quite right, but the notion of time and variations which intersect is much closer to the idea. I said to Si Lumb that maybe the way to show this would be in VR, as I certainly can't visualise it easily.

When it's up and running, I'd love people to have a go so we can get some serious feedback.

On a loosely related subject, Tony Churnside also tweeted me about Perceptive Media breaking into the advertising industry.