Peek inside BBC R&D via Facebook

LJ Rich contacted me asking if I was up for an experiment. Of course I said yes, and without really knowing what I was signing up for, a few weeks later I was roped into taking part in the BBC News #24Live stream on Facebook. It was a bit of a surprise but an enjoyable one; shame about the technical difficulties at the start.
Unfortunately the only way to get the video out of Facebook without hacking away at it is to embed it complete with the JavaScript code. So enjoy it, and flush your cache afterwards if you are not a FB fan.

 

 

We’re back! #24Live NOW: We’re taking an interactive look inside BBC Research and Development. Ever wanted to know what…

Posted by BBC News on Thursday, 7 April 2016

The Next Web peers around BBC R&D…

Perceptive Radio v2
The second-generation experimental Perceptive Radio hardware. Credit: Martin Bryant / TNW

It’s always great to have some of the work in the press, and to see which bits they pick up on. But even better is when it gets framed alongside other work, such as projects happening in the same lab or similar fields.

In recent times, Ian Forrester has turned his attention to ‘Visual Perceptive Media.’ As we first reported late last year, this applies the same principles to video-based content.

For the first experiment in Visual Perceptive Media, the BBC worked with a screenwriter who created a short drama with multiple starts and endings. In addition to the variable plot, a number of different soundtracks were prepared, and the video was treated with a range of color gradings to give it different moods, from cold and blue to warm and bright.

Good to see The Next Web picking up on the effort we put into making all this very open. That openness goes back to before my time at BBC Backstage, but having done things like Backstage certainly makes it easier to justify, given we are a public organisation.

One thing that struck me when talking to the people working on all of these projects was that they were using the Web browser as their canvas and working with free-to-use, open technologies like OpenGL, Web Audio, Twitter Bootstrap and Facebook React.

And what better end than…

Some of the most interesting ideas for how that might happen are coming out of BBC R&D.

Perceptive media meets the visual

visual pm realtime grading
Changing the colour grade

The Next Web broke the story after seeing a tweet from BBCRD on Thursday, but others have followed.

So what is this visual perceptive media thing?

Imagine a world where the narrative, background music, colour grading and general feel of a drama is shaped in real time to suit your personality. This is called Visual Perceptive Media and we are making it now in our lab in MediaCityUK.

The ability to customise or even personalise media (video in this case) in a browser using no special back end technology or delivery mechanism is fascinating. It’s all JavaScript, client-side technologies and standard HTTP in a modern web browser. Because of this it’s open, not proprietary, and I believe scalable (the way it should be). This also means that when we do make it public, the most people can experience it, fitting with the BBC’s public purpose.
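As a toy illustration of how far plain client-side tech can go, here’s a minimal sketch (hypothetical, not the project’s actual code) that maps a viewer “mood” to a CSS filter string a browser could apply straight onto a video element:

```javascript
// Hypothetical sketch: map a "mood" from a viewer profile to a CSS
// filter string that could be applied to a <video> element client-side.
// No special back end needed -- this is all standard browser tech.
function gradeForMood(mood) {
  const grades = {
    cold: "saturate(0.7) hue-rotate(-10deg) brightness(0.9)",
    warm: "saturate(1.2) sepia(0.15) brightness(1.05)",
    neutral: "none",
  };
  return grades[mood] || grades.neutral;
}

// In a browser you might then do:
//   document.querySelector("video").style.filter = gradeForMood("warm");
console.log(gradeForMood("cold"));
```

The point isn’t the specific filter values (which I’ve made up); it’s that a colour grade becomes one line of client-side code rather than a separate encode of the film.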

More details of the project will emerge soon, but I wanted to make certain things clear.

The project isn’t a one-off; it’s one of a line of projects around our object-based media ambitions. Some others were used at Edinburgh this summer, and IP Studio is a big part of it. There have even been some projects very similar to Visual Perceptive Media, including Forecaster.

Perceptive Media (implicit) has always been about audience experiences and sits as an alternative to responsive media (explicit), alongside Breaking Out and the Perceptive Radio. All are new experiences we have been building, underpinned by IP technology and a rethinking of our notion of media as a solid monolithic block.

Lego Bricks

You can already see this happening with the movement around stems in music. However, while manipulating audio in the open environment of the web is relatively easy via the Web Audio API, there’s no real unified equivalent for video. SMIL was meant to be that, but it got sidelined as HTML5 pushed the capabilities into the browsers rather than media players.
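To give a flavour of why audio is the easy half: a sketch of the equal-power crossfade maths that would drive two Web Audio GainNodes when blending soundtracks per listener (the function name is my own, not from any BBC library):

```javascript
// Hypothetical sketch: equal-power crossfade between two soundtracks,
// the kind of per-listener audio mixing the Web Audio API makes easy.
// t runs from 0 (all track A) to 1 (all track B); equal-power curves
// keep perceived loudness roughly constant through the blend.
function crossfadeGains(t) {
  return {
    a: Math.cos(t * 0.5 * Math.PI),
    b: Math.cos((1 - t) * 0.5 * Math.PI),
  };
}

// In a browser these values would drive two GainNodes, e.g.:
//   gainNodeA.gain.value = crossfadeGains(t).a;
//   gainNodeB.gain.value = crossfadeGains(t).b;
```

Nothing like this exists off the shelf for compositing video frames, which is exactly the gap the next paragraph describes.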

We have been working in this area and looked at many options, including Popcorn.js. In the end we started creating a video compositor library, which we recently open sourced. Without that library, the project would still be in our lab.

There has been some criticism about the personality side of things.

Data ethics is something we have been thinking and talking about a lot. Earlier this year we created a microsite summing up some of our thoughts and raising the opinions of some industry experts. The question of the filter bubble was talked about by many, but we didn’t include it in the short documentaries; maybe now would be a good time to dig them out.

But before I dive into the deep end, it’s important to say we are using personality simply as a proxy for changing things. It could have been anything; as someone suggested, we could have used shoe size. We went with personality after meeting Percepiv a long while ago and being impressed by their technology.

The next thing was to connect the data to changeable aspects of a film. Film makers are very good at this, and working with Julius Amedume (film director and writer) we explored the links between personality and affect. Colour grade and music were key ones, along with shot choices; these we felt were most achievable.

There’s a lot more I could say, most of which was said at the This Way Up conference panel: The Film is Not Enough.

On the day before (Wednesday) we did our first somewhat public but secretive closed-door reveal of the very early preview of Visual Perceptive Media with 16 industry people. It was originally meant to be a smaller number, but the demand was such that we increased the number of people, and the machines needed to view it. The technical challenges did cause problems, but with the help of Anna from AND Festival, Andy from R&D and I got some good feedback. We are still crunching the feedback, but I expect the frank discussions will be the most enlightening.

The panel discussion on Thursday was great. I gave the following presentation after Gaby asked me to give more context to the video. I was my usual firestarter self and maybe caused people to think quite a bit. The trend towards events around film is welcome, and there are some great people doing amazing things, but I was questioning film itself. We should demand more from the media of film…

Some of the feedback afterwards was quite amazing. I had everything from “This will not work!” (I spent 15 productive minutes talking with one person about this) to in-depth questioning of what we have done so far and how (I revealed nothing).

I had a good chuckle at this tweet and must remember to bring it up at my next appraisal.

I generally don’t want to say too much because the research should speak for itself, but it’s certainly got people thinking and talking, and hopefully more of the BBC R&D projects around object media will start to complete the picture of what’s possible and show the incredible value the BBC brings to the UK.

https://twitter.com/AndyRae_/status/672436090389794816

What can cinema learn from broadcasting?


It’s weirdly ironic that I wrote a blog post about what cinema can learn from TV three years ago, almost to the day of the This Way Up conference in December I’m about to talk at.

The This Way Up conference is a film exhibition innovation conference which launched last year. It returns with a jam-packed two-day event that promises to inspire and enlighten, provoke and challenge, connect and share.

I’ll be doing two things on behalf of BBC R&D.

The first one is on Wednesday: a lunchtime workshop around an unreleased Perceptive Media project I have been working on for most of the year.

Lunchtime Lab: BBC Perceptive Media Want to contribute to the evolution of storytelling? BBC Research and Development’s North Lab, based at MediaCityUK in Salford, showcase their latest experiment in a top secret, closed door workshop. A select group of THIS WAY UP attendees will try out a new smartphone app before being shown a premiere of a short film that looks to change the way we engage. Further details are strictly under wraps, but the BBC are looking for volunteers to take part in this limited study and to share and discuss their experiences with other participants. Workshop led by Ian Forrester, BBC R&D North lab. Results from the workshop will be revealed at Thursday’s The Film is Not Enough session.

It’s really research in the wild, and we have no idea how the audience will react to this. The results will be intriguing to say the least.

On the Thursday I’ll be on a panel talking about the changes which need to happen to regain the cinema audience.

The Film is not Enough – With the rise of event cinema, alternative content, enhanced screenings, sing-a-longs and tweet-a-longs, is there a danger that the original purpose of cinemas is being lost as audiences demand novelty and gimmickry? This panel will hear from those folk changing audience perceptions and expectations of what ‘coming to the cinema’ means. Panel includes: Tony Jones (Cambridge Film Festival), Jo Wingate (Sensoria), Rhidian Davis (BFI), Gaby Jenks (Abandon Normal Devices – chair), Lisa Brook (Live Cinema), and Ian Forrester (BBC Research & Development).

I’ll talk about details of the project experienced on Wednesday and explain why this is a good and scalable way to regain the TV and maybe the cinema audience. The panel should be good, with a number of varied viewpoints and Gaby Jenks from Abandon Normal Devices chairing the debate.

What cinema can learn from broadcast will be driven home by the keynote from Nick North, the director of Audiences at the BBC.

Look out for more details soon… but there’s already plenty of interest…

Programmatic media sounds a bit like Perceptive Media?

Kill Bill Advertising

I swear Tony sent me a tweet with a pointer to this piece titled Programmatic Beyond Advertising: A Not-So-Distant Future in CMF Trends.

It’s mainly about advertising, including a bit about the just-in-time advertising space which is coming about because of the lightning speed of data and the ability to replace advertising/content on the fly.

I’d heard it all before, but then there was this part…

…what if programmatic could be used for content other than advertising?

If we extend this thinking (and our imagination) a little further to consider the possible emergence of a new distribution method for cultural or editorial content based on programmatic logic and methods, we could ask whether these new “programmatic” models could be applied to the automated distribution of film and television content based on audiences and their data.

Based on this logic, “programmatic content distribution” could be imagined as a flow in which the data collected from users would trigger an automated rights transaction and content delivery process between right-holders and broadcasters. The final result would be the broadcasting of content corresponding to the preferences of the targeted user.

Yes indeed, this is the start of Perceptive Media, if you haven’t already guessed. It’s always good to hear others make the same leaps in thinking, of course…

Perceptive media in wired magazine

Programmatic media? I don’t think that will fly as a term, I’m sorry to say. Although I have to say, this description sounds more like responsive media than perceptive media.

It was at Learn Do Share Warsaw that I first heard Lance Weiler talk about them in quite different contexts, and it did make sense. Phil has been grouping them together as contextual media, which works as a superset of both, although I worry about previous examples of contextual media clouding things.

The next part of the article I’m less interested in, but it’s something I have thought about a tiny bit…

Moreover, it would be possible to monetize this video content by attaching superimposed or pre-roll ads to it, as commonly seen on video aggregation platforms.

This valuable collection of user data and preferences for viewing a movie or television show could be done on a voluntary basis; for example, users would simply answer a few questions on their mood, the type of movie or series, and the desired language and duration so that the platform can preselect and “program” content that meets their criteria.

But we know that the Web, which is very advanced in big data collection, is already capable of gathering this data using algorithms. Users’ actions on a given site—the keywords they search for, the links they click on, their daily search history—can indicate to the platforms what type of content they are likely to be interested in.

The problem they will have is the explicit nature of the input, I feel. Yes, it’s easier on the web, but the person is leaning forward, interacting most of the time anyway. When you get into the living room it gets a little more tricky, and an implicit approach is better in my mind. Yes, it can get creepy, but it doesn’t break the immersion, and in my mind that’s very key.

The essence of the programmatic distribution mechanism would therefore be as a recommendation super-engine, more sophisticated than that currently found on various platforms.

Why is it everybody thinks of fancy recommendation engines? If this is the ambition of the industry, I feel we should be breaking into another dimension. Hopefully some of the things I’m responsible for will match that ambition/moonshot.

Is the future of user interface design actually perceptive?

Jason Silva, in his latest Shots of Awe, talks about the paradox of choice we all face with the advances in technology and increased choice. He also mentioned the Fast Company piece about the trend towards less choice, especially in user interface design.

Companies are catching on quickly. With the realization that data is much more valuable when used with other information, protocol is increasingly being adopted to ensure that data sharing is seamless. With the explosion of both data collection and unification, we’re creating an environment that, while not fully exposed, is at least open enough for information to be meaningfully aggregated.

Taken together in four steps—collection, unification, analysis, and implementation—we have an environment where information is working for you behind the scenes to do things automatically, all in the service of letting you focus on what’s most important to you in work and life.

I have concerns about this, along with my thoughts about who is writing the software and what their opinions are.

What Jason and others are talking about is contextual design, or as I prefer, perceptive design (along with perceptive media), as context only explains half of the solution. Frankly, anticipatory design sounds like when I first talked about intrusive media. It will never find mindshare with a name like that!

I think of Apple products as anticipatory and anti-hacker. I remember the blog I wrote when I saw Aral talk about user experience at Thinking Digital in 2013.

Perceptive design needs to empower people with chances and experiences for mastery, not enslave them and ultimately make them feel trapped, lost and cut off from others.

Why the ability to understand spoilers is interesting for Perceptive Media

https://twitter.com/Jordan94jb/status/636896487981600768/

Most people don’t really care about spoilers until they are spoiled by somebody or something they read. It’s incredibly frustrating to be in that state of wonder and then have somebody break it for you. There are many great spoilers out there, like the endings of Lost, Breaking Bad, etc. I remember joking, but with quite a harsh tone, telling friends and family not to tell me the end of Lost while I was in hospital.

The problem is that with all the media channels we have, it’s more difficult to put yourself in a bubble and discover the media’s conclusion in your own way. This is something others have thought about a lot, and this Chrome extension is an interesting take on the problem; unfortunately it only works within the Trakt.tv site.

Trakt.tv but without the spoilers. Titles, screenshots and comments are all able to be obscured by this extension. This extension aims to prevent as many spoilers as possible on Trakt.tv with very customisable options.

OK, nice, but what’s this got to do with Perceptive Media?

Perceptive Media is most effective when there is a semantic understanding of the narrative, plot arcs and implicit desires of the audience.

With spoilers, if you knew where the audience was up to and how long ago they watched it (both of which Trakt.tv can do), you can infer what to hold back from them, so they are not spoiled of the next big surprise or twist. You can also let the stuff which isn’t important, or has been seen already, pass the filter instead of trying to hold it all back and frustrating the audience.

Basically, spoiler prevention paves the way to an understanding of media in the way needed for perceptive media. Today it’s titles, screenshots and comments. Tomorrow it’s popups, adverts, etc. In the future, how about parts of the news, articles, posts, parody, references to plot twists, etc…?
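The inference described above can be sketched in a few lines; everything here (field names included) is hypothetical, but it shows how watch-progress data turns spoiler blocking from blanket censorship into a filter:

```javascript
// Hypothetical sketch: decide whether a comment should be obscured,
// given how far the viewer has watched (the kind of data Trakt.tv holds).
// A comment is a spoiler only if it references an episode beyond the
// viewer's progress; older material is let through rather than blocked.
function shouldObscure(comment, viewerProgress) {
  return comment.episode > viewerProgress.lastWatchedEpisode;
}

const viewer = { lastWatchedEpisode: 4 };
console.log(shouldObscure({ episode: 7, text: "That twist!" }, viewer)); // true
console.log(shouldObscure({ episode: 2, text: "Old news" }, viewer));    // false
```

A real filter would obviously need semantic understanding of what a comment refers to, which is exactly the hard part Perceptive Media needs anyway.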

Variations not versions

https://twitter.com/martynkelly/status/624266599000838150

It was Si Lumb who tweeted me about Pixar’s Inside Out contextual visuals.

Now I know this isn’t anything new; films have had region differences for a long while. But it’s good to see it discussed openly, and it was interesting to read about how (we think) they do it.

It’s interesting to note that the bottom five entries of the list, starting with “Thai Food,” remain consistent throughout (maybe Disney/Marvel Studios’ digital wizards couldn’t replace the stuff that Chris Evans’ hand passed over), but the top items change a lot.

Which leads me to think it’s all done in post-production using things like Impossible Software?

Post-producing this stuff is a mistake in my mind, but then again I’m working on the future of this kind of thing with Perceptive Media. I also imagine the writer and director had no time to think about variations for different countries, or weren’t paid enough?

Rather than write up my thoughts on how to do this with digital cinema (isn’t this part of the promise of digital cinema? Plus I’m writing a paper with Anna Frew about it), I thought it was about time I wrote something about the project I’m currently working on.

Visual Perceptive Media

Visual Perceptive Media is a short film which changes based on the person who is watching it. It uses data from a phone application, which builds a profile of the user via their music collection and some basic questions. That data is then used to inform which variations to apply to the media when watched.

The variations are applied in real time and include different music, different colour grading, different video asset effects and much more. We’re using the Web Audio API, WebGL and other open web technologies.
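To make the idea concrete, here is a hypothetical sketch of the mapping step: a simple profile (invented trait names and thresholds, nothing like the real model) being turned into a set of variations:

```javascript
// Hypothetical sketch (not the actual project code): turn a simple
// personality profile into a set of variations for the film.
// Trait names and thresholds are invented for illustration.
function variationsFor(profile) {
  return {
    grade: profile.warmth > 0.5 ? "warm" : "cold",
    music: profile.energy > 0.5 ? "electronic" : "ambient",
    ending: profile.openness > 0.5 ? "open" : "resolved",
  };
}

console.log(variationsFor({ warmth: 0.8, energy: 0.3, openness: 0.9 }));
// { grade: 'warm', music: 'ambient', ending: 'open' }
```

The real system is far richer than a few thresholds, but the shape is the same: profile in, variation set out, applied at playback time.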

What makes this different or unique…?

  • We had buy-in from the script writer and director (Julius Amedume was both, and amazing) right from the very start, which makes a massive difference. The scripts were written with all this in mind.
  • It was shot and edited with its intended purpose of real-time variations in mind.
  • Most things we (BBC R&D) have done in the responsive/perceptive area have been audio based, and this I would say is a bit of a moonshot moment, like Breaking Out 3 years ago! Just what I feel the BBC should be doing.
  • Keeping with the core principle of Perceptive Media, the app, which Manchester-based startup Percepiv (formerly Moment.us; I wonder if working with us had a hand in the name change?) created using their own very related technology, mainly uses implicit data to build the profile. You can check out music+personality on your own Android and iPhone now.

It’s going to be very cool, and I believe the technology has gotten to the point where we can do this so seamlessly that people won’t even know or realise (this is something we will be testing in our lab). As Brian McHarg says, there are going to be some interesting water cooler conversations, but the slight variations are going to be even more subtle and interesting.

This is no branching narrative

I have been using the word variations throughout this post because I really want us to get away from the notion of edits or versions. I recently had the joy of going to Learn Do Share Warsaw, where I was thinking about how to explain our thinking with the Visual Perceptive Media project. How do you explain a film which has 2 genres, 6 established endings, 20+ music genres and an endless number of lengths and effects?

This certainly isn’t a branching narrative, and the idea of branching narrative is certainly not apt here. If this were a branching narrative, it would have upwards of 240 versions, not including any of the more subtle effects to increase your viewing enjoyment. I consider them variations, and the language works when you consider the Photoshop Variations tool. This was very handy when talking to others not so familiar with perceptive media. But it’s only a step, and makes you consider there might be editions…
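The arithmetic behind that 240 figure: treating each axis as a set of discrete choices, a branching-narrative framing would need one pre-rendered version per combination.

```javascript
// Each axis of variation multiplies the number of pre-rendered
// "versions" a branching-narrative approach would need.
const genres = 2;
const endings = 6;
const musicGenres = 20;

const versions = genres * endings * musicGenres;
console.log(versions); // 240
```

And that is before counting lengths or subtle effects, which is why the variation framing (compute the combination at playback) beats rendering versions up front.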

I was talking to my manager Phil about it before heading to Warsaw and came up with something closer to the tesseract/hypercube in Interstellar (if you’ve not seen it, spoiler alert!).

Unlimited Variations

Unlimited isn’t quite right, but the notion of time and variations which intersect is much closer to the idea. I said to Si Lumb that maybe the way to show this would be in VR, as I certainly can’t visualise it easily.

When it’s up and running I’d love people to have a go, so we can get some serious feedback.

On a loosely related subject, Tony Churnside also tweeted me about Perceptive Media breaking into the advertising industry.

Perceptive advertising is coming…?

not too much h20

Google wants to bring TV ads into the 21st century. The company has quietly announced a new local advertising service for Google Fiber that will make TV ads behave a lot more like internet ads. Using data from its set-top-boxes, Google (and advertisers) will know precisely how many times a particular local ad has been watched in homes with Google Fiber service. That might not sound like a big deal, but the industry-standard Nielsen ratings simply don’t offer that kind of information. Like on the web, Google will only charge for the number of views an ad receives.

We all knew it was coming, but I always wondered why Google and the other data-driven companies hadn’t really done anything about the massive opportunity of personalised marketing.

It’s not yet clear precisely how the system will work, but, similar to Google’s cornerstone AdWords business, algorithms might determine the best time to show you a certain ad. For instance, if you’re watching the news before flipping over to the football game, the system might determine that you should be served a different ad during halftime than your buddy who switched over to the game from Pawn Stars. Google says it will even be able to swap out ads on DVR’d programs, so you won’t be served an old or irrelevant advertisement if you watch a program a week after it originally aired. Fiber customers will have an option to disable ads based on viewing history

But that is just the start. There is still the notion that adverts are solid pieces of media which must be played from start to end. This is a mistake, which will break down over time. Context is king, yes, but there is a big question about how personal you should get.

Something Doc Searls talks a lot about… and cue the Uncanny Valley graph

Uncanny_valley

I am worried that in the rush to deliver context-sensitive advertising and marketing, too much will fall into the uncanny valley space. So much so that it will ruin the great uses of data and context like Perceptive Media. I always said it was about little friendly touches, not a sledgehammer to the face or other senses…

Imagine if media could scale?

Variable Length Documentary

People always ask what I do at work or at the BBC. I generally, and quite flippantly, say I build the future. It may seem like a bit of a joke, but there’s quite a lot of truth to it too. One such area of research is around the future of media and storytelling.

After the Perceptive Radio, colleagues and I decided the radio needed content of its own. This led to the idea of a variable length documentary, first shown at Sheffield Doc Fest, which would scale based on a number. That number could be time, movement, attention or something else.

Responsive Radio is a new experimental way to make radio content more personalised, relevant and flexible. Responsive radio creates the story you want at the length you’ve time for. And this is just the start of a broadcasting revolution.

Imagine if Serial or any podcast could scale to fit your journey to work. That’s the level of personalisation we’re talking about here. Non-creepy, and actually useful.
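One naive way the scaling could work (purely my sketch, not how Responsive Radio is actually built): keep segments in editorial priority order until the time budget runs out, then play what survives in story order.

```javascript
// Hypothetical sketch: fit a programme to a target length by keeping
// segments in editorial priority order until the time budget is spent,
// then playing the survivors back in original story order.
function fitToLength(segments, targetSeconds) {
  const chosen = [];
  let total = 0;
  for (const seg of [...segments].sort((a, b) => b.priority - a.priority)) {
    if (total + seg.duration <= targetSeconds) {
      chosen.push(seg);
      total += seg.duration;
    }
  }
  return chosen.sort((a, b) => a.order - b.order);
}

// A 4.5 minute piece squeezed into a 2.5 minute commute:
const segments = [
  { order: 1, duration: 60, priority: 3 },  // intro: must keep
  { order: 2, duration: 120, priority: 1 }, // colour/detail: first to go
  { order: 3, duration: 90, priority: 2 },  // conclusion
];
console.log(fitToLength(segments, 150).map(s => s.order)); // [ 1, 3 ]
```

A real production would also need transitions that still make narrative sense after cuts, which is where the editorial craft comes in.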

The responsive radio (as it became) morphed into a much bigger project, and finally you can go and experience it for yourselves at BBC Taster. http://www.bbc.co.uk/taster/projects/responsive-radio

Perceptive Radio on BBC Radio 4

Official Perceptive Radio photo

It finally happened… Perceptive Media, and more specifically the Perceptive Radio, got a mention on BBC Radio 4’s You and Yours today. Now to be fair, this isn’t the first time it’s been mentioned on the BBC, but having futurebroadcasts.com mentioned live on air should increase the sample size for feedback, which is critical for our research into Perceptive Media.

In usual style I made an archived version on archive.org, although to be fair You and Yours stays on iPlayer for about a year at a time.

Touches of Perceptive Media in odd places

Virgin Trains talking toilet

I wrote about the idea of Perceptive Media at a theme park a while ago, and frankly there are some equally fun places it could be used.

Every time I go to London on Virgin Trains, I laugh inside to myself about the Virgin toilet signs. They read…

“Please don’t flush nappies, sanitary towels, paper towels, gum, old phones, unpaid bills, junk mail, your ex’s sweater, hopes, dreams or goldfish down this toilet.”

In the bigger toilets the signage is spoken aloud, and “your ex’s sweater” is swapped with “your *friend’s sweater”. It’s always gender specific.

My first thought was that it could be done randomly, or maybe it cycles every other time? If it was up to me, I would hook it up to the toilet seat: if the seat is down, play the boyfriend version; if the seat is up, play the girlfriend version. I assume it wouldn’t be noticed by most, but those who did notice would think it was great!
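The seat idea as code, for fun (a throwaway sketch, obviously, with my own invented names):

```javascript
// A playful sketch of the seat-switch idea: choose which recorded
// announcement to play from the state of the toilet seat.
function announcementFor(seatIsUp) {
  // Seat down -> boyfriend version; seat up -> girlfriend version.
  return seatIsUp ? "girlfriend version" : "boyfriend version";
}

console.log(announcementFor(false)); // "boyfriend version"
```

One boolean sensor and the sign becomes a tiny piece of perceptive media.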

A little bit of game-play in real life.

Perceptive music and beyond

Pet Shop Boys at the Brits 2009

Media relies on the ability to engineer people’s emotions. This can sound pretty bad, but all media does it, from romantic comedies made for cinema to the old classics from Shakespeare. The effect of media, and ultimately storytelling, has always fascinated me, and I’m sure it’s the same for most people. It’s hardwired into us, as Jason Silva puts it.

The ability to engineer someone’s emotions is interesting from a story point of view. However, if you add broadcast, you can do this to a nation or the whole world. But beyond the 10% of any audience who are highly suggestible, how do you reach the others?

A 600,000-person study Facebook and Cornell University did a while back, which recently came to light, might hold a clue about how. However, there has been major push-back on the study for ethical reasons.

Facebook’s controversial study that manipulated users’ newsfeeds was not pre-approved by Cornell University’s ethics board, and Facebook may not have had “implied” user permission to conduct the study as researchers previously claimed.

Starting from a different place is Moment.us (a little disclaimer to say I may be working with this Manchester-based startup in the near future, but only because their technology is mind-blowing).

Moment.us tracks and follows the user’s media habits. It watches as you choose songs (a bit like scrobbling apps such as Last.fm) and records the context of when you pick them, like certain types of song when you’re going for a ride to work on a sunny day.

Our proprietary algorithm, contextual database, analytics, understanding of and expertise in media, technology and user behaviour. Highly relevant, hyper-personal, socially integrated, context driven mobile experiences for consumers and unrivalled contextual consumer data for commercial organisations.

A while ago we pitched a project loosely called In Tune at the BBC Radio One Connected Studio, which we felt was very credible, but unfortunately the judges disagreed. Maybe it was the way we pitched it, but there was a lot of doubt we had the data to do what we were planning to do.

I have seen first-hand the data points and been amazed at what our patterns of music listening can reveal about ourselves. Imagine what you could do if you had access to that data and could engineer the music, and therefore the experience.

Interestingly Google is getting in on the idea as they recently bought Songza.

2 conferences in 1 week (Sheffield Doc Fest & Primeconf)

This week just passed, and I’ve got to say it wasn’t half as bad as it seemed on paper, or at least in my calendar.

Sheffield documentary festival

Variable Documentary preview

I headed across to Sheffield on Sunday to give a talk with Tony Churnside at the Sheffield International Documentary Festival about Perceptive Media. It went very well, and I kind of wished I had stayed over so I could keep some of the conversations going; there was plenty else going on which I wanted to check out.

The festival seems to take over the whole city, and the weather was great on the Sunday and Wednesday. On Wednesday I didn’t talk but rather supported some colleagues who showed an early preview of the variable length documentary.

Next year I hope we will have a lot more to show, and next year I hope to spend more time at the rest of the festival.

Best of British / Primeconf

Primeconf: Best of British

This conference started out on Kickstarter and became a real event, arranged by long-time friend Thayer Prime. It was a bit of a crazy idea, but the result was something worthwhile and maybe the start of something new and interesting.

The speakers were, as you can imagine from the title, all British.

It really was something special, and it was a joy to be a small part of the whole event.

I gave a shorter version of the dating, lies and algorithms talk I have been wanting to give, so look out, it may be back sooner or later as a more involved talk. It went down well, although I certainly took out all the personal and non-PG-13 stuff to fit with the code of conduct. Something which sadly seemed to be forgotten later in the day, with swearing and a questionable slide.

Regardless, I learned a number of things, including that Priya is behind changify.org (something which we tried to do ages ago in the form of wedreamthecity), which could be helpful with gentrification and communities. Other stand-out presentations included Pete Duncanson, Chris Thorpe, Herb Kim, Dr Tom Crick and Amy Mather, with a special mention for Mazz Mosley’s super-low-budget style of presentation. Loved it! Good to finally meet her too.

Is Thayer going to do it again? I certainly think she should… I’m actually thinking Herb and Thayer could create something special together. The venue was great (the Royal Institution, yes, the one they do the Christmas Lectures from!) and there was a good turnout.

Both events were well worth the effort of attending and speaking at… For such a packed week, going to London twice and Sheffield twice, I actually feel OK. Just a shame my treat of going to Thorpe Park wasn’t anything like going in March/April.