Everybody is busy in the run-up to the holidays, but I didn’t expect to be out of the country so much in November. I had planned for September to be busy, October to be about Mozfest (I still feel guilty I haven’t written about how Mozfest 2016 went), and then to focus on writing the TVX 2017 paper with Anna.
I’ll be talking about object-based media, the big advantages of pursuing an internet-first strategy, and experiences in storytelling. I would be much more on the ball if I hadn’t finally caught the cold I’d managed to avoid all the way since May.
I have always wanted to take to the stage at Thinking Digital. Three years ago I joined Adrian at Thinking Digital Newcastle, where the Perceptive Radio got its first public showing during a talk about the BBC’s innovation progress since moving up to the north of England. This time I got the chance to build on that and talk about the work we are doing in object-based media, data ethics and the internet of things. I’ve been rattling this around my head and started calling it hyper-reality storytelling.
Visual Perceptive Media is made to deliberately nudge you one way or another using cinematic techniques, rather than the sweeping changes seen in branching narratives. Each change is subtle, but these techniques are used in film making every day, which raises the question: how do you even start to demo something which has 50,000+ variations?
This is also the challenge we are exploring for a BBC Taster prototype. Our CAKE prototype deployed a behind-the-curtains view as well, which helped make clear what was going on; it seems Visual Perceptive Media needs something similar.
I honestly do think about this problem in Visual Perceptive Media and Perceptive Media generally: something which is meant to be so subtle you hardly notice it, yet you need to demonstrate it and show the benefits.
It’s tricky, but lifting up the curtain seems to be the best way. I am of course all ears for better ways…
You could say it’s like a theatre cast in your living room, and it starts to answer some of the questions about perceptive media killing the shared experience. There are already people hacking things to media; BBC R&D even experimented in this area a long time ago with the famous Dalek example, and of course the Perceptive Radio was just the start. The second version of the Perceptive Radio actually included more connectivity options to reach out and interact with devices in the local space, such as Philips Hue lights, Bluetooth devices, etc. It seems so simple, but the big difference is that those devices usually react to the media, rather than being thought about at the script/narrative level. With object-based media (media + metadata) we can get to a level much richer and more interesting than ever imagined previously.
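To make the media + metadata idea concrete, here is a minimal sketch of what a script-level cue might look like. The cue format, timings and device names are hypothetical illustrations for this post, not the actual Perceptive Radio implementation:

```javascript
// Sketch: object-based media cues driving devices in the local space.
// Cues are authored at the script level, alongside the media objects,
// so the lights are part of the narrative rather than reacting to audio.

const cues = [
  { at: 12.0, device: "hue", action: { colour: "#1b2a4a", note: "cold, tense scene" } },
  { at: 47.5, device: "vibration", action: { pattern: [200, 100, 200] } },
];

// Return the cues that should fire between two playback times.
// A player poll loop would call this on every tick and dispatch each
// cue to the matching device driver (Hue bridge, Bluetooth device, etc.).
function dueCues(cues, lastTime, nowTime) {
  return cues.filter((c) => c.at > lastTime && c.at <= nowTime);
}
```

The point is that the cue lives in the story’s metadata, so a writer or director can place it deliberately, the same way they would place a lighting change on a physical set.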
Imagine what would happen if the director/writer could start to specify these types of experiences, the same way a director chooses to show certain characters in certain light, angles, etc. The big difference is that it can be contextual, flexible and scalable for one or many more people. How about that for a shared experience?
Some will sniff at this blog post but hyper-reality is the best word I can think of to explain what happens when you mix media objects, physical things, storytelling and context together.
Building virtual worlds is nice; augmenting the real world is better. However, in my mind the future belongs to those who explore the crossover of things, devices and media. Can you imagine the incredible levels of immersion?
It’s certainly something I’m also thinking a lot about when it comes to perceptive media: experiences which are simply not possible any other way. The only way they become possible is through the combination of the real and the virtual/media world. I’m still inspired by some of the thinking behind alternate reality gaming; mixing reality with directed and scalable experiences.
Regardless if it turns out to be a consumer success or not, this is the first example of real innovation the tech industry has seen in some time. I am extremely excited to see what happens next for them and looking forward to the shake up this will put on the industry in general.
To be clear, I’m not down on Magic Leap; it is innovative, but it’s more of the same. I’m only really interested in disruption right now. Something the tech industry needs (imho).
This paper’s summary sums up my thoughts, I feel…
The senses we call upon when interacting with technology are very restricted. We mostly rely on vision and audition, increasingly harnessing touch, whilst taste and smell remain largely underexploited. In spite of our current knowledge about sensory systems and sensory devices, the biggest stumbling block for progress concerns the need for a deeper understanding of people’s multisensory experiences in HCI. It is essential to determine what tactile, gustatory, and olfactory experiences we can design for, and how we can meaningfully stimulate such experiences when interacting with technology. Importantly, we need to determine the contribution of the different senses along with their interactions in order to design more effective and engaging digital multisensory experiences. Finally, it is vital to understand what the limitations are that come into play when users need to monitor more than one sense at a time.
Being able to drive and combine all these things together (even in a basic way – multisensory) has the potential to be far more exciting and immersive than Magic Leap could even dream about. And it’s happening in dark and academic corners (I was maybe more excited by the vibration API draft than by learning how Magic Leap may work – sad, who knows?). I’m sure they might be thinking the same, but the fascination of the tech industry is with higher-density A/V. Multisensory is the moon shot. Being able to drive these on demand in an ethical, sustainable and contextual way is something I think a lot about with Perceptive Media. Enabling anyone to create their own experiences to share is the next thing.
We (BBC R&D) have been exploring the new reality of creating object based media through a range of prototypes. I have been exploring the implicit uses of data and sensors to change the objects; or as we started calling it a while ago Perceptive Media.
The big issue is that realistically creating and authoring these new types of stories requires a lot of technical knowledge and doesn’t easily sit in the standard content creation workflow – or does it? We want to bring people together in a workshop format to explore the potential of creating accessible tools for authors and producers, ultimately seeding a community of practice through open experimentation and learning from each other.
The core of the workshop will focus on the question…
“Is it desirable and feasible for the community of technical developers and media explorers to build an open set of tools for use by storytellers and producers?”
This Thursday (26th May 2016), I’ll be speaking at the Enterprise-IT Summit during Bucharest Technology Week, a celebration of the positive impact technology can have on our personal and professional lives. It’s going to be at the Athénée Palace Hilton in Bucharest.
I had never been to Romania, or Eastern Europe at all until I went to Poland last year, but I am really looking forward to meeting all the great people involved in the digital & tech scene out there. It will be fun to test their creative thinking in a little workshop following my talk on the same subject.
LJ Rich contacted me asking if I was up for an experiment. Of course I said yes, and without really knowing what I’d agreed to, a few weeks later I was roped into taking part in BBC News’ #24Livestream on Facebook. It was a bit of a surprise but an enjoyable one; shame about the technical difficulties at the start.
We’re back!#24Live NOW: We’re taking an interactive look inside BBC Research and Development. Ever wanted to know what…
In recent times, Ian Forrester has turned his attention to ‘Visual Perceptive Media.’ As we first reported late last year, this applies the same principles to video-based content.
For the first experiment in Visual Perceptive Media, the BBC worked with a screenwriter who created a short drama with multiple starts and endings. In addition to the variable plot, a number of different soundtracks were prepared, and the video was treated with a range of color gradings to give it different moods, from cold and blue to warm and bright.
Good to see The Next Web picking up on the effort we put into making all this very open. This comes from before my time, back at BBC Backstage, but having done things like Backstage certainly makes openness easier to justify for a public organisation.
One thing that struck me when talking to the people working on all of these projects was that they were using the Web browser as their canvas and working with free-to-use, open technologies like OpenGL, Web Audio, Twitter Bootstrap and Facebook React.
And what better end than…
Some of the most interesting ideas for how that might happen are coming out of BBC R&D.
Imagine a world where the narrative, background music, colour grading and general feel of a drama is shaped in real time to suit your personality. This is called Visual Perceptive Media and we are making it now in our lab in MediaCityUK.
More details of the project will emerge soon, but I wanted to make certain things clear.
You are already seeing this happen with the movement towards stems in music. However, while audio manipulation in the open environment of the web is relatively easy via the Web Audio API, there’s no real unified equivalent API for video. SMIL was meant to be that, but it got sidelined as HTML5 pushed the capabilities into browsers rather than media players.
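To show why the Web Audio API makes the audio side of this comparatively easy: per-listener stem mixing boils down to choosing a gain level per stem. A small sketch, with stem names, moods and gain values that are purely illustrative:

```javascript
// Sketch: choosing per-stem gain levels for a mood. In a browser each
// value would drive a Web Audio GainNode, one per decoded stem:
//   gainNodes[stem].gain.value = mixForMood(mood)[stem];
// Stem names and gain values here are made up for illustration.

const stemMixes = {
  tense: { drums: 0.9, strings: 1.0, pads: 0.2, vocals: 0.0 },
  warm:  { drums: 0.5, strings: 0.6, pads: 0.9, vocals: 0.8 },
};

// Unknown moods fall back to the neutral "warm" mix.
function mixForMood(mood) {
  return stemMixes[mood] ?? stemMixes.warm;
}
```

Nothing comparable exists for video: there is no standard node graph where you could swap a colour grade or a shot the way you swap a gain value, which is exactly the gap SMIL might have filled.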
There has been some criticism about the personality side of things.
Data ethics is something we have been thinking and talking about a lot. Earlier this year we created a microsite summing up some of our thoughts and raising the opinions of some industry experts. The question of the filter bubble was talked about by many, but we didn’t include it in the short documentaries; maybe now would be a good time to dig them out.
But before I dive into the deep end, it’s important to say we are using personality simply as a proxy for changing things. It could have been anything; as someone even suggested, we could have used shoe size. We settled on personality after meeting Preceptiv a long while ago and being impressed by their technology.
The next thing was to connect the data to changeable aspects of a film. Film makers are very good at this, and working with Julius Amedume (film director and writer) we explored the links between personality and effect. Colour grade and music were key ones, along with shot choices, which we felt were most achievable.
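The shape of that mapping is simple even if the craft behind it isn’t: a personality profile comes in, and a set of variant choices comes out. A hedged sketch, where the trait names, thresholds and variant labels are hypothetical and not the actual model used in Visual Perceptive Media:

```javascript
// Sketch: mapping a personality profile onto film variants.
// Traits are scores in [0, 1]; thresholds and labels are illustrative.
function variantsFor(profile) {
  const extrovert = profile.extraversion >= 0.5;
  return {
    // Colour grade shifts the mood: warm and bright vs cold and blue.
    colourGrade: extrovert ? "warm-bright" : "cold-blue",
    // Soundtrack choice keyed off a different trait.
    music: profile.openness >= 0.5 ? "experimental-score" : "orchestral-score",
    // Shot selection as a third changeable aspect.
    shots: extrovert ? "wide-energetic" : "close-intimate",
  };
}
```

The interesting work is in the film-maker’s side of the table: deciding, with a director like Julius, which cinematic effect plausibly corresponds to which trait.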
The panel discussion on Thursday was great. I gave the following presentation after Gabby asked me to give more context to the video here. I was my usual firestarter self and maybe caused people to think quite a bit. The trend towards events around film is welcome, and there are some great people doing amazing things, but I was questioning film itself. We should demand more from the medium of film…
Some of the feedback afterwards was quite amazing. I had everything from “This will not work!” (I spent 15 productive minutes talking with one person about this) to in-depth questioning of what we have done so far and how, which revealed nothing.
I had a good chuckle at this tweet and must remember to bring it up at my next appraisal.
I generally don’t want to say too much, because the research should speak for itself, but it’s certainly got people thinking and talking. Hopefully more of the BBC R&D projects around object-based media will start to complete the picture of what’s possible and show the incredible value the BBC brings to the UK.
Amazing perceptive media from the @BBC r&d @cubicgarden real cutting edge of creating content, launches next year #TWU15
The This Way Up conference is a film exhibition innovation conference which launched last year. It returns with a jam-packed two-day event that promises to inspire and enlighten, provoke and challenge, connect and share.
Lunchtime Lab: BBC Perceptive Media – Want to contribute to the evolution of storytelling? BBC Research and Development’s North Lab, based at MediaCityUK in Salford, showcase their latest experiment in a top secret, closed door workshop. A select group of THIS WAY UP attendees will try out a new smartphone app before being shown a premiere of a short film that looks to change the way we engage. Further details are strictly under wraps, but the BBC are looking for volunteers to take part in this limited study and to share and discuss their experiences with other participants. Workshop led by Ian Forrester, BBC R&D North lab. Results from the workshop will be revealed at Thursday’s The Film is Not Enough session.
It’s really research in the wild and we have no idea how the audience will react to this. The results will be intriguing, to say the least.
On the Thursday I’ll be on a panel talking about the changes which need to happen to regain the cinema audience.
The Film is not Enough – With the rise of event cinema, alternative content, enhanced screenings, sing-a-longs and tweet-a-longs, is there a danger that the original purpose of cinemas is being lost as audiences demand novelty and gimmickry? This panel will hear from those folk changing audience perceptions and expectations of what ‘coming to the cinema’ means. Panel includes: Tony Jones (Cambridge Film Festival), Jo Wingate (Sensoria), Rhidian Davis (BFI), Gaby Jenks (Abandon Normal Devices – chair), Lisa Brook (Live Cinema), and Ian Forrester (BBC Research & Development).
I’ll talk about details of the project experienced on Wednesday and explain why this is a good and scalable way to regain the TV and maybe the cinema audience. The panel should be good, with a number of very different viewpoints and Gaby Jenks from Abandon Normal Devices chairing the debate.
It’s mainly about advertising, including a bit about the just-in-time advertising space which is coming about because of the lightning speed of data and the ability to replace advertising/content on the fly.
Heard it all before but then there was this part…
…what if programmatic could be used for content other than advertising?
If we extend this thinking (and our imagination) a little further to consider the possible emergence of a new distribution method for cultural or editorial content based on programmatic logic and methods, we could ask whether these new “programmatic” models could be applied to the automated distribution of film and television content based on audiences and their data.
Based on this logic, “programmatic content distribution” could be imagined as a flow in which the data collected from users would trigger an automated rights transaction and content delivery process between right-holders and broadcasters. The final result would be the broadcasting of content corresponding to the preferences of the targeted user.
Yes indeed, this is the start of Perceptive Media, if you haven’t already guessed. It’s always good to hear others make the same leaps in thinking, of course…
Programmatic media? Don’t think that will fly as a term, I’m sorry to say. Although I have to say, this description would be more like responsive media than perceptive media.
It was at Make Do Share Warsaw that I first heard Lance Weiler talk about them in quite different contexts, and it did make sense. Phil has been grouping them together as contextual media, which works as a superset of both, although I worry about previous examples of contextual media clouding things.
The next part of the article I’m less interested in but something I have thought about a tiny bit…
Moreover, it would be possible to monetize this video content by attaching superimposed or pre-roll ads to it, as commonly seen on video aggregation platforms.
This valuable collection of user data and preferences for viewing a movie or television show could be done on a voluntary basis; for example, users would simply answer a few questions on their mood, the type of movie or series, and the desired language and duration so that the platform can preselect and “program” content that meets their criteria.
But we know that the Web, which is very advanced in big data collection, is already capable of gathering this data using algorithms. Users’ actions on a given site—the keywords they search for, the links they click on, their daily search history—can indicate to the platforms what type of content they are likely to be interested in.
The problem they will hit is the explicit nature of the input, I feel. Yes, it’s easier on the web, but the person is leaning forward and interacting most of the time anyway. When you get into the living room it gets a little more tricky, and an implicit approach is better in my mind. Yes, it can get creepy, but it doesn’t break the immersion, and in my mind that’s very key.
The essence of the programmatic distribution mechanism would therefore be as a recommendation super-engine, more sophisticated than that currently found on various platforms.
Companies are catching on quickly. With the realization that data is much more valuable when used with other information, protocol is increasingly being adopted to ensure that data sharing is seamless. With the explosion of both data collection and unification, we’re creating an environment that, while not fully exposed, is at least open enough for information to be meaningfully aggregated.
Taken together in four steps—collection, unification, analysis, and implementation—we have an environment where information is working for you behind the scenes to do things automatically, all in the service of letting you focus on what’s most important to you in work and life.
What Jason and others are talking about is contextual design, or as I prefer, perceptive design (along with perceptive media), as context only explains half of the solution. Frankly, anticipatory design sounds like when I first talked about intrusive media. It will never find mindshare with a name like that!
Most people don’t really care about spoilers until they are spoiled by somebody or something they read. It’s incredibly frustrating to be in that state of not knowing and wondering, and then have somebody break it for you. There are many great spoilers out there, like the endings of Lost and Breaking Bad. I remember joking, with quite a harsh tone, telling friends and family while I was in hospital not to tell me the end of Lost.
The problem is that with all the media channels we have, it’s more difficult to put yourself in a bubble and discover the media’s conclusion in your own way. This is something others have thought about a lot, and this Chrome extension is an interesting take on the problem; unfortunately it only works within the Trakt.tv site.
Trakt.tv but without the spoilers. Titles, screenshots and comments are all able to be obscured by this extension. This extension aims to prevent as many spoilers as possible on Trakt.tv with very customisable options.
OK, nice, but what’s this got to do with Perceptive Media?
Perceptive Media is most effective when there is a semantic understanding of the narrative, plot arcs and implicit desires of the audience.
With spoilers, if you knew where the audience was up to and how long ago they watched it (both of which Trakt.tv can tell you), you could infer what to hold back from them, so the next big surprise or twist isn’t spoiled. You can also let the stuff which isn’t important, or which they have already seen, pass the filter, instead of trying to hold it all back and frustrating the audience.
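That inference can be sketched in a few lines. The field names, the 180-day cut-off and the “major twist” flag below are my own hypothetical illustrations, not the Trakt.tv API or the extension’s actual logic:

```javascript
// Sketch: decide whether to obscure an item (title, screenshot, comment),
// given how far the viewer has watched and how recently.
function shouldObscure(item, viewer) {
  // Anything at or before the last watched episode is already seen: let it through.
  if (item.episode <= viewer.lastWatchedEpisode) return false;
  // A recent watcher is likely mid-binge, so hold back everything ahead of them.
  // Someone who stopped long ago (illustratively, > 180 days) has probably
  // drifted away from the show, so only hide the major twists.
  return viewer.daysSinceLastWatch <= 180 || item.isMajorTwist;
}
```

The unimportant, long-stale material passes the filter, which is exactly the balance between protection and frustration described above.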
Basically, spoiler prevention paves the way to an understanding of media in the way needed for perceptive media. Today it’s titles, screenshots and comments. Tomorrow it’s popups, adverts, etc. In future, how about parts of the news, articles, posts, parody, references to plot twists, etc.?