It's certainly something I'm also thinking a lot about when it comes to Perceptive Media: experiences which are simply not possible any other way. The only way they become possible is by combining the real and the virtual/media worlds. I'm still inspired by some of the thinking behind alternate reality gaming: mixing reality with directed and scalable experiences.
Regardless of whether it turns out to be a consumer success or not, this is the first example of real innovation the tech industry has seen in some time. I am extremely excited to see what happens next for them and am looking forward to the shake-up this will bring to the industry in general.
To be clear, I'm not down on Magic Leap; it is innovative, but it's more of the same. I'm only really interested in disruption right now, something the tech industry needs (imho).
This paper's summary sums up my thoughts, I feel…
The senses we call upon when interacting with technology are very restricted. We mostly rely on vision and audition, increasingly harnessing touch, whilst taste and smell remain largely underexploited. In spite of our current knowledge about sensory systems and sensory devices, the biggest stumbling block for progress concerns the need for a deeper understanding of people’s multisensory experiences in HCI. It is essential to determine what tactile, gustatory, and olfactory experiences we can design for, and how we can meaningfully stimulate such experiences when interacting with technology. Importantly, we need to determine the contribution of the different senses along with their interactions in order to design more effective and engaging digital multisensory experiences. Finally, it is vital to understand what the limitations are that come into play when users need to monitor more than one sense at a time.
Being able to drive and combine all these things together (even in a basic way – multisensory) has the potential to be far more exciting and immersive than anything Magic Leap could dream about. And it's happening in dark and academic corners (I was maybe more excited by the Vibration API draft than by learning how Magic Leap may work – sad, who knows?). I'm sure they might be thinking the same, but the tech industry's fascination is with higher-density A/V. Multisensory is a moonshot. Being able to drive these senses on demand in an ethical, sustainable and contextual way is something I think a lot about with Perceptive Media. Enabling anyone to create and share their own experiences is the next thing.
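For what it's worth, that Vibration API I mention is already usable from the open web. Here's a minimal sketch of driving a haptic cue alongside the usual audio/visual channels – the API call (navigator.vibrate) is real, but the pattern values are mine and purely illustrative:

```typescript
// Minimal sketch: the Vibration API is real, the patterns are illustrative.
function playHapticCue(intense: boolean): void {
  if (!("vibrate" in navigator)) return; // unsupported (most desktop browsers)
  // Alternating vibrate/pause durations in milliseconds.
  const pattern = intense ? [200, 100, 200, 100, 400] : [100, 50, 100];
  navigator.vibrate(pattern);
}

playHapticCue(true); // e.g. fired at a tense moment in the drama
```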
We (BBC R&D) have been exploring the new reality of creating object-based media through a range of prototypes. I have been exploring the implicit use of data and sensors to change the objects; or, as we started calling it a while ago, Perceptive Media.
The big issue is that realistically creating and authoring these new types of stories requires a lot of technical knowledge and doesn't easily sit in the standard content-creation workflow – or does it? We want to bring people together in a workshop format to explore the potential of creating accessible tools for authors and producers, ultimately seeding a community of practice through open experimentation and learning from each other.
The core of the workshop will focus on the question…
“Is it desirable and feasible for the community of technical developers and media explorers to build an open set of tools for use by storytellers and producers?”
This Thursday (26th May 2016), I'll be speaking at the Enterprise-IT Summit during Bucharest Technology Week, a celebration of the positive impact technology can have on our personal and professional lives. It's going to be at the Athénée Palace Hilton in Bucharest.
I had never been to Romania, or eastern Europe at all, until I went to Poland last year; but I am really looking forward to meeting all the great people involved in the digital and tech scene out there. It will be fun to test their creative thinking in a little workshop following my talk on the same subject.
LJ Rich contacted me asking if I was up for an experiment. Of course I said yes, and without really knowing what I'd agreed to, a few weeks later I was roped into taking part in BBC News' #24Livestream on Facebook. It was a bit of a surprise, but an enjoyable one; shame about the technical difficulties at the start.
We're back! #24Live NOW: We're taking an interactive look inside BBC Research and Development. Ever wanted to know what…
In recent times, Ian Forrester has turned his attention to ‘Visual Perceptive Media.’ As we first reported late last year, this applies the same principles to video-based content.
For the first experiment in Visual Perceptive Media, the BBC worked with a screenwriter who created a short drama with multiple starts and endings. In addition to the variable plot, a number of different soundtracks were prepared, and the video was treated with a range of color gradings to give it different moods, from cold and blue to warm and bright.
Good to see The Next Web picking up on the effort we put into making all this very open. This openness comes from before my time at BBC Backstage, but it certainly makes things easier to justify, as a public organisation, having done things like Backstage.
One thing that struck me when talking to the people working on all of these projects was that they were using the Web browser as their canvas and working with free-to-use, open technologies like OpenGL, Web Audio, Twitter Bootstrap and Facebook React.
And what better end than…
Some of the most interesting ideas for how that might happen are coming out of BBC R&D.
Imagine a world where the narrative, background music, colour grading and general feel of a drama is shaped in real time to suit your personality. This is called Visual Perceptive Media and we are making it now in our lab in MediaCityUK.
More details of the project will emerge soon, but I wanted to make certain things clear.
You are already seeing this happen with the STEMS movement in music. However, while audio manipulation in the open environment of the web is easier via the WebAudioAPI, there's no real unified API like the WebAudioAPI for video. SMIL was meant to be that, but it got sidelined as HTML5 pushed the capabilities into the browsers rather than media players.
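To make that concrete, here's a rough sketch of STEMS-style mixing with the WebAudioAPI – each stem gets its own gain node, so levels can be raised, lowered or swapped on the fly. The stem URLs and mix levels are invented for illustration:

```typescript
// Each stem flows through its own gain node so it can be remixed live.
async function playStems(
  ctx: AudioContext,
  stems: { url: string; level: number }[]
): Promise<void> {
  // Decode every stem first so they can all start sample-aligned.
  const buffers = await Promise.all(
    stems.map(async (s) => {
      const res = await fetch(s.url);
      return ctx.decodeAudioData(await res.arrayBuffer());
    })
  );
  const startAt = ctx.currentTime + 0.1; // small offset so all start together
  buffers.forEach((buffer, i) => {
    const source = ctx.createBufferSource();
    source.buffer = buffer;
    const gain = ctx.createGain();
    gain.gain.value = stems[i].level; // per-stem level, adjustable in real time
    source.connect(gain).connect(ctx.destination);
    source.start(startAt);
  });
}

playStems(new AudioContext(), [
  { url: "drums.ogg", level: 0.9 }, // hypothetical stem files
  { url: "strings.ogg", level: 0.4 },
]);
```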
There has been some criticism about the personality side of things.
Data ethics is something we have been thinking and talking about a lot. Earlier this year we created a microsite summing up some of our thoughts and raising the opinions of some industry experts. The question about the filter bubble was raised by many, but we didn't include it in the short documentaries; maybe now would be a good time to dig them out.
But before I dive into the deep end, it's important to say we are using personality simply as a proxy for changing things. It could have been anything; someone even suggested we could have used shoe size. We chose personality after meeting, and being impressed by, Percepiv and their technology a long while ago.
The next step was to connect the data to the changeable aspects of a film. Film-makers are very good at this, and working with Julius Amedume (film director and writer) we explored the links between personality and effect. Colour grade and music were key ones, along with shot choices; these, we felt, were most achievable.
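As an illustration only – the trait names, thresholds and asset choices below are invented by me, not the mapping Julius and the team actually settled on – the link between a personality profile and film variations could be sketched like this:

```typescript
// Hypothetical mapping from a personality profile to film variations.
interface Profile {
  openness: number; // 0..1, e.g. inferred from music taste
  extraversion: number; // 0..1
}

interface Variation {
  colourGrade: "warm" | "cold";
  musicGenre: string;
}

function chooseVariation(p: Profile): Variation {
  return {
    // Invented rule: warmer, brighter grade for more extraverted viewers.
    colourGrade: p.extraversion > 0.5 ? "warm" : "cold",
    musicGenre: p.openness > 0.5 ? "experimental-electronica" : "orchestral",
  };
}
```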
The panel discussion on Thursday was great. I gave the following presentation after Gaby asked me to give more context to the video here. I was my usual firestarter self and maybe caused people to think quite a bit. The trend towards events around film is welcome, and there are some great people doing amazing things, but I was questioning film itself. We should demand more from the medium of film…
Some of the feedback afterwards was quite amazing. I had everything from "This will not work!" (I spent 15 productive minutes talking with one person about that) to in-depth questioning of what we have done so far and how, which revealed nothing.
I had a good chuckle at this tweet and must remember to bring it up at my next appraisal.
I generally don't want to say too much because the research should speak for itself, but it's certainly got people thinking and talking, and hopefully more of the BBC R&D projects around object-based media will start to complete the picture of what's possible and show the incredible value the BBC brings to the UK.
Amazing perceptive media from the @BBC r&d @cubicgarden real cutting edge of creating content, launches next year #TWU15
The This Way Up conference is a film exhibition innovation conference which launched last year. It returns with a jam-packed two-day event that promises to inspire and enlighten, provoke and challenge, connect and share.
Lunchtime Lab: BBC Perceptive Media – Want to contribute to the evolution of storytelling? BBC Research and Development's North Lab, based at MediaCityUK in Salford, showcase their latest experiment in a top secret, closed door workshop. A select group of THIS WAY UP attendees will try out a new smartphone app before being shown a premiere of a short film that looks to change the way we engage. Further details are strictly under wraps, but the BBC are looking for volunteers to take part in this limited study and to share and discuss their experiences with other participants. Workshop led by Ian Forrester, BBC R&D North Lab. Results from the workshop will be revealed at Thursday's The Film is Not Enough session.
It's really research in the wild, and we have no idea how the audience will react to it. The results will be intriguing, to say the least.
On the Thursday I’ll be on a panel talking about the changes which need to happen to regain the cinema audience.
The Film is not Enough – With the rise of event cinema, alternative content, enhanced screenings, sing-a-longs and tweet-a-longs, is there a danger that the original purpose of cinemas is being lost as audiences demand novelty and gimmickry? This panel will hear from those folk changing audience perceptions and expectations of what ‘coming to the cinema’ means. Panel includes: Tony Jones (Cambridge Film Festival), Jo Wingate (Sensoria), Rhidian Davis (BFI), Gaby Jenks (Abandon Normal Devices – chair), Lisa Brook (Live Cinema), and Ian Forrester (BBC Research & Development).
I'll talk about the details of the project experienced on Wednesday and explain why this is a good and scalable way to regain the TV, and maybe the cinema, audience. The panel should be good, with a range of very different viewpoints and Gaby Jenks from Abandon Normal Devices chairing the debate.
It's mainly about advertising, including a bit about the just-in-time advertising space which is coming about because of the lightning speed of data and the ability to replace advertising/content on the fly.
Heard it all before but then there was this part…
…what if programmatic could be used for content other than advertising?
If we extend this thinking (and our imagination) a little further to consider the possible emergence of a new distribution method for cultural or editorial content based on programmatic logic and methods, we could ask whether these new “programmatic” models could be applied to the automated distribution of film and television content based on audiences and their data.
Based on this logic, “programmatic content distribution” could be imagined as a flow in which the data collected from users would trigger an automated rights transaction and content delivery process between right-holders and broadcasters. The final result would be the broadcasting of content corresponding to the preferences of the targeted user.
Yes indeed, this is the start of Perceptive Media, if you haven't already guessed. It's always good to hear others make the same leaps in thinking, of course…
Programmatic media? I don't think that will fly as a term, I'm sorry to say. Although I have to say, this description sounds more like responsive media than perceptive media.
It was at Learn Do Share Warsaw that I first heard Lance Weiler talk about them in quite different contexts, and it did make sense. Phil has been grouping them together as contextual media, which works as a superset of both, although I worry about previous examples of contextual media clouding things.
The next part of the article I'm less interested in, but it's something I have thought about a tiny bit…
Moreover, it would be possible to monetize this video content by attaching superimposed or pre-roll ads to it, as commonly seen on video aggregation platforms.
This valuable collection of user data and preferences for viewing a movie or television show could be done on a voluntary basis; for example, users would simply answer a few questions on their mood, the type of movie or series, and the desired language and duration so that the platform can preselect and “program” content that meets their criteria.
But we know that the Web, which is very advanced in big data collection, is already capable of gathering this data using algorithms. Users’ actions on a given site—the keywords they search for, the links they click on, their daily search history—can indicate to the platforms what type of content they are likely to be interested in.
The problem they will have is the explicit nature of the input, I feel. Yes, it's easier on the web, but there the person is leaning forward and interacting most of the time anyway. When you get into the living room it gets a little more tricky, and an implicit approach is better in my mind. Yes, it can get creepy, but it doesn't break the immersion, and in my mind that's very key.
The essence of the programmatic distribution mechanism would therefore be as a recommendation super-engine, more sophisticated than that currently found on various platforms.
Companies are catching on quickly. With the realization that data is much more valuable when used with other information, protocol is increasingly being adopted to ensure that data sharing is seamless. With the explosion of both data collection and unification, we’re creating an environment that, while not fully exposed, is at least open enough for information to be meaningfully aggregated.
Taken together in four steps—collection, unification, analysis, and implementation—we have an environment where information is working for you behind the scenes to do things automatically, all in the service of letting you focus on what’s most important to you in work and life.
What Jason and others are talking about is contextual design or, as I prefer, perceptive design (along with Perceptive Media), as context only explains half of the solution. And frankly, "anticipatory design" sounds like when I first talked about intrusive media: it will never find mindshare with a name like that!
Most people don't really care about spoilers until they are spoiled by somebody or something they read. It's incredibly frustrating to not know something, to be in that state of wonder, and then have somebody break it for you. There are many great spoilers out there: the ending of Lost, Breaking Bad, etc. I remember joking, but with quite a harsh tone, to friends and family while in hospital: don't tell me the end of Lost.
The problem is that with all the media channels we have, it's more difficult to put yourself in a bubble and discover the media's conclusion in your own way. This is something others have thought about a lot, and this Chrome extension is an interesting take on the problem; unfortunately it only works within the Trakt.tv site.
Trakt.tv but without the spoilers. Titles, screenshots and comments are all able to be obscured by this extension. This extension aims to prevent as many spoilers as possible on Trakt.tv with very customisable options.
OK, nice, but what's this got to do with Perceptive Media?
Perceptive Media is most effective when there is a semantic understanding of the narrative, plot arcs and implicit desires of the audience.
With spoilers, if you knew where the audience was up to and how long ago they watched it (both of which Trakt.tv can do), you could infer what to hold back from them, so they are not spoiled of the next big surprise or twist. You could also let the stuff which isn't important, or has been seen already, pass the filter, instead of trying to hold it all back and frustrating the audience.
Basically, spoiler prevention paves the way to an understanding of media in the way needed for Perceptive Media. Today it's titles, screenshots and comments. Tomorrow it's popups, adverts, etc. In future, how about parts of the news, articles, posts, parodies, references to plot twists, etc…?
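A rough sketch of that inference – the data shapes here are simplified stand-ins for what Trakt.tv actually exposes, but the rule is the one described above: only hold back material that lies ahead of the viewer's progress point:

```typescript
// Simplified stand-in for Trakt.tv watch data.
interface WatchState {
  lastEpisodeSeen: number;
  watchedAt: Date; // when they last watched (Trakt.tv records both)
}

function shouldHide(aboutEpisode: number, state: WatchState): boolean {
  // Material about episodes they have already seen can pass the filter.
  if (aboutEpisode <= state.lastEpisodeSeen) return false;
  // Everything ahead of their progress point is held back as a spoiler.
  return true;
}

// e.g. a comment about episode 8, for someone who stopped at episode 5
shouldHide(8, { lastEpisodeSeen: 5, watchedAt: new Date("2016-05-01") }); // true
```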
It was Si Lumb who tweeted me about Pixar’s Inside Out contextual visuals.
Now I know this isn't anything new – films have had regional differences for a long while – but it's good to see it discussed openly, and it was interesting to read about how (we think) they do it.
It’s interesting to note that the bottom five entries of the list, starting with “Thai Food,” remain consistent throughout (maybe Disney/Marvel Studios’ digital wizards couldn’t replace the stuff that Chris Evans’ hand passed over), but the top items change a lot.
Post-producing this stuff is a mistake in my mind, but then again, I'm working on the future of this kind of thing with Perceptive Media. I also imagine the writer and director had no time to think about variations for different countries – or weren't paid enough?
Rather than write up my thoughts on how to do this with digital cinema (isn't this part of the promise of digital cinema? Besides, I'm writing a paper with Anna Frew about it), I thought it was about time I wrote something about the project I'm currently working on.
Visual Perceptive Media
Visual Perceptive Media is a short film which changes based on the person watching it. It uses data from a phone application, which builds a profile of the user via their music collection and some basic questions. That data then informs which variations to apply to the media when it is watched.
The variations are applied in real time and include different music, different colour grading, different video assets and effects, and much more. We're using the WebAudioAPI, WebGL and other open web technologies.
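As a much-simplified stand-in for the real thing – the project itself grades via WebGL shaders, and the filter values below are purely illustrative – the real-time colour grade swap can be sketched with a CSS filter on the video element:

```typescript
// Illustrative only: swap the mood of the picture live, per viewer.
function applyGrade(video: HTMLVideoElement, grade: "warm" | "cold"): void {
  video.style.filter =
    grade === "warm"
      ? "sepia(0.25) saturate(1.2) brightness(1.05)" // warm, bright mood
      : "hue-rotate(-10deg) saturate(0.8) brightness(0.95)"; // cold, blue mood
}

applyGrade(document.querySelector("video")!, "warm");
```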
What makes this different or unique…?
We had buy-in from the scriptwriter and director (Julius Amedume was both, and amazing) right from the very start, which makes a massive difference. The scripts were written with all this in mind.
It was shot and edited with its intended purpose – real-time variations – in mind.
Keeping with the core principle of Perceptive Media, the app – which the Manchester-based startup Percepiv (formerly moment.us; I wonder if working with us had a hand in the name change?) created using their own very related technology – mainly uses implicit data to build the profile. You can check out music+personality on your own Android or iPhone now.
It's going to be very cool, and I believe the technology has got to the point where we can do this so seamlessly that people won't even know or realise (this is something we will be testing in our lab). As Brian McHarg says, there are going to be some interesting water-cooler conversations, but the slight variations are going to be even more subtle and interesting.
@BrianMcHarg @cubicgarden yeah, less "did you see", more "what was it like for you?" I'd love to see more like "let's play" coverage for tv.
I have been using the word "variations" throughout this post because I really want us to get away from the notion of edits or versions. I recently had the joy of going to Learn Do Share Warsaw, and was thinking about how to explain the thinking behind the Visual Perceptive Media project. How do you explain something which has 2 film genres, 6 established endings, 20+ music genres, and an endless number of lengths and effects?
This certainly isn't a branching narrative, and the idea of a branching narrative is certainly not apt here. If it were, it would have upwards of 240 versions (2 genres × 6 endings × 20 music genres = 240), not including any of the more subtle effects to increase your viewing enjoyment. I settled on thinking of them as variations, and the language works when you consider Photoshop's variations tool. This was very handy when talking to others not so familiar with Perceptive Media. But it's only a step, and it makes you consider that there might be editions…
I was talking to my manager Phil about it before heading to Warsaw, and came up with something closer to the tesseract/hypercube in Interstellar (if you've not seen it: spoiler alert!).
Unlimited isn't quite right, but the notion of time and variations which intersect is much closer to the idea. I said to Si Lumb that maybe the way to show this would be in VR, as I certainly can't visualise it easily.
When it's up and running I'd love people to have a go, so we can get some serious feedback.
Google wants to bring TV ads into the 21st century. The company has quietly announced a new local advertising service for Google Fiber that will make TV ads behave a lot more like internet ads. Using data from its set-top-boxes, Google (and advertisers) will know precisely how many times a particular local ad has been watched in homes with Google Fiber service. That might not sound like a big deal, but the industry-standard Nielsen ratings simply don’t offer that kind of information. Like on the web, Google will only charge for the number of views an ad receives.
It's not yet clear precisely how the system will work, but, similar to Google's cornerstone AdWords business, algorithms might determine the best time to show you a certain ad. For instance, if you're watching the news before flipping over to the football game, the system might determine that you should be served a different ad during halftime than your buddy who switched over to the game from Pawn Stars. Google says it will even be able to swap out ads on DVR'd programs, so you won't be served an old or irrelevant advertisement if you watch a program a week after it originally aired. Fiber customers will have an option to disable ads based on viewing history.
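If you wanted to sketch the kind of rule being described – and to be clear, this is purely hypothetical on my part, not how Google's system actually works; the programme names and ad slots are invented – it might be as simple as:

```typescript
// Hypothetical: pick a halftime ad based on what the viewer watched beforehand.
const adsByLeadIn: Record<string, string> = {
  "news": "financial-services-spot",
  "pawn-stars": "collectibles-marketplace-spot",
};

function pickHalftimeAd(previousProgramme: string): string {
  // Fall back to a generic slot when the lead-in tells us nothing useful.
  return adsByLeadIn[previousProgramme] ?? "generic-spot";
}

pickHalftimeAd("news"); // "financial-services-spot"
```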