I saw AI artist conjures up convincing fake worlds from memories via Si Lumb and instantly thought of my experience of watching Vanilla Sky for the first time.
I already wrote about TOA Berlin and the different satellite events I also took part in. I remember how tired I was, getting to Berlin late and then being on stage early doors after multiple changes on public transport; I should really have just taken a cab.
No idea what was up with my voice, but it certainly sounds a little odd.
Anyhow, lots of interesting ideas were packed into the slide deck, and they certainly sparked a number of long conversations afterwards.
This is adapted from the BBC R&D blog post, but I felt it was important enough to repost on my own blog.
I’ve spoken to thousands of producers, creators and developers across Europe about object-based work and experiences. Through those discussions it has become clear that people have many questions: there is confusion about what object-based media (OBM) is, and others would like to know how they can get involved themselves.
So because of this… BBC R&D started a community of practice because we really do believe “Someday all content will be made this way.”
There are three big aims for the community of practice…
- Awareness: Seek out people and organisations already interested in or working on adaptive narratives through talks, workshops and conferences
- Advocacy: Demonstrating best practice in our work and methods as we explore object-based media, connecting people through networks like the Storytellers United Slack channel, and helping to share perspectives and knowledge.
- Access: Early access to emerging software tools, to trial and shape the new technology together.
These aims are hugely important for the success and progress of object-based media.
As a start, we’re running a few events around the UK, because conferences are great but sometimes you just want to ask someone questions and get a better sense of the what and why. Our current plan is linked on the BBC R&D post, which I update every time a new event goes live.
I’m back at the Quantified Self conference; it’s been a few years since my last one, due to scheduling and other conflicts. It’s actually been a while since I talked about the Quantified Self, mainly because it’s so mainstream now that few people even know what it is, even though they use things like Strava, Fitbits, etc.
The line-up for the Quantified Self conference is looking very good, with plenty of good sessions for almost every palate, and I’ll be heading up this session while at the conference.
Using Your Data To Influence Your Environment
With home automation tools, it is now possible for your personal data to influence your environment. Soon, your personal data could be used to influence how a movie is shown to you! Let’s talk about the implications and ethics of data being used this way.
It’s basically centred around the notion that our presence affects the world around us, directly linking Perceptive Media and the Quantified Self together. Of course I’m hoping to tease out some of the complexity of data ethics with people who fully understand this and have skin in the game, as it were.
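To make the session idea concrete, here is a minimal sketch of personal data influencing the environment. Everything in it is my own invention for illustration: the thresholds, the mapping and the function name are hypothetical, not any real home-automation API.

```python
# Hypothetical sketch: a quantified-self reading nudging a home-automation
# setting. Thresholds and mapping are invented purely to illustrate the idea.

def light_level_for_heart_rate(resting_hr: int, current_hr: int) -> int:
    """Map how elevated the heart rate is to a lamp brightness (0-100).

    The calmer the viewer, the dimmer and more 'cinematic' the room;
    an elevated reading brings the lights up gently.
    """
    elevation = max(0, current_hr - resting_hr)
    # Cap the effect: 40 bpm or more above resting pins brightness at 100.
    return min(100, int(elevation / 40 * 100))

if __name__ == "__main__":
    print(light_level_for_heart_rate(60, 60))   # calm viewer -> 0
    print(light_level_for_heart_rate(60, 80))   # mildly elevated -> 50
    print(light_level_for_heart_rate(60, 120))  # very elevated -> 100
```

Even a toy mapping like this raises the session’s real questions: who sees the heart-rate data, and who decided what the room does with it?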
I’m also looking to report back on this conference and restart the Manchester Quantified Self group, which went quiet a while ago.
When I first heard about 60dB, I thought: great, someone’s finally made an object-based podcasting client.
60dB brings you today’s best short audio stories – news, sports, entertainment, business and technology, all personalized for you.
Unfortunately I was wrong.
It’s a bit like Stitcher, which is well loved by some people.
It does seem to pick and play news stories, but the sources are specially crafted (ready for syndication like this) rather than the client processing the audio and picking out the parts most relevant to your listening preferences.
It’s understandable, because to do this you would need well-thought-out metadata created by the original author/production. Without it you can’t have objects, and without objects you are reliant on serious processing of the audio to build the metadata the player can use (that, or some serious computational power).
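To show why that author-supplied metadata matters, here is a toy sketch of what an object-based podcast client could do if each audio segment were described as an object. The field names and the selection logic are entirely hypothetical, not 60dB’s or anyone else’s format.

```python
# Toy illustration: if each audio segment is described as an object (all
# field names invented), a client can assemble a personalised episode
# without ever touching or processing the audio itself.

segments = [
    {"id": "intro",    "topic": "news",       "duration": 30},
    {"id": "football", "topic": "sport",      "duration": 120},
    {"id": "markets",  "topic": "business",   "duration": 90},
    {"id": "gadgets",  "topic": "technology", "duration": 150},
]

def build_episode(segments, liked_topics, max_duration):
    """Pick segments matching the listener's interests, within a time budget."""
    episode, total = [], 0
    for seg in segments:
        if seg["topic"] in liked_topics and total + seg["duration"] <= max_duration:
            episode.append(seg["id"])
            total += seg["duration"]
    return episode

print(build_episode(segments, {"news", "technology"}, 200))  # ['intro', 'gadgets']
```

The point is that all the intelligence lives in a few lines of filtering, because the hard work was done upstream by whoever authored the metadata.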
I had heard, and thought it was a logical move, that Google Play’s podcasting support would include some kind of basic automated metadata/transcript, but it never happened. Another missed opportunity to show off the power of Google and make themselves an essential part of the podcasting landscape, like Apple did with iTunes.
Seems like a great opportunity for some enterprising startup, especially since podcasting might save the world. Dare I say it again: perceptive podcasts could be incredible, for all the reasons podcasting originally captured people’s attention.
I’ll be personally interested to see how far down the perceptive media (or, as I used to call it, intrusive TV) route they go. I’ll also be interested to see if they use the chance to educate the public about data ethics and the value of data, like the Science Museum have done.
— Barbican Centre (@BarbicanCentre) April 7, 2017
I’ve been studying this area for a long while; when I talk about perceptive media, people always ask how it would work for news. I mean, manipulation of feelings and of what you see can be used for good and, obviously, for very bad! Dare I say those words… fake news?
It’s always given me a slightly uneasy feeling, to be fair, but there is a lot I see which gives me that feeling. In my heart of hearts I kind of wish it weren’t possible, but wishing it so won’t make it so.
It was Si Lumb who first connected me with the facts behind the theory of what a system like perceptive media could ultimately be capable of. It’s funny, because many people laughed when I first talked about working with Perceptiv, whose mobile app underpinned the data source for Visual Perceptive Media; I mean, how could it build a profile of who I was in minutes from my music collection?
I was sceptical of course, but the question always lingered: with enough data in a short time frame, could you know enough about someone to gauge their general personality? And, of course, change the media they are consuming to reflect, reject or even nudge?
According to what I’ve read and seen in the following pieces about Cambridge Analytica, the answer is yes! I’ve included some key quotes I found interesting.
I have recently spent quite a bit of time in Liverpool, mainly for work but also partly for pleasure. There were a few lectures/talks at FACT and Liverpool John Moores University.
Of course most of it was edited out, but there’s a big chunk of the interview, mainly focused on the experience of perceptive media, which sits right on top of object-based media. They described it as on the verge of a revolution, no less.
You can listen to the whole thing online at the Liverpool life audioboom channel from Feb 24th.
Everybody is busy in the run-up to the holidays, but I didn’t expect to be out of the country so much in November. I had planned for September to be busy, then for October to be about Mozfest (I feel guilty that I still haven’t written about how Mozfest 2016 went), and then I’d focus on writing the TVX 2017 paper with Anna.
Here’s the lineup of places I’m due to be soon.
- Changing the Picture conference – Potsdam-Babelsberg, (near Berlin), Germany
- ARD/ZDF Media Academy – Hannover, Germany
- World Congress of Science and Factual Producers – Stockholm, Sweden
I’ll be talking about object-based media, the big advantages of pursuing an internet-first/driven strategy, and experiences in storytelling. I would be much more on the ball if I hadn’t finally caught the cold I’d seemed to avoid all the way since May.
Previously I mentioned the joy of talking at Thinking Digital Manchester.
I have always wanted to take to the stage at Thinking Digital, and 3 years ago I joined Adrian at Thinking Digital Newcastle when the Perceptive Radio got its first public showing, during a talk about the BBC’s innovation progress since moving up to the north of England. This time I got the chance to build on that and talk about the work we are doing in object-based media, data ethics and the internet of things. I’ve been rattling this around my head and have started calling it hyper-reality storytelling.
The super-efficient Thinking Digital Conference has already posted the video of the talk. Even this took me by surprise, as I was deep in the Mozilla Festival when it went live; they did thankfully fix the video error we had on the day. The slides for the talk are up on SlideShare, of course.
Visual Perceptive Media is made to deliberately nudge you one way or another using cinematic techniques, rather than sweeping changes like those seen in branching narratives. Each change is subtle, but they are techniques used in film making every day, which raises the question: how do you even start to demo something which has 50,000+ variations?
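The variation count is less surprising when you see how independent choices multiply. The categories and counts below are purely illustrative, not the actual parameters of Visual Perceptive Media; they just show how quickly a handful of subtle cinematic decisions compound.

```python
# Back-of-the-envelope: a few independent, subtle choices multiply out fast.
# The categories and counts are illustrative, not the real parameters.
from math import prod

choices = {
    "music mood": 5,
    "colour grade": 4,
    "shot selection per scene": 10,
    "pacing": 5,
    "sound mix": 5,
    "title treatment": 10,
}

variations = prod(choices.values())
print(variations)  # 50000
```

Six modest decisions, fifty thousand possible films, and no two viewers need ever see quite the same one.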
This is also the challenge we are exploring for a BBC Taster prototype. Our CAKE prototype deployed a behind-the-curtains view as well, which helped make clear what was going on – it seems Visual Perceptive drama needs something similar.
I honestly do think about this problem with Visual Perceptive Media, and Perceptive Media generally: something which is meant to be so subtle you hardly notice it, yet you need to demonstrate it and show the benefits.
It’s tricky, but lifting the curtain seems to be the best way. I am, of course, all ears for better ways…
I talked previously about mixed reality, but the consensus seems to be VR + AR = mixed reality… It looks like that ship has sailed and, no matter what I say, nothing will bring it back. So I have started talking about hyper-reality when discussing perceptive media across objects and things.
You could say it’s like a theatre cast in your living room, and it starts to answer some of the questions about perceptive media killing the shared experience. There are already people hacking things to media; BBC R&D even experimented in this area a long time ago with the famous Dalek example, and of course the Perceptive Radio was just the start. The second version of the Perceptive Radio actually included more connectivity options to reach out and interact with devices in the local space, such as Philips Hue lights, Bluetooth devices, etc. It seems so simple, but the big difference is that the devices react to the media rather than being thought about at the script/narrative level. With object-based media (media + metadata) we can get to a level much richer and more interesting than ever imagined previously.
Imagine what would happen if the director/writer could start to specify these types of experiences, the same way a director chooses to show certain characters in certain light, angles, etc. However, the big difference is that it can be contextual, flexible and scalable for one person or many. How about that for a shared experience?
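As a thought experiment, here is what device cues at the script level might look like. The cue format, device names and scene names are all hypothetical; the point is only that the things in the room become part of the authored narrative rather than an afterthought.

```python
# Sketch of 'devices at the script level': narrative objects carry optional
# cues for things in the room. The cue format and names are hypothetical.

script = [
    {"scene": "storm builds", "audio": "storm.wav",
     "cues": [{"device": "hue_lamp", "action": "dim", "value": 20}]},
    {"scene": "lightning strike", "audio": "strike.wav",
     "cues": [{"device": "hue_lamp", "action": "flash", "value": 100}]},
]

def cues_for_device(script, device):
    """Collect the cues aimed at one device, in narrative order."""
    return [cue["action"]
            for item in script
            for cue in item["cues"]
            if cue["device"] == device]

print(cues_for_device(script, "hue_lamp"))  # ['dim', 'flash']
```

Because the cues travel with the media objects, the same script could degrade gracefully in a room with no lamp at all, or scale up in a room full of connected things.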
Of course this brings up many ethical questions, data dilemmas, and questions about graceful degradation and progressive enhancement for media experiences. But I’m going to side step that in my blog for now. There are too many questions and research is well underway.
Hyper-reality (or shall I call it hyper-narrative? I certainly can’t call it hypermedia) extends the narrative into the real world. This is fascinating because:
- It plays in the physical and digital spaces, building a bridge between the two.
- It can start to explore other senses which are sadly missing from broadcasting and our screen-obsessed media.
- It could blend games, play, interaction, narrative and storytelling without much effort.
- It could be the ultimate expression of world building.
I’d contend this is closer to alternate reality gaming and the very popular immersive theatre works such as Sleep No More. A problem with both is the scalability and consistency of experience, but what’s great about them is the unique and shared experiences.
The Verge recently did a What’s Tech podcast episode which talks about immersive theatre, alternate reality games and the logical future of this stuff. Like the Psychtech podcast episode 44, it highlights a lot of my current thinking and how all these things are connected. I’ve always said the internet of things needs a narrative, because right now it all feels too service/utility-like. Even Google’s Home project lacks that human-like narrative.
Some will sniff at this blog post, but hyper-reality is the best word I can think of to explain what happens when you mix media objects, physical things, storytelling and context together.
Building virtual worlds is nice; augmenting the real world is better. However, in my mind the future belongs to those who explore the crossover of things, devices and media. Can you imagine the incredible levels of immersion?
For the next few weeks I’m pretty busy. My calendar looks like I may have eaten something I’m allergic to and threw up. Leaving you with that pretty nasty thought.
Some of the highlights include…
- Showing Visual Perceptive Media in the Future zone at IBC, Amsterdam
- Talking about the Perceptive Media and Data Ethics at FutureFest, London
- Spacewrangling the tale of two cities at Mozilla Festival 2016, London
- Talking about Visual Perceptive Media at Changing the picture, Babelsberg
It’s not so much that I’m doing lots of big stuff; rather, it’s all the little bits in between some interesting events. For example: getting Visual Perceptive Media on BBC Taster in a sensible way, writing a paper for TVX 2017, arranging DJ Hackday 2017, etc.
The amount of blogging and tweeting might drop as a result. Sure, my 5.5 tweets a day has already seriously dropped, but I’m blaming Twitter for that.
Feeling myself using Twitter less and less, which is a shame…
— Ian Forrester (@cubicgarden) September 6, 2016
I always said an ordinary life does not interest me, and there is a certain amount of hustle involved with this all.
I have been aware of Magic Leap for ages, but since Dave sent me the piece about Magic Leap I’ve been looking at more of their work and approaches.
This is when I watched the recording of Graeme Devine at the Games for Learning Summit.
— Ian Forrester (@cubicgarden) September 1, 2016
The overall idea of mixed reality I certainly agree with… The important part is talking about worlds and experiences, nothing about screens or devices. I would suggest the statement…
Mixed Reality is the mixture of the real world & virtual worlds. So that one understands the other. This creates experience that cannot possibly happen anywhere else.
– Graeme Devine
…as the Moon shot.
It’s certainly something I’m also thinking a lot about when it comes to perceptive media: experiences which are simply not possible any other way. The only way they become possible is with the combination of the real world and the virtual/media world. I’m still inspired by some of the thinking behind alternate reality gaming: mixing reality with directed and scalable experiences.
I was also taken with their company ethos of…
- People are first
- What we make will be better, not always new
- The experience really matters
Good friend Dave mentioned Magic Leap and sent me a link to how it may work. I had a read and, although it was a reasonable read, I was less impressed than I maybe should have been. I get that Magic Leap is the thing lots of people are getting a little moist about; it seems incredible, but I share a small amount of the viewpoint of the blogger…
Regardless if it turns out to be a consumer success or not, this is the first example of real innovation the tech industry has seen in some time. I am extremely excited to see what happens next for them and looking forward to the shake up this will put on the industry in general.
To be clear, I’m not down on Magic Leap; it is innovative, but it’s more of the same. I’m only really interested in disruption right now, something the tech industry needs (imho).
I already mentioned my thoughts about mixed reality, and it hinges on the fact that it’s not just visual and audible. I draw your attention to the interaction design rant (touch), smell and media (smell) and, of course, the deeply problematic (taste).
This paper’s summary sums up my thoughts, I feel…
The senses we call upon when interacting with technology are very restricted. We mostly rely on vision and audition, increasingly harnessing touch, whilst taste and smell remain largely underexploited. In spite of our current knowledge about sensory systems and sensory devices, the biggest stumbling block for progress concerns the need for a deeper understanding of people’s multisensory experiences in HCI. It is essential to determine what tactile, gustatory, and olfactory experiences we can design for, and how we can meaningfully stimulate such experiences when interacting with technology. Importantly, we need to determine the contribution of the different senses along with their interactions in order to design more effective and engaging digital multisensory experiences. Finally, it is vital to understand what the limitations are that come into play when users need to monitor more than one sense at a time.
Being able to drive and combine all these things together (even in a basic, multisensory way) has the potential to be far more exciting and immersive than Magic Leap could even dream about. And it’s happening in dark and academic corners (I was maybe more excited by the Vibration API draft than by learning how Magic Leap may work – sad, who knows?). I’m sure they might be thinking the same, but the fascination of the tech industry is with higher-density A/V. Multisensory is the moonshot. Being able to drive these senses on demand in an ethical, sustainable and contextual way is something I think a lot about with Perceptive Media. Enabling anyone to create their own experiences to share is the next thing.
We (BBC R&D) have been exploring the new reality of creating object-based media through a range of prototypes. I have been exploring the implicit use of data and sensors to change the objects – or, as we started calling it a while ago, Perceptive Media.
The big issue is that realistically creating and authoring these new types of stories requires a lot of technical knowledge and doesn’t easily sit in the standard content creation workflow – or does it? We want to bring people together in a workshop format to explore the potential of creating accessible tools for authors and producers, ultimately seeding a community of practice through open experimentation and learning from each other.
The core of the workshop will focus on the question…
“Is it desirable and feasible for the community of technical developers and media explorers to build an open set of tools for use by storytellers and producers?”
Against the backdrop of the Sheffield International Documentary Festival, the workshop on Monday 13th June will bring together interested parties; we are putting out a call for people to work together with the aim of understanding how to develop tools which can benefit storytellers, designers, producers and developers.
We are calling for people, universities, startups, hackers and companies with a serious interest in opening up this area to reach out and join us. Apply for a ticket and we will be in touch.