Déjà vu or generated reality

I saw “AI artist conjures up convincing fake worlds from memories” via Si Lumb and instantly thought about my experience of watching Vanilla Sky for the first time.

It could be incredible and terrifying for perceptive media, but alas the best technology always sits right on the fence, waiting for someone to drag it in one direction or another.

Human & AI Powered Creativity in Storytelling from TOA Berlin 2017

I already wrote about TOA Berlin and the different satellite events I also took part in. I remember how tired I was, getting to Berlin late and then being on stage early doors after multiple changes on public transport; I should have just taken a cab, really.

No idea what was up with my voice, but it certainly sounds a little odd.

Anyhow, lots of interesting ideas were packed into the slide deck, and they certainly caused a number of long conversations afterwards.

Someday all content will be made this way…

Lego Bricks

This is adapted from the BBC R&D blog post, but I felt it was important enough to repost on my own blog.

Object-based media (OBM) is something that BBC R&D has been working on for quite some time. OBM underpins many media experiences, including the one I keep banging on about: perceptive media.

I’ve spoken to thousands of producers, creators and developers across Europe about object-based work and experiences. Through those discussions it has become clear that people have many questions, there is confusion about what OBM is, and others would like to know how to get involved themselves.

So because of this… BBC R&D started a community of practice because we really do believe “Someday all content will be made this way.”

A community of practice brings together people and companies who are already working in the adaptive narrative field. BBC R&D does believe that the object-based approach is the key to content creation of the future: one which uses the attributes of the internet to let us all make more personal, interactive, responsive content. By learning together, we can turn it into something which powers media beyond the scope of the BBC.

There are three big aims for the community of practice…

  • Awareness: Seek out people and organisations already interested in or working on adaptive narratives through talks, workshops and conferences
  • Advocacy: Demonstrating best practice in our work and methods as we explore object-based media, connecting people through networks like the Storytellers United Slack channel, and helping share perspectives and knowledge.
  • Access: Early access to emerging software tools, to trial and shape the new technology together.

These aims are hugely important for the success and progress of object-based media.

As a start, we’re running a few events around the UK, because conferences are great but sometimes you just want to ask someone questions and get a better sense of the what and why. Our current plan is linked on the BBC R&D post, which I update every time a new event goes live.

Back at the Quantified Self conference in June

Quantified Self 2011

I’m back at the Quantified Self conference after a few years away due to scheduling and other conflicts. It’s actually been a while since I talked about the Quantified Self, mainly because it’s so mainstream now that few people even know what it is, although they use things like Strava, Fitbits, etc.

The line-up for the Quantified Self conference is looking very good, with plenty of good sessions for almost every palate, and I’ll be heading up this session while at the conference.

Using Your Data To Influence Your Environment

With home automation tools, it is now possible for your personal data to influence your environment. Soon, your personal data could be used to influence how a movie is shown to you! Let’s talk about the implications and ethics of data being used this way.

It’s basically centred on the notion that our presence affects the world around us, directly linking Perceptive Media and the Quantified Self together. Of course I’m hoping to tease out some of the complexity of data ethics with people who fully understand this and have skin in the game, as such.

I’m also looking to report back on this conference and restart the Manchester Quantified Self group, which went quiet a while ago.

Barbican’s black mirror exhibit

Black Mirror S1 E2: 15 Million Merits

Interesting news recently: the London Barbican will be opening an exhibit around the Black Mirror episode S1 E2, “15 Million Merits”.

I’ll be personally interested to see how far down the perceptive media (or, as I used to call it, intrusive TV) route they go. I’ll also be interested to see if they use the chance to educate the public about data ethics and the value of data, like the Science Museum has done.

Cambridge Analytica: The Rise of the Weaponized AI Propaganda Machine


I’ve been studying this area for a long while; when I talk about perceptive media, people always ask how it would work for news. I mean, manipulation of feelings and of what you see can be used for good and obviously for very bad! Dare I say those words… fake news?

It’s always given me a slightly unsure feeling, to be fair, and there is a lot I see which gives me that feeling. In my heart of hearts, I kind of wish it wasn’t possible, but wishing it so won’t make it so.

It was Si Lumb who first connected me with the facts behind the theory of what a system like perceptive media could ultimately be capable of. It’s funny, because many people laughed when I first talked about working with Perceptiv, whose mobile app underpinned the data source for Visual Perceptive Media; I mean, how could it build a profile of who I was in minutes from my music collection?

I was skeptical of course, but the question always lingered. With enough data in a short time frame, could you know enough about someone to gauge their general personality? And of course change the media they are consuming to reflect, reject or even nudge?

According to what I’ve read and seen in the following pieces about Cambridge Analytica, the answer is yes! I’ve included some key quotes I found interesting.

The Rise of the Weaponized AI Propaganda Machine

Remarkably reliable deductions could be drawn from simple online actions. For example, men who “liked” the cosmetics brand MAC were slightly more likely to be gay; one of the best indicators for heterosexuality was “liking” Wu-Tang Clan. Followers of Lady Gaga were most probably extroverts, while those who “liked” philosophy tended to be introverts. While each piece of such information is too weak to produce a reliable prediction, when tens, hundreds, or thousands of individual data points are combined, the resulting predictions become really accurate.
Kosinski and his team tirelessly refined their models. In 2012, Kosinski proved that on the basis of an average of 68 Facebook “likes” by a user, it was possible to predict their skin color (with 95 percent accuracy), their sexual orientation (88 percent accuracy), and their affiliation to the Democratic or Republican party (85 percent). But it didn’t stop there. Intelligence, religious affiliation, as well as alcohol, cigarette and drug use, could all be determined. From the data it was even possible to deduce whether someone’s parents were divorced.
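The mechanics behind that “weak signals combine into accurate predictions” claim are easy to demonstrate. Here is a toy sketch (this is not Kosinski’s actual model; the prior, signal strengths and counts are all invented) using naive-Bayes log-odds accumulation:

```python
import math

def combined_probability(prior, signal_probs):
    """Fold many weak per-signal probabilities into one posterior
    using naive-Bayes log-odds accumulation (signals assumed independent)."""
    prior_log_odds = math.log(prior / (1 - prior))
    log_odds = prior_log_odds
    for p in signal_probs:
        # each signal contributes its own log-odds relative to the prior
        log_odds += math.log(p / (1 - p)) - prior_log_odds
    return 1 / (1 + math.exp(-log_odds))

# One "like" that is only 55% indicative barely moves the needle...
single = combined_probability(0.5, [0.55])       # ≈ 0.55
# ...but thirty such likes together give near certainty.
many = combined_probability(0.5, [0.55] * 30)    # ≈ 0.998
```

The independence assumption is exactly why real models need years of refinement, but it shows how tens or hundreds of barely informative data points snowball into a confident prediction.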

Some insight into the connection between Dr. Michal Kosinski and Cambridge Analytica

Any company can aggregate and purchase big data, but Cambridge Analytica has developed a model to translate that data into a personality profile used to predict, then ultimately change your behavior. That model itself was developed by paying a Cambridge psychology professor to copy the groundbreaking original research of his colleague through questionable methods that violated Amazon’s Terms of Service. Based on its origins, Cambridge Analytica appears ready to capture and buy whatever data it needs to accomplish its ends.

In 2013, Dr. Michal Kosinski, then a PhD candidate at the University of Cambridge’s Psychometrics Center, released a groundbreaking study announcing a new model he and his colleagues had spent years developing. By correlating subjects’ Facebook Likes with their OCEAN scores…

What they did with that rich data: dark postings!

Dark posts were also used to depress voter turnout among key groups of democratic voters. “In this election, dark posts were used to try to suppress the African-American vote,” wrote journalist and Open Society fellow McKenzie Funk in a New York Times editorial. “According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous ‘super predator’ line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake.’”

Because dark posts are only visible to the targeted users, there’s no way for anyone outside of Analytica or the Trump campaign to track the content of these ads. In this case, there was no SEC oversight, no public scrutiny of Trump’s attack ads. Just the rapid-eye-movement of millions of individual users scanning their Facebook feeds.

In the weeks leading up to a final vote, a campaign could launch a $10–100 million dark post campaign targeting just a few million voters in swing districts and no one would know. This may be where future ‘black-swan’ election upsets are born.

“These companies,” Moore says, “have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.”

The Data That Turned the World Upside Down

When it was announced in June 2016 that Trump had hired Cambridge Analytica, the establishment in Washington just turned up their noses. Foreign dudes in tailor-made suits who don’t understand the country and its people? Seriously?

“It is my privilege to speak to you today about the power of Big Data and psychographics in the electoral process.” The logo of Cambridge Analytica— a brain composed of network nodes, like a map, appears behind Alexander Nix. “Only 18 months ago, Senator Cruz was one of the less popular candidates,” explains the blonde man in a cut-glass British accent, which puts Americans on edge the same way that a standard German accent can unsettle Swiss people. “Less than 40 percent of the population had heard of him,” another slide says. Cambridge Analytica had become involved in the US election campaign almost two years earlier, initially as a consultant for Republicans Ben Carson and Ted Cruz. Cruz—and later Trump—was funded primarily by the secretive US software billionaire Robert Mercer who, along with his daughter Rebekah, is reported to be the largest investor in Cambridge Analytica.

Revealed: how US billionaire helped to back Brexit

The US billionaire who helped bankroll Donald Trump’s campaign for the presidency played a key role in the campaign for Britain to leave the EU, the Observer has learned.

It has emerged that Robert Mercer, a hedge-fund billionaire, who helped to finance the Trump campaign and who was revealed this weekend as one of the owners of the rightwing Breitbart News Network, is a long-time friend of Nigel Farage. He directed his data analytics firm to provide expert advice to the Leave campaign on how to target swing voters via Facebook – a donation of services that was not declared to the electoral commission.

Cambridge Analytica, an offshoot of a British company, SCL Group, which has 25 years’ experience in military disinformation campaigns and “election management”, claims to use cutting-edge technology to build intimate psychometric profiles of voters to find and target their emotional triggers. Trump’s team paid the firm more than $6m (£4.8m) to target swing voters, and it has now emerged that Mercer also introduced the firm – in which he has a major stake – to Farage.

Some more detail, as we know from the previous posts…

Until now, however, it was not known that Mercer had explicitly tried to influence the outcome of the referendum. Drawing on Cambridge Analytica’s advice, Leave.eu built up a huge database of supporters creating detailed profiles of their lives through open-source data it harvested via Facebook. The campaign then sent thousands of different versions of advertisements to people depending on what it had learned of their personalities.

A leading expert on the impact of technology on elections called the revelation “extremely disturbing and quite sinister”. Martin Moore, of King’s College London, said that “undisclosed support-in-kind is extremely troubling. It undermines the whole basis of our electoral system, that we should have a level playing field”.

But details of how people were being targeted with this technology raised more serious questions, he said. “We have no idea what people were being shown or not, which makes it frankly sinister. Maybe it wasn’t, but we have no way of knowing. There is no possibility of public scrutiny. I find this extremely worrying and disturbing.”

There is so much to say about all this, and frankly it’s easy to be angry. But like Perceptive Media, it started out of the academic sector; someone took the idea and twisted it to no good end. Is that a reason why we shouldn’t proceed with such research? I don’t think so…

Liverpool Life talking about Perceptive Media

I have recently spent quite a bit of time in Liverpool, mainly for work but also slightly for pleasure. There were a few lectures/talks at FACT and Liverpool John Moores University.

Most of the presentations are on SlideShare, as per usual, but I also had the joy of being interviewed as part of a podcast talking about object-based media.

Of course most of it was edited out, but there’s a big chunk of the interview left, mainly focused on the experience of perceptive media, which sits right on top of object-based media. They described it as on the verge of a revolution, no less.

You can listen to the whole thing online at the Liverpool Life Audioboom channel from February 24th.

Busy in November…

image001

Everybody is busy in the run-up to the holidays, but I didn’t expect to be out of the country so much in November. I had planned for September to be busy, then for October to be about Mozfest (I feel guilty that I still haven’t written about how Mozfest 2016 went). Then I’d focus on writing the TVX 2017 paper with Anna.

Here’s the lineup of places I’m due to be soon.

I’ll be talking about object-based media and the big advantages of pursuing an internet-first/driven strategy and experiences in storytelling. I would be much more on the ball if I hadn’t finally caught the cold which I seemed to avoid all the way from May.

Perceptive Media at #tdcmcr video

Previously I mentioned the joy of talking at Thinking Digital Manchester.

I have always wanted to take to the stage of Thinking Digital. Three years ago I joined Adrian at Thinking Digital Newcastle, when the Perceptive Radio got its first public showing during a talk about the BBC’s innovation progress since moving up to the north of England. This time I got the chance to build on that and talk about the work we are doing in object-based media, data ethics and the internet of things. I’ve been rattling this around my head and have started calling it hyper-reality storytelling.

The super-efficient Thinking Digital Conference has already posted the video of the talk. Even this took me by surprise, as I was deep in the Mozfest festival when it went live; they did thankfully fix the video error we had on the day. The slides for the talk are up on SlideShare, of course.

The post I wrote for BBC R&D is also live, which summarises my thoughts about talking at IBC, FutureFest and Thinking Digital about Visual Perceptive Media.

Visual Perceptive Media is made to deliberately nudge you one way or another using cinematic techniques, rather than the sweeping changes seen in branching narratives. Each change is subtle, but these techniques are used in film-making every day, which raises the question: how do you even start to demo something which has 50,000+ variations?
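The 50,000+ figure makes sense once you see how independent object choices multiply. A hypothetical breakdown (these object names and counts are illustrative, not the actual Visual Perceptive Media object graph):

```python
from math import prod

# Hypothetical per-object choices; each object is varied independently,
# so the total number of distinct renders is their product.
options_per_object = {
    "colour grade": 4,
    "music stem": 5,
    "edit pacing": 5,
    "sound mix": 5,
    "scene ordering": 5,
    "voice treatment": 5,
    "lighting overlay": 5,
}
variations = prod(options_per_object.values())  # 4 * 5**6 = 62,500
```

A handful of subtle per-object options is enough to push past 50,000 distinct versions, which is why “just watch them all” is never an option for a demo.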

This is also the challenge we are exploring for a BBC Taster prototype. Our CAKE prototype deployed a behind-the-curtains view as well, which helped make it clear what was going on – it seems Visual Perceptive drama needs something similar.

I honestly do think about this problem in Visual Perceptive Media and Perceptive Media generally: something which is meant to be so subtle you hardly notice it, but which you need to demonstrate and show the benefits of.

It’s tricky, but lifting up the curtain seems to be the best way. I am of course all ears for better ways…

What are hyper-reality experiences?


I talked previously about mixed reality, but the consensus seems to be VR + AR = mixed reality… it looks like that ship has sailed, and no matter what I say nothing will bring it back. So I have started talking about hyper-reality when discussing perceptive media across objects and things.

You could say it’s like a theatre cast in your living room, and it starts to answer some of the questions about perceptive media killing the shared experience. There are already people hacking things to media; BBC R&D even experimented in this area a long time ago with the famous Dalek example, and of course the Perceptive Radio was just the start. The second version of the Perceptive Radio actually included more connectivity options to reach out and interact with devices in the local space, such as Philips Hue lights, Bluetooth devices, etc. It seems so simple, but the big difference is that those devices usually react to the media rather than being thought about at the script/narrative level. With object-based media (media + metadata) we can get to a level much richer and more interesting than ever imagined previously.
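The script-level difference can be sketched as a tiny dispatcher (all names here are hypothetical, not BBC R&D code): the timeline carries typed media objects with authored metadata, and a renderer routes each one to whatever devices in the room can handle it:

```python
from dataclasses import dataclass

@dataclass
class MediaObject:
    kind: str      # "audio", "light", "vibration", ...
    payload: dict  # metadata authored at the script/narrative level

def render(timeline, devices):
    """Route each scripted object to every device that handles its kind."""
    actions = []
    for obj in timeline:
        for device in devices:
            if obj.kind in device["handles"]:
                actions.append(f'{device["name"]}: {obj.payload["effect"]}')
    return actions

# A storm scene authored as objects, not baked into one fixed render.
timeline = [
    MediaObject("audio", {"effect": "thunder rumble"}),
    MediaObject("light", {"effect": "dim to storm blue"}),  # e.g. a Hue-style lamp
]
devices = [
    {"name": "speaker", "handles": {"audio"}},
    {"name": "lamp", "handles": {"light"}},
]
```

Here `render(timeline, devices)` would have the speaker rumble and the lamp dim; a room without a connected lamp simply skips that object, which is the graceful-degradation point.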

Imagine what would happen if the director/writer could start to specify these types of experiences, the same way a director chooses to show certain characters in certain light, angles, etc. The big difference is that it can be contextual, flexible and scalable for one or many more people. How about that for a shared experience?

Of course this brings up many ethical questions, data dilemmas, and questions about graceful degradation and progressive enhancement for media experiences. But I’m going to side-step that on my blog for now. There are too many questions, and research is well underway.

Ethics of personal data videos

Hyper-reality (or shall I call it hyper-narratives? I certainly can’t call it hypermedia) extends the narrative into the real world. This is fascinating because:

I contend this is closer to alternate reality gaming and the very popular immersive theatre works such as Sleep No More. A problem with both is the scalability and consistency of the experience, but what’s great about them is the unique and shared experiences.

The Verge recently did a What’s Tech podcast which talks about immersive theatre, alternate reality games and the logical future of this stuff. Like the Psychtech podcast episode 44, it highlights a lot of my current thinking and how all these things are connected. I have always said the internet of things needs a narrative, because right now it all feels too service/utility-like. Even Google’s Home project lacks that human-like narrative.

Internet of things needs a narrative

Some will sniff at this blog post but hyper-reality is the best word I can think of to explain what happens when you mix media objects, physical things, storytelling and context together.

Building virtual worlds is nice; augmenting the real world is better. However, in my mind the future belongs to those who explore the crossover of things, devices and media. Can you imagine the incredible levels of immersion?

 

Another busy period

Ordinary life does not interest me

For the next few weeks I’m pretty busy. My calendar looks like I may have eaten something I’m allergic to and thrown up. Leaving you with that pretty nasty thought.

Some of the highlights include…

It’s not so much that I’m doing lots of big stuff, rather all the little bits in between some interesting events. For example getting Visual Perceptive Media on BBC Taster in a sensible way, writing a paper for TVX 2017, arranging DJ Hackday 2017, etc., etc.

The amount of blogging and tweeting might drop as a result. Sure my 5.5 tweets a day has seriously dropped, but I’m blaming Twitter for that.

I always said an ordinary life does not interest me, and there is a certain amount of hustle involved with this all.

More magic leap thoughts

I have been aware of Magic Leap for ages, but since Dave sent me the piece about Magic Leap, I’ve been looking at more of their work and approaches.

This is when I watched the recording of Graeme Devine at the Games for Learning Summit.

The overall idea of mixed reality I certainly would agree with… The important part is talking about worlds and experiences, nothing about screens or devices. I would suggest the statement…


Mixed Reality is the mixture of the real world & virtual worlds. So that one understands the other. This creates experience that cannot possibly happen anywhere else.
– Graeme Devine

…as the Moon shot.

It’s certainly something I’m also thinking a lot about when it comes to perceptive media: experiences which are simply not possible any other way, only achievable with the combination of the real and the virtual/media world. I’m still inspired by some of the thinking behind alternate reality gaming; mixing reality with directed and scalable experiences.

I also found their company ethos of…

  • People are first
  • What we make will be better, not always new
  • The experience really matters

Quite interesting…

We need a magic leap for the other senses

Good friend Dave mentioned Magic Leap and sent me a link to how it may work. I had a read, and although it was a reasonable read, I was less impressed than I maybe should have been. I get that Magic Leap is the thing lots of people are getting a little moist about; it seems incredible, but I share a small amount of the viewpoint of the blogger…

Regardless if it turns out to be a consumer success or not, this is the first example of real innovation the tech industry has seen in some time. I am extremely excited to see what happens next for them and looking forward to the shake up this will put on the industry in general.

To be clear, I’m not down on Magic Leap; it is innovative, but it’s more of the same. I’m only really interested in disruption right now, something the tech industry needs (imho).

I already mentioned my thoughts about mixed reality, and it hinges on the fact that it’s not just visual and audible. I draw your attention to the interaction design rant (Touch), smell and media (Smell) and of course the deeply problematic (Taste).

This paper‘s summary sums up my thoughts, I feel…

The senses we call upon when interacting with technology are very restricted. We mostly rely on vision and audition, increasingly harnessing touch, whilst taste and smell remain largely underexploited. In spite of our current knowledge about sensory systems and sensory devices, the biggest stumbling block for progress concerns the need for a deeper understanding of people’s multisensory experiences in HCI. It is essential to determine what tactile, gustatory, and olfactory experiences we can design for, and how we can meaningfully stimulate such experiences when interacting with technology. Importantly, we need to determine the contribution of the different senses along with their interactions in order to design more effective and engaging digital multisensory experiences. Finally, it is vital to understand what the limitations are that come into play when users need to monitor more than one sense at a time.

Being able to drive and combine all these things together (even in a basic way – multisensory) has the potential to be far more exciting and immersive than Magic Leap could even dream about. And it’s happening in dark and academic corners (I was maybe more excited by the Vibration API draft than by learning how Magic Leap may work – sad, who knows?). I’m sure they might be thinking the same, but the fascination of the tech industry is with higher-density A/V. Multisensory is the moon shot. Being able to drive these on demand in an ethical, sustainable and contextual way is something I think a lot about with Perceptive Media. Being able to enable anyone to create their own experiences to share is the next thing.

Join us in exploring object-based media making tools


Like Visual Perceptive Media? Like the concept of the Perceptive Radio? Like the JavaScript libraries we have put out in an open and public way? We want you to come on board and join us…!

We (BBC R&D) have been exploring the new reality of creating object-based media through a range of prototypes. I have been exploring the implicit uses of data and sensors to change the objects – or, as we started calling it a while ago, Perceptive Media.

The big issue is that realistically creating and authoring these new types of stories requires a lot of technical knowledge and doesn’t easily sit in the standard content creation workflow – or does it? We want to bring together people in a workshop format to explore the potential of creating accessible tools for authors and producers, ultimately seeding a community of practice through open experimentation and learning from each other.

The core of the workshop will focus on the question…

Is it desirable and feasible for the community of technical developers and media explorers to build an open set of tools for use by storytellers and producers?

Against the backdrop of the Sheffield International Documentary Festival, the workshop on Monday 13th June will bring people together; we are putting out a call for interested parties to work with us, with the aim of understanding how to develop tools which can benefit storytellers, designers, producers and developers.

We are calling for people, universities, startups, hackers and companies with a serious interest in opening up this area to reach out and join us. Apply for a ticket and we will be in touch.