I’ve been studying this area for a long while; when I talk about Perceptive Media, people always ask how it would work for news. The manipulation of feelings and of what you see can be used for good and, obviously, for very bad. Dare I say those words… fake news?
It’s always given me a slightly unsure feeling, to be fair, and there is a lot I see which gives me that feeling. In my heart of hearts I kinda wish it wasn’t possible, but wishing it so won’t make it so.
It was Si Lumb who first connected me with the facts behind the theory of what a system like Perceptive Media could ultimately be capable of. It’s funny, because many people laughed when I first talked about working with Preceptiv, whose mobile app underpinned the data source for Visual Perceptive Media; I mean, how could it build a profile of who I was in minutes from my music collection?
I was skeptical of course, but the question always lingered: with enough data in a short time frame, could you know enough about someone to gauge their general personality? And could you then change the media they are consuming to reflect, reject or even nudge?
According to what I’ve read and seen in the following pieces about Cambridge Analytica, the answer is yes! I’ve included some key quotes I found interesting.
Remarkably reliable deductions could be drawn from simple online actions. For example, men who “liked” the cosmetics brand MAC were slightly more likely to be gay; one of the best indicators for heterosexuality was “liking” Wu-Tang Clan. Followers of Lady Gaga were most probably extroverts, while those who “liked” philosophy tended to be introverts. While each piece of such information is too weak to produce a reliable prediction, when tens, hundreds, or thousands of individual data points are combined, the resulting predictions become really accurate.
Kosinski and his team tirelessly refined their models. In 2012, Kosinski proved that on the basis of an average of 68 Facebook “likes” by a user, it was possible to predict their skin color (with 95 percent accuracy), their sexual orientation (88 percent accuracy), and their affiliation to the Democratic or Republican party (85 percent). But it didn’t stop there. Intelligence, religious affiliation, as well as alcohol, cigarette and drug use, could all be determined. From the data it was even possible to deduce whether someone’s parents were divorced.
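The mechanism the quotes describe is simple to sketch: each individual “like” only nudges the odds a little, but naive-Bayes-style aggregation of many weak signals pushes the combined probability towards certainty. Below is a minimal, illustrative sketch (the 0.55 per-signal probability and the prior are invented numbers, not Kosinski’s actual model):

```python
import math

def log_odds(p):
    """Convert a probability into log-odds."""
    return math.log(p / (1.0 - p))

def combine(prior, evidence):
    """Naive-Bayes-style combination: sum the log-odds shift each weak
    signal contributes on top of the prior, then map back to a probability."""
    total = log_odds(prior) + sum(log_odds(p) - log_odds(prior) for p in evidence)
    return 1.0 / (1.0 + math.exp(-total))

# One "like" barely moves the needle: 55% vs a 50% prior...
single = combine(0.5, [0.55])

# ...but ~68 such likes (the figure from Kosinski's 2012 result) combined
# push the prediction close to certainty.
many = combine(0.5, [0.55] * 68)
```

Each signal on its own stays around 55%, yet the combination of 68 of them lands above 99%, which is exactly the “too weak individually, really accurate combined” effect described above.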
Some insight into the connection between Dr. Michal Kosinski and Cambridge Analytica
Any company can aggregate and purchase big data, but Cambridge Analytica has developed a model to translate that data into a personality profile used to predict, then ultimately change your behavior. That model itself was developed by paying a Cambridge psychology professor to copy the groundbreaking original research of his colleague through questionable methods that violated Amazon’s Terms of Service. Based on its origins, Cambridge Analytica appears ready to capture and buy whatever data it needs to accomplish its ends.
In 2013, Dr. Michal Kosinski, then a PhD candidate at the University of Cambridge’s Psychometrics Center, released a groundbreaking study announcing a new model he and his colleagues had spent years developing, by correlating subjects’ Facebook Likes with their OCEAN scores…
What did they do with that rich data? Dark posts!
Dark posts were also used to depress voter turnout among key groups of Democratic voters. “In this election, dark posts were used to try to suppress the African-American vote,” wrote journalist and Open Society fellow McKenzie Funk in a New York Times editorial. “According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous ‘super predator’ line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake.”
Because dark posts are only visible to the targeted users, there’s no way for anyone outside of Analytica or the Trump campaign to track the content of these ads. In this case, there was no SEC oversight, no public scrutiny of Trump’s attack ads. Just the rapid-eye-movement of millions of individual users scanning their Facebook feeds.
In the weeks leading up to a final vote, a campaign could launch a $10–100 million dark post campaign targeting just a few million voters in swing districts and no one would know. This may be where future ‘black-swan’ election upsets are born.
“These companies,” Moore says, “have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.”
When it was announced in June 2016 that Trump had hired Cambridge Analytica, the establishment in Washington just turned up their noses. Foreign dudes in tailor-made suits who don’t understand the country and its people? Seriously?
“It is my privilege to speak to you today about the power of Big Data and psychographics in the electoral process.” The logo of Cambridge Analytica— a brain composed of network nodes, like a map, appears behind Alexander Nix. “Only 18 months ago, Senator Cruz was one of the less popular candidates,” explains the blonde man in a cut-glass British accent, which puts Americans on edge the same way that a standard German accent can unsettle Swiss people. “Less than 40 percent of the population had heard of him,” another slide says. Cambridge Analytica had become involved in the US election campaign almost two years earlier, initially as a consultant for Republicans Ben Carson and Ted Cruz. Cruz—and later Trump—was funded primarily by the secretive US software billionaire Robert Mercer who, along with his daughter Rebekah, is reported to be the largest investor in Cambridge Analytica.
The US billionaire who helped bankroll Donald Trump’s campaign for the presidency played a key role in the campaign for Britain to leave the EU, the Observer has learned.
It has emerged that Robert Mercer, a hedge-fund billionaire, who helped to finance the Trump campaign and who was revealed this weekend as one of the owners of the rightwing Breitbart News Network, is a long-time friend of Nigel Farage. He directed his data analytics firm to provide expert advice to the Leave campaign on how to target swing voters via Facebook – a donation of services that was not declared to the electoral commission.
Cambridge Analytica, an offshoot of a British company, SCL Group, which has 25 years’ experience in military disinformation campaigns and “election management”, claims to use cutting-edge technology to build intimate psychometric profiles of voters to find and target their emotional triggers. Trump’s team paid the firm more than $6m (£4.8m) to target swing voters, and it has now emerged that Mercer also introduced the firm – in which he has a major stake – to Farage.
Some more detail, as we know from the previous posts…
Until now, however, it was not known that Mercer had explicitly tried to influence the outcome of the referendum. Drawing on Cambridge Analytica’s advice, Leave.eu built up a huge database of supporters creating detailed profiles of their lives through open-source data it harvested via Facebook. The campaign then sent thousands of different versions of advertisements to people depending on what it had learned of their personalities.
A leading expert on the impact of technology on elections called the revelation “extremely disturbing and quite sinister”. Martin Moore, of King’s College London, said that “undisclosed support-in-kind is extremely troubling. It undermines the whole basis of our electoral system, that we should have a level playing field”.
But details of how people were being targeted with this technology raised more serious questions, he said. “We have no idea what people were being shown or not, which makes it frankly sinister. Maybe it wasn’t, but we have no way of knowing. There is no possibility of public scrutiny. I find this extremely worrying and disturbing.”
There is so much to say about all this, and frankly it’s easy to be angry. But like Perceptive Media, it started out in the academic sector; someone took the idea and twisted it to no good end. Is that a reason why we shouldn’t proceed with such research? I don’t think so…
Of course most was edited out, but there’s a big chunk of the interview, mainly focused on the experience of Perceptive Media, which sits right on top of object based media. They described it as on the verge of a revolution, no less.
Everybody is busy in the run-up to the holidays, but I didn’t expect to be out of the country so much in November. I had planned to be busy in September, then for October to be about Mozfest (feeling guilty I still haven’t written about how Mozfest 2016 went). Then I’d focus on writing the TVX 2017 paper with Anna.
I’ll be talking about object based media and the big advantages of pursuing an internet-first/driven strategy and experiences in storytelling. I would be much more on the ball if I hadn’t finally caught the cold which I’d seemed to avoid all the way from May.
I have always wanted to take to the stage at Thinking Digital, and 3 years ago I joined Adrian at Thinking Digital Newcastle when the Perceptive Radio got its first public showing, during a talk about the BBC’s innovation progress since moving up to the north of England. I got the chance to build on that and talk about the work we are doing in object based media, data ethics and the internet of things. I’ve been rattling this around my head and started calling it hyper-reality storytelling.
Visual Perceptive Media is made to deliberately nudge you one way or another using cinematic techniques, rather than sweeping changes like those seen in branching narratives. Each change is subtle, but they are used in film making every day, which raises the question: how do you even start to demo something which has 50,000+ variations?
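The scale comes from simple combinatorics: if each layer of the drama varies independently, the total number of distinct renders is the product of the per-layer option counts. The counts below are invented for illustration, not the actual project’s numbers:

```python
from math import prod

# Hypothetical layer counts: 5 colour grades, 8 soundtracks,
# and 4 shot choices in each of 6 scenes.
layer_options = [5, 8] + [4] * 6

# Total distinct renders is the product of all the option counts.
total_variations = prod(layer_options)  # 5 * 8 * 4**6 = 163,840
```

Even these modest per-layer choices land well past 50,000 variations, which is why demoing the system exhaustively is a non-starter and lifting the curtain on the mechanism matters more.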
This is also the challenge we are exploring for a BBC Taster prototype. Our CAKE prototype deployed a behind-the-curtains view as well, which helped make clear what was going on – it seems Visual Perceptive drama needs something similar?
I honestly do think about this problem with Visual Perceptive Media and Perceptive Media generally: something which is meant to be so subtle you hardly notice it, yet you need to demonstrate it and show the benefits.
It’s tricky, but lifting up the curtain seems to be the best way. I am of course all ears for better ways…
You could say it’s like a theatre cast in your living room, and it starts to answer some of the questions about Perceptive Media killing the shared experience. There are already people hacking things to media; BBC R&D even experimented in this area a long time ago with the famous Dalek example, and of course the Perceptive Radio was just the start. The second version of the Perceptive Radio did actually include more connectivity options to reach out and interact with devices in the local space, such as Philips Hue lights, Bluetooth devices, etc. It seems so simple, but the big difference is that they are reacting to the media rather than being thought about at the script/narrative level. With object based media (media + metadata) we can get to a level much richer and more interesting than ever imagined previously.
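The script-level idea can be sketched as media objects carrying metadata cues which a player dispatches to devices in the room. Everything here is illustrative: `FakeHueLight` is a stand-in, not the real Philips Hue API, and the cue format is invented:

```python
from dataclasses import dataclass, field

@dataclass
class MediaObject:
    """One object in an object based media timeline: content plus metadata."""
    name: str
    start: float                 # seconds into the piece
    metadata: dict = field(default_factory=dict)  # script-level cues

class FakeHueLight:
    """Stand-in for a real smart-light client -- purely illustrative."""
    def __init__(self):
        self.colour = "neutral"
    def set_colour(self, colour):
        self.colour = colour

def play(timeline, light, now):
    """Fire, in timeline order, any lighting cues whose objects have started."""
    for obj in sorted(timeline, key=lambda o: o.start):
        if obj.start <= now and "light" in obj.metadata:
            light.set_colour(obj.metadata["light"])

# The writer specifies lighting in the script, not as an afterthought.
timeline = [
    MediaObject("opening scene", 0.0, {"light": "warm amber"}),
    MediaObject("storm sequence", 42.0, {"light": "cold blue"}),
]
light = FakeHueLight()
play(timeline, light, now=50.0)  # both cues have fired; the latest wins
```

The point of the sketch is the direction of travel: the cue lives in the narrative metadata, so the room reacts because the writer asked it to, not because a device happened to be listening.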
Imagine what would happen if the director/writer could start to specify these types of experiences, the same way a director chooses to show certain characters in certain light, angles, etc. However, the big difference is that it can be contextual, flexible and scalable for one person or many more. How about that for a shared experience?
Some will sniff at this blog post, but hyper-reality is the best word I can think of to explain what happens when you mix media objects, physical things, storytelling and context together.
Building virtual worlds is nice; augmenting the real world is better. However, in my mind the future belongs to those who explore the crossover of things, devices and media. Can you imagine the incredible levels of immersion?
It’s certainly something I’m also thinking a lot about when it comes to Perceptive Media: experiences which are simply not possible any other way. The only way they become possible is with the combination of the real and the virtual/media world. I’m still inspired by some of the thinking behind alternate reality gaming: mixing reality with directed and scalable experiences.
Regardless if it turns out to be a consumer success or not, this is the first example of real innovation the tech industry has seen in some time. I am extremely excited to see what happens next for them and looking forward to the shake up this will put on the industry in general.
To be clear, I’m not down on Magic Leap; it is innovative, but it’s more of the same. I’m only really interested in disruption right now – something the tech industry needs (imho).
This paper‘s summary sums up my thoughts, I feel…
The senses we call upon when interacting with technology are very restricted. We mostly rely on vision and audition, increasingly harnessing touch, whilst taste and smell remain largely underexploited. In spite of our current knowledge about sensory systems and sensory devices, the biggest stumbling block for progress concerns the need for a deeper understanding of people’s multisensory experiences in HCI. It is essential to determine what tactile, gustatory, and olfactory experiences we can design for, and how we can meaningfully stimulate such experiences when interacting with technology. Importantly, we need to determine the contribution of the different senses along with their interactions in order to design more effective and engaging digital multisensory experiences. Finally, it is vital to understand what the limitations are that come into play when users need to monitor more than one sense at a time.
Being able to drive and combine all these things together (even in a basic way – multisensory) has the potential to be far more exciting and immersive than Magic Leap could even dream about. And it’s happening in dark and academic corners (I was maybe more excited by the Vibration API draft than learning about how Magic Leap may work – sad, who knows?). I’m sure they might be thinking the same, but the fascination of the tech industry is with higher-density A/V. Multisensory is a moonshot. Being able to drive these on demand in an ethical, sustainable and contextual way is something I think a lot about with Perceptive Media. Being able to enable anyone to create their own experiences to share is the next thing.
We (BBC R&D) have been exploring the new reality of creating object based media through a range of prototypes. I have been exploring the implicit uses of data and sensors to change the objects; or, as we started calling it a while ago, Perceptive Media.
The big issue is that realistically creating and authoring these new types of stories requires a lot of technical knowledge and doesn’t easily sit in the standard content creation workflow – or does it? We want to bring together people in a workshop format to explore the potential of creating accessible tools for authors and producers, ultimately seeding a community of practice through open experimentation and learning from each other.
The core of the workshop will focus on the question…
“Is it desirable and feasible for the community of technical developers and media explorers to build an open set of tools for use by storytellers and producers?”
This Thursday (26th May 2016), I’ll be speaking at the Enterprise-IT Summit during Bucharest Technology Week, a celebration of the positive impact technology can have on our personal and professional lives. It’s going to be at the Athénée Palace Hilton in Bucharest.
I had never been to Romania, or Eastern Europe at all until I went to Poland last year, but I am really looking forward to meeting all the great people involved in the digital & tech scene out there. It will be fun to test their creative thinking in a little workshop following my talk on the same subject.
LJ Rich contacted me asking if I was up for an experiment. Of course I said yes, and without really knowing what I was in for, a few weeks later I was roped into taking part in BBC News’ #24Livestream on Facebook. It was a bit of a surprise, but an enjoyable one – shame about the technical difficulties at the start.
We’re back! #24Live NOW: We’re taking an interactive look inside BBC Research and Development. Ever wanted to know what…
In recent times, Ian Forrester has turned his attention to ‘Visual Perceptive Media.’ As we first reported late last year, this applies the same principles to video-based content.
For the first experiment in Visual Perceptive Media, the BBC worked with a screenwriter who created a short drama with multiple starts and endings. In addition to the variable plot, a number of different soundtracks were prepared, and the video was treated with a range of color gradings to give it different moods, from cold and blue to warm and bright.
Good to see The Next Web picking up on the effort we put into making all this very open. This comes from before my time at BBC Backstage, but having done things like Backstage certainly makes it easier to justify, us being a public organisation.
One thing that struck me when talking to the people working on all of these projects was that they were using the Web browser as their canvas and working with free-to-use, open technologies like OpenGL, Web Audio, Twitter Bootstrap and Facebook React.
And what better end than…
Some of the most interesting ideas for how that might happen are coming out of BBC R&D.
Imagine a world where the narrative, background music, colour grading and general feel of a drama is shaped in real time to suit your personality. This is called Visual Perceptive Media and we are making it now in our lab in MediaCityUK.
More details of the project will emerge soon, but I wanted to make certain things clear.
You are already seeing this happen with the movement around Stems in music. However, while audio manipulation in the open environment of the web is relatively easy via the Web Audio API, there’s no real unified equivalent for video. SMIL was meant to be that, but it got sidelined as HTML5 pushed the capabilities into browsers rather than media players.
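The stems idea itself is just per-layer gain control at mix time, which is what the Web Audio API makes trivial with GainNodes. As a language-neutral sketch (invented sample values, plain lists standing in for audio buffers):

```python
def mix_stems(stems, gains):
    """Mix equal-length mono stems sample-by-sample, each scaled by its
    own gain -- the operation GainNodes perform in a Web Audio graph."""
    length = len(next(iter(stems.values())))
    mixed = [0.0] * length
    for name, samples in stems.items():
        g = gains.get(name, 1.0)  # default: pass the stem through unchanged
        for i, s in enumerate(samples):
            mixed[i] += g * s
    return mixed

# Tiny illustrative "stems" (three samples each, made-up values).
stems = {
    "drums":  [0.2, 0.4, 0.2],
    "vocals": [0.1, 0.1, 0.3],
}

# A quieter profile might pull the drums back while keeping the vocals.
quiet_mix = mix_stems(stems, {"drums": 0.5, "vocals": 1.0})
```

The per-object gain is exactly the kind of knob a personality profile could drive; the missing piece the paragraph complains about is a similarly unified, per-object API for video.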
There has been some criticism about the personality side of things.
Data ethics is something we have been thinking and talking about a lot. Earlier this year we created a microsite summing up some of our thoughts and raising the opinions of some industry experts. The question about the filter bubble was talked about by many, but we didn’t include it in the short documentaries; maybe now would be a good time to dig them out.
But before I dive into the deep end, it’s important to say we are using personality simply as a proxy for changing things. It could have been anything; as someone even suggested, we could have used shoe size. We used personality after meeting Preceptiv a long while ago and being impressed by their technology.
The next thing was to connect the data to changeable aspects of a film. Film makers are very good at this, and working with Julius Amedume (film director and writer) we explored the links between personality and effect. Colour grade and music were the key ones, along with shot choices; these, we felt, were the most achievable.
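That mapping from personality data to film aspects can be sketched as a simple lookup. The trait threshold, the grade names and the music labels below are all invented for illustration (the real links were worked out with the director, not hard-coded like this):

```python
def grade_for(profile):
    """Map an OCEAN-style profile (trait scores in 0..1) to a colour grade
    and soundtrack choice. Threshold and labels are purely illustrative."""
    extraversion = profile.get("extraversion", 0.5)  # neutral default
    if extraversion >= 0.6:
        return {"grade": "warm and bright", "music": "upbeat score"}
    return {"grade": "cold and blue", "music": "sparse ambient score"}

# An extrovert profile gets the warmer treatment...
extrovert_look = grade_for({"extraversion": 0.8})

# ...while an introvert profile gets the colder, sparser one.
introvert_look = grade_for({"extraversion": 0.2})
```

In practice each dimension would drive several aspects at once (grade, music and shot choice), but the shape of the decision is the same: profile in, treatment out.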
The panel discussion on Thursday was great. I gave the following presentation after Gaby asked me to give more context to the video here. I was my usual firestarter self and maybe caused people to think quite a bit. The trend towards events around film is welcome, and there are some great people doing amazing things, but I was questioning film itself. We should demand more from the medium of film…
Some of the feedback afterwards was quite amazing. I had everything from “This will not work!” – I spent 15 productive minutes talking with one person about this – to in-depth questioning of what we have done so far and how, which revealed nothing.
I had a good chuckle at this tweet and must remember to bring it up at my next appraisal.
I generally don’t want to say too much because the research should speak for itself, but it’s certainly got people thinking and talking, and hopefully more of the BBC R&D projects around object based media will start to complete the picture of what’s possible and show the incredible value the BBC brings to the UK.
Amazing perceptive media from the @BBC r&d @cubicgarden real cutting edge of creating content, launches next year #TWU15
The This Way Up conference is a film exhibition innovation conference which launched last year. It returns with a jam-packed two-day event that promises to inspire and enlighten, provoke and challenge, connect and share.
Lunchtime Lab: BBC Perceptive Media – Want to contribute to the evolution of storytelling? BBC Research and Development’s North Lab, based at MediaCityUK in Salford, showcase their latest experiment in a top secret, closed door workshop. A select group of THIS WAY UP attendees will try out a new smartphone app before being shown a premiere of a short film that looks to change the way we engage. Further details are strictly under wraps, but the BBC are looking for volunteers to take part in this limited study and to share and discuss their experiences with other participants. Workshop led by Ian Forrester, BBC R&D North Lab. Results from the workshop will be revealed at Thursday’s The Film is Not Enough session.
It’s really research in the wild, and we have no idea how the audience will react to this. The results will be intriguing, to say the least.
On the Thursday I’ll be on a panel talking about the changes which need to happen to regain the cinema audience.
The Film is not Enough – With the rise of event cinema, alternative content, enhanced screenings, sing-a-longs and tweet-a-longs, is there a danger that the original purpose of cinemas is being lost as audiences demand novelty and gimmickry? This panel will hear from those folk changing audience perceptions and expectations of what ‘coming to the cinema’ means. Panel includes: Tony Jones (Cambridge Film Festival), Jo Wingate (Sensoria), Rhidian Davis (BFI), Gaby Jenks (Abandon Normal Devices – chair), Lisa Brook (Live Cinema), and Ian Forrester (BBC Research & Development).
I’ll talk about details of the project experienced on Wednesday and explain why this is a good and scalable way to regain the TV, and maybe the cinema, audience. The panel should be good, with a range of viewpoints and Gaby Jenks from Abandon Normal Devices chairing the debate.
It’s mainly about advertising, including a bit about the just-in-time advertising space which is coming about because of the lightning speed of data and the ability to replace advertising/content on the fly.
Heard it all before but then there was this part…
…what if programmatic could be used for content other than advertising?
If we extend this thinking (and our imagination) a little further to consider the possible emergence of a new distribution method for cultural or editorial content based on programmatic logic and methods, we could ask whether these new “programmatic” models could be applied to the automated distribution of film and television content based on audiences and their data.
Based on this logic, “programmatic content distribution” could be imagined as a flow in which the data collected from users would trigger an automated rights transaction and content delivery process between right-holders and broadcasters. The final result would be the broadcasting of content corresponding to the preferences of the targeted user.
Yes indeed, this is the start of Perceptive Media, if you haven’t already guessed. It’s always good to hear others make the same leaps in thinking, of course…
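The flow the quote describes – user data triggering an automated rights transaction plus content delivery – can be sketched end to end. Every name, title and fee below is invented; this is only the shape of the pipeline, not any real platform:

```python
# Hypothetical catalogue: each item carries the rights fee a transaction
# would settle with the right-holder.
CATALOGUE = [
    {"title": "Noir Mini-Series", "genre": "thriller", "licence_fee": 120},
    {"title": "Feel-Good Drama", "genre": "drama", "licence_fee": 80},
]

def programmatic_deliver(user_prefs, catalogue, ledger):
    """Sketch of programmatic content distribution: user data selects the
    content, a rights transaction is recorded in the ledger, and the
    matching item is 'delivered' (here, its title is returned)."""
    for item in catalogue:
        if item["genre"] == user_prefs.get("genre"):
            # The automated rights transaction between right-holder
            # and broadcaster, reduced to a ledger entry.
            ledger.append({"title": item["title"], "fee": item["licence_fee"]})
            return item["title"]
    return None  # nothing in the catalogue matches this user

ledger = []
delivered = programmatic_deliver({"genre": "thriller"}, CATALOGUE, ledger)
```

Swap the genre match for a personality-driven selector and the same skeleton turns into the implicit, perceptive version I keep arguing for.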
Programmatic media? Don’t think that will fly as a term, I’m sorry to say. Although I have to say, this description would be more like responsive media than perceptive media.
It was at Make Do Share in Warsaw that I first heard Lance Weiler talk about them in quite different contexts, and it did make sense. Phil has been grouping them together as contextual media, which works as a superset of both, although I worry about previous examples of contextual media clouding things.
The next part of the article I’m less interested in, but it is something I have thought about a tiny bit…
Moreover, it would be possible to monetize this video content by attaching superimposed or pre-roll ads to it, as commonly seen on video aggregation platforms.
This valuable collection of user data and preferences for viewing a movie or television show could be done on a voluntary basis; for example, users would simply answer a few questions on their mood, the type of movie or series, and the desired language and duration so that the platform can preselect and “program” content that meets their criteria.
But we know that the Web, which is very advanced in big data collection, is already capable of gathering this data using algorithms. Users’ actions on a given site—the keywords they search for, the links they click on, their daily search history—can indicate to the platforms what type of content they are likely to be interested in.
The problem they will get is the explicit nature of the input, I feel. Yes, it’s easier on the web, but the person is leaning forward, interacting most of the time anyway. When you get into the living room it gets a little more tricky, and an implicit approach is better in my mind. Yes, it can get creepy, but it doesn’t break the immersion, and in my mind that’s very key.
The essence of the programmatic distribution mechanism would therefore be as a recommendation super-engine, more sophisticated than that currently found on various platforms.