If you are at MyData, our event is in Hall H from 14:00–15:45 on the opening day, Wednesday 25th September.
More and more people live their lives online, and we are encouraged to view the internet as a public space. However, the personal data we bring to this space can be used in many inappropriate ways: Instagram stories are scraped to target advertising; faces in family photographs are used to train the ML systems that will scan crowds for suspects; the devices we thought we owned end up owning us; and our browsing histories are stored and scanned by governments and private companies. This creates a tension for public service organisations as they try to deliver value to audiences and users online.
In this session, experts from BBC Research & Development, the Finnish Broadcasting Company YLE, and PublicSpaces will consider how to resolve these tensions, and look at some specific interventions aimed at providing value to audiences and communities through the responsible use of private data in online public spaces.
The format will be four brief talks and a round table discussion.
Chair: Rhianne Jones (BBC)
PublicSpaces and an internet for the common good: Sander van der Waal (PublicSpaces)
The Living Room of the Future: Ian Forrester (BBC)
How public service media can engage online: Aleksi Rossi (YLE)
Data Stewardship and the BBC Box: Jasmine Cox / Max Leonard (BBC)
It's clear computational media is going to be a big trend in the next few years (if not already). You may have heard about deepfakes in the news, and that's just one end of the scale. Have a look through this Flickr group. It's worth remembering HDR (high dynamic range) is an early, accepted type of computational photography. I expect in-game/virtual photography is next, which is why I've shown in-game photography to make the point of where we go next.
It's clear that, just as every picture we see has been photoshopped, we will have to assume all media has been modified, computed or even completely generated. Computational capture and machine vision/learning really are something we have to grapple with. Media literacy and tools to more easily identify computational media are what's missing. But the computational genie is out of the bottle and can't be put back.
While I cannot deny that my real world photography experience aids my virtual photography through the use of compositional techniques, directional lighting, depth of field, etc. there is nothing that you cannot learn through experience. In fact, virtual photography has also helped to develop my photography skills outside of games by enabling me to explore styles of imagery that I would not normally have engaged with. Naturally, my interest in detail still comes through but in the virtual world I have not only found a liking for portraiture that I simply don’t have with real humans, but can also conveniently experiment with otherwise impractical situations (where else can you photograph a superhero evading a rocket propelled grenade?) or capture profound emotions rarely exhibited openly in the real world!
Virtual photography has begun to uncover a huge wealth of artistic talent as people capture images of the games they love, in the way they interpret them; how you do it really is up to you.
It's a new type of media, with a new sensibility and a new type of craft…
Something popped into my feed about a researchers' paper saying you can snoop on the choices people make in Netflix's interactive system. I'm hardly surprised, as it's typical network analysis plus GDPR requests. But it reminds me how important the work we have done with perceptive media is.
I best explain it as delivering (broadcasting) the experience as a contained bundle which unfolds within the safe space (maybe the living room) of the audience. Nothing is sent back to the cloud/base. This is closer to the concept of broadcast and means the audience/user(s) and their data aren't surveilled by the provider. This is exactly how podcasts used to work before podcast providers started focusing on metrics and providing apps which spy on their listeners. I would suggest the recent buyout of Gimlet Media by Spotify might point this way too?
Of course this broadcast/delivery model doesn't work too well for surveillance capitalism, but that's frankly not my problem; and all audience interaction should be (especially under HDI) explicitly agreed before data is shared or exported.
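To make that concrete, here is a minimal sketch of the pattern; the names and structure are my own assumptions for illustration, not the actual Perceptive Media implementation. The point is simply that adaptation happens inside the bundle on the device, and data export is a separate, consent-gated step.

```python
# Minimal sketch of the "broadcast bundle" idea, under my own assumptions
# (hypothetical names, not the actual Perceptive Media code): media objects
# and adaptation rules arrive together, sensing and adaptation stay on the
# device, and nothing is exported without explicit consent.

from dataclasses import dataclass

@dataclass
class Bundle:
    """A self-contained experience: media objects plus adaptation rules."""
    objects: dict   # e.g. {"evening_mix": "evening.ogg", ...}
    rules: list     # (condition, object_key) pairs

def read_local_sensors() -> dict:
    # Stand-in for on-device sensors (clock, light, presence...).
    # Nothing in here touches the network.
    return {"hour": 21, "people_in_room": 2}

def adapt(bundle: Bundle, context: dict) -> list:
    """Choose which media objects to play, entirely on the device."""
    return [key for condition, key in bundle.rules if condition(context)]

def export_data(context: dict, consent_given: bool):
    # The only path out of the device, gated on explicit agreement (HDI style).
    return context if consent_given else None

bundle = Bundle(
    objects={"evening_mix": "evening.ogg", "day_mix": "day.ogg"},
    rules=[(lambda c: c["hour"] >= 19, "evening_mix"),
           (lambda c: c["hour"] < 19, "day_mix")],
)
context = read_local_sensors()
print(adapt(bundle, context))        # ['evening_mix'] - decided locally
print(export_data(context, False))   # None - nothing goes back to base
```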
I might be idealistic about this all but frankly I know I’m on the right side of history and maybe the coming backlash.
It's a 36-hour hackathon around responsive/perceptive/adaptive media experiences. Participants work in teams to brainstorm ideas and create prototypes of their own storytelling experiences. They will compete against the clock, not against each other, sharing knowledge and expertise as they go. They won't be alone, as they will be mentored by industry experts sharing their knowledge and experiences. It's all part of BBC Academy's Manchester Digital Cities week.
The hackjam is only part of the story. On the late afternoon of Thursday 28th Feb there will be a mini-conference titled Storytelling in the Internet Age, where promising prototypes will be demoed to the audience.
Ideal participants come from the creative sectors, such as:
Freelancers, sole traders and SMEs working in new media fields combining data with media
Producers and Directors interested in adaptive and non-linear narratives; may have tried Twine, Eko, inkle, etc.
Students and Academics with a deep interest in object-based media, adaptive narratives and interactive digital narrative
Artists exploring mixed media and non-linear narratives
“…while building this attraction I also wanted to change the usual one-sided relation – a situation where the body is overwhelmed by physical impressions but the machine itself remains indifferent, inattentive for what the body goes through. Neurotransmitter 3000 should therefore be more intimate, more reciprocal. That’s why I’ve developed a system to control the machine with biometric data. Using sensors, attached to the body of the passenger – measuring his heart rate, muscle tension, body temperature and orientation and gravity – the data is translated into variations in motion. And so, man and machine intensify their bond. They re-meet in a shared interspace, where human responsiveness becomes the input for a bionic conversation.”
It's a good idea but unfortunately couldn't work on a rollercoaster, which is my thing. Or could it? For example, everyone's hands up in the air means what? The ride goes faster? How on earth would that work? How meaningful would it be if you could actually do this?
It's one of the research questions we attempted to explore in the Living Room of the Future: how can you combine different people's personal data to construct an experience which is meaningful and not simply a median of it all?
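As a toy illustration of that question (made-up numbers, nothing from the actual Living Room of the Future build), compare flattening a group's biometrics to a single number with keeping the spread as well, so the experience can respond to the group diverging rather than just to its average state:

```python
# Toy sketch of combining several people's biometric data into one
# control signal. Made-up numbers, not anything we actually built.

from statistics import median, pstdev

def naive_group_signal(heart_rates: list) -> float:
    """The 'median of it all' approach: one number, everyone flattened."""
    return median(heart_rates)

def divergence_aware_signal(heart_rates: list) -> dict:
    """Keep the spread as well as the centre, so the experience can
    react to the group disagreeing, not just to its average state."""
    return {
        "centre": median(heart_rates),
        "spread": pstdev(heart_rates),  # high spread = group not in sync
    }

riders = [88.0, 92.0, 61.0, 140.0]
print(naive_group_signal(riders))       # 90.0 - hides both outliers
print(divergence_aware_signal(riders))  # centre plus how split the group is
```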
Creator Charlie Brooker told The New York Times that he won’t be making more interactive episodes of the Netflix series – so no more difficult cereal choices in the future. Asked what advice he had for anyone attempting to make interactive TV, Brooker added: “Run away. It’s harder than you think.”
I wonder if Bandersnatch will ultimately cause people to avoid IDNs (Interactive Digital Narratives) or adaptive narratives. It would be a real shame if it did, but as Tom says in reply to my thoughts earlier today…
I do wonder if Netflix has done some damage by doing something so extreme? Something of a firework which everyone saw, but which started a fire as it rained down on people's heads?
Maybe James is right, along with Tom? Explicit Interactive Digital Narratives have been done to death. You only have to look at the stuff Marian was doing in the mid-to-late 2000s with shapeshifting media.
I predict that in a year or so's time people will have forgotten Bandersnatch (packed away on a top shelf, as James says), but this isn't good news for all those other productions and experiments which may not be as smart but are a genuine pleasure to be part of.
Will funding for IDNs dry up or boom because of Bandersnatch? Hard to tell at this stage.
What I would like from Netflix is some data/numbers on repeat viewings, the paths people take, etc. If I were writing a paper, this would be a good experiment to be in on.
I hinted at Perceptive Podcasting previously in a post about being busy. I have finally come out of that busy period and am UK bound as my passport is due to expire.
Just before the busy period, I drafted a post about Perceptive Podcasting and why it’s not simply another unique project. It went up on the BBC R&D blog recently which is wonderful because I can point to that rather than the other way around.
Since we first launched the Perceptive Radio v1 in 2013 as a concept of what Perceptive Media (implicit interaction from sensors & data, adapting media objects) could become, the radios have always been a framework for exploring further into adaptive object-based media experiences. But we have always acknowledged the growing power of the smartphone and how it could be the container for so much more.
Of course I've started a few podcasts myself (recently Techgrumps and Lovegrumps) and love the fact that it's quite easy to get started and it can feel quite personal. I also find the diversity of podcasting quite interesting; for example, I've been listening to The Guilty Feminist, Friends Like Us and Risk! for quite some time and find them fascinating every time.
Why a client for podcasts?
In 2017 you are seeing more web services hosting podcasts, like Stitcher (heck, even Spotify is hosting some). On the server side there is a lot you can do, like dynamically changing adverts, geo-fencing media, etc. 60dB are one such service doing nice things with podcasts, but they are limited in what they can do, as they said in a comment on a similar post. But doing all this server-side is a pain and tends to break the podcast idea of downloadable audio (even if you have 4G everywhere); it feels more like the radio model of tuning in.
Imagine if you could do the server-side type of processing on the actual device, and even unlock the pools of sensor data with the user's consent? And imagine if creators could use this in storytelling too!
It's personal, dynamic and responsive without being creepy or infringing personal liberties. It adapts to changes in context in real time. It dances with interactivity, and we are also exploring the value and feasibility of object-based media approaches for engaging with audiences. We believe this offers the key to creating increasingly immersive media experiences, as it gives more story possibilities to the writer/director/producer, but also provides levels of tailored accessibility we have yet to imagine.
So many possibilities, and it's made in a very open way to encourage others to try making content in an object-based way too.
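To sketch what that could look like on the client, here's a rough illustration; the structure and names are my assumptions, not the actual Perceptive Podcast client. The feed delivers audio objects plus creator-supplied metadata, and the client assembles them using context that never leaves the phone:

```python
# Rough sketch of on-device, object-based podcast assembly. The structure
# and names are hypothetical, not the actual Perceptive Podcast client:
# the feed delivers audio objects plus metadata, and the client chooses
# and orders them using context that never leaves the phone.

from dataclasses import dataclass

@dataclass
class AudioObject:
    file: str
    role: str        # "story", "advert" or "extra"
    tags: set        # creator-supplied metadata

def local_context() -> dict:
    # On-device only: no analytics beacon, no server round trip.
    return {"commuting": True, "interests": {"tech", "culture"}}

def assemble(objects: list, ctx: dict) -> list:
    playlist = []
    for obj in objects:
        if obj.role == "extra" and ctx["commuting"]:
            continue  # drop the deep-dive extras for a short commute
        if obj.role == "story" and not (obj.tags & ctx["interests"]):
            continue  # skip stories outside the listener's interests
        playlist.append(obj.file)
    return playlist

episode = [
    AudioObject("intro.ogg", "story", {"tech"}),
    AudioObject("deep_dive.ogg", "extra", {"tech"}),
    AudioObject("sports.ogg", "story", {"sport"}),
]
print(assemble(episode, local_context()))  # ['intro.ogg']
```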
I already wrote about TOA Berlin and the different satellite events I also took part in. I remember how tired I was, getting to Berlin late and then being on stage early doors; with the multiple changes on public transport, I should have just taken a cab really.
No idea what was up with my voice, but it certainly sounds a little odd.
Anyhow, lots of interesting ideas were bunched into the slide deck, and they certainly caused a number of long conversations afterwards.
This is adapted from the BBC R&D blog post, but I felt it was important enough to repost on my own blog.
Object-based media (OBM) is something that BBC R&D has been working on for quite some time. OBM underpins many media experiences, including the one I keep banging on about: perceptive media.
I've spoken to thousands of producers, creators and developers across Europe about object-based work and experiences. Through those discussions it's become clear that people have many questions, there has been confusion about what OBM is, and others would like to know how to get involved themselves.
So because of this… BBC R&D started a community of practice because we really do believe “Someday all content will be made this way.”
A community of practice brings together people and companies who are already working in the adaptive narrative field. BBC R&D do believe the object-based approach is the key to content creation of the future, one which uses the attributes of the internet to let us all make more personal, interactive, responsive content, and by learning together we can turn it into something which powers media beyond the scope of the BBC.
There are three big aims for the community of practice…
Awareness: Seek out people and organisations already interested in or working on adaptive narratives through talks, workshops and conferences
Advocacy: Demonstrate best practice in our work and methods as we explore object-based media, connect people through networks like the Storytellers United Slack channel, and help share perspectives and knowledge.
Access: Early access to emerging software tools, to trial and shape the new technology together.
These aims are hugely important for the success and progress of object-based media.
As a start, we're running a few events around the UK, because conferences are great but sometimes you just want to ask someone questions and get a better sense of the what and why. Our current plan is linked on the BBC R&D post, which I update every time a new event goes live.
I'm back at the Quantified Self conference; it's been a few years due to scheduling and other conflicts. It's actually been a while since I talked about the Quantified Self, mainly because I feel it's so mainstream now that few people even know what it is, although they use things like Strava, Fitbits, etc.
With home automation tools, it is now possible for your personal data to influence your environment. Soon, your personal data could be used to influence how a movie is shown to you! Let’s talk about the implications and ethics of data being used this way.
It's basically centred around the notion that our presence affects the world around us, directly linking Perceptive Media and the Quantified Self together. Of course I'm hoping to tease out some of the complexity of data ethics with people who fully understand this and have skin in the game, as such.
When I first heard about 60dB, I thought: great, someone's finally made an object-based podcasting client.
60dB brings you today’s best short audio stories – news, sports, entertainment, business and technology, all personalized for you.
Unfortunately I was wrong.
It's a bit like Stitcher, which is well loved by some people.
It does seem to pick and play news stories. But the sources are specially crafted (ready for syndication like this) rather than the client processing the audio and picking out the parts most relevant to your listening preferences.
It's understandable, because to do this you would need well-thought-out metadata created by the original author/production. Without it you can't have objects, and without objects you are reliant on serious processing of the audio to build the metadata which the player can use (that, or some serious computational power).
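For illustration, here's the kind of sidecar metadata a producer could publish alongside an episode. This is a hypothetical format (nothing like it exists in standard podcast RSS), but with it a client could pick out relevant segments without any heavy audio processing:

```python
# Hypothetical sidecar metadata a producer could publish next to an
# episode file. Nothing like this exists in standard podcast RSS today;
# it's a sketch of what would let a client pick out segments without
# heavy audio processing.

import json

sidecar = json.loads("""
{
  "episode": "ep42.mp3",
  "segments": [
    {"start": 0.0,   "end": 95.5,  "topic": "headlines"},
    {"start": 95.5,  "end": 310.0, "topic": "technology"},
    {"start": 310.0, "end": 540.0, "topic": "sport"}
  ]
}
""")

def relevant_clips(meta: dict, preferences: set) -> list:
    """Return (start, end) times for the segments the listener cares about."""
    return [(seg["start"], seg["end"])
            for seg in meta["segments"] if seg["topic"] in preferences]

print(relevant_clips(sidecar, {"technology"}))  # [(95.5, 310.0)]
```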
I had heard, and thought it was a logical move, that Google Play's podcasting support would include some kind of basic automated metadata/transcripts, but it never happened. Another missed opportunity to show off the power of Google and make themselves an essential part of the podcasting landscape, like Apple did with iTunes.
I've been studying this area for a long while; when I talk about perceptive media, people always ask how it would work for news. I mean, manipulation of feelings and what you see can be used for good and obviously for very bad! Dare I say those words… fake news?
It's always given me a slightly unsure feeling, to be fair, but there is a lot I see which gives me that feeling. In my heart of hearts, I kinda wish it wasn't possible, but wishing it so won't make it so.
It was Si Lumb who first connected me with the facts behind the theory of what a system like perceptive media could ultimately be capable of. It's funny, because many people laughed when I first talked about working with Perceptiv, whose mobile app underpinned the data source for visual perceptive media; I mean, how could it build a profile of who I was in minutes from my music collection?
I was skeptical of course, but the question always lingered. With enough data in a short time frame, could you know enough about someone to gauge their general personality? And of course change the media they are consuming to reflect, reject or even nudge?
According to what I've read and seen in the following pieces about Cambridge Analytica, the answer is yes! I've included some key quotes I found interesting.
Remarkably reliable deductions could be drawn from simple online actions. For example, men who “liked” the cosmetics brand MAC were slightly more likely to be gay; one of the best indicators for heterosexuality was “liking” Wu-Tang Clan. Followers of Lady Gaga were most probably extroverts, while those who “liked” philosophy tended to be introverts. While each piece of such information is too weak to produce a reliable prediction, when tens, hundreds, or thousands of individual data points are combined, the resulting predictions become really accurate.
Kosinski and his team tirelessly refined their models. In 2012, Kosinski proved that on the basis of an average of 68 Facebook “likes” by a user, it was possible to predict their skin color (with 95 percent accuracy), their sexual orientation (88 percent accuracy), and their affiliation to the Democratic or Republican party (85 percent). But it didn’t stop there. Intelligence, religious affiliation, as well as alcohol, cigarette and drug use, could all be determined. From the data it was even possible to deduce whether someone’s parents were divorced.
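The statistical trick behind this is worth seeing in miniature: each like is a weak predictor, but the log-odds nudges add up. Here's a toy illustration with made-up weights, nothing from Kosinski's actual models:

```python
# Toy illustration of how many weak signals combine into a confident
# prediction. The likes and weights are made up; nothing here comes from
# Kosinski's actual models. Each "like" nudges the log-odds a little,
# and the nudges accumulate.

import math

# Hypothetical per-like weights (log-odds contributions) for some trait.
weights = {"brand_a": 0.3, "band_b": -0.4, "page_c": 0.25, "show_d": 0.35}

def predict(likes: list, base_log_odds: float = 0.0) -> float:
    """Accumulate the weak log-odds nudges, return trait probability."""
    score = base_log_odds + sum(weights.get(like, 0.0) for like in likes)
    return 1 / (1 + math.exp(-score))  # logistic squash

print(round(predict(["brand_a"]), 2))  # ~0.57: one like says very little
print(round(predict(["brand_a", "page_c", "show_d"] * 20), 2))  # ~1.0: dozens of likes pile up
```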
Some insight into the connection between Dr. Michal Kosinski and Cambridge Analytica
Any company can aggregate and purchase big data, but Cambridge Analytica has developed a model to translate that data into a personality profile used to predict, then ultimately change your behavior. That model itself was developed by paying a Cambridge psychology professor to copy the groundbreaking original research of his colleague through questionable methods that violated Amazon’s Terms of Service. Based on its origins, Cambridge Analytica appears ready to capture and buy whatever data it needs to accomplish its ends.
In 2013, Dr. Michal Kosinski, then a PhD. candidate at the University of Cambridge’s Psychometrics Center, released a groundbreaking study announcing a new model he and his colleagues had spent years developing. By correlating subjects’ Facebook Likes with their OCEAN scores…
What did they do with that rich data? Dark posts!
Dark posts were also used to depress voter turnout among key groups of democratic voters. “In this election, dark posts were used to try to suppress the African-American vote,” wrote journalist and Open Society fellow McKenzie Funk in a New York Times editorial. “According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous ‘super predator’ line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake.”
Because dark posts are only visible to the targeted users, there’s no way for anyone outside of Analytica or the Trump campaign to track the content of these ads. In this case, there was no SEC oversight, no public scrutiny of Trump’s attack ads. Just the rapid-eye-movement of millions of individual users scanning their Facebook feeds.
In the weeks leading up to a final vote, a campaign could launch a $10–100 million dark post campaign targeting just a few million voters in swing districts and no one would know. This may be where future ‘black-swan’ election upsets are born.
“These companies,” Moore says, “have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.”
When it was announced in June 2016 that Trump had hired Cambridge Analytica, the establishment in Washington just turned up their noses. Foreign dudes in tailor-made suits who don’t understand the country and its people? Seriously?
“It is my privilege to speak to you today about the power of Big Data and psychographics in the electoral process.” The logo of Cambridge Analytica— a brain composed of network nodes, like a map, appears behind Alexander Nix. “Only 18 months ago, Senator Cruz was one of the less popular candidates,” explains the blonde man in a cut-glass British accent, which puts Americans on edge the same way that a standard German accent can unsettle Swiss people. “Less than 40 percent of the population had heard of him,” another slide says. Cambridge Analytica had become involved in the US election campaign almost two years earlier, initially as a consultant for Republicans Ben Carson and Ted Cruz. Cruz—and later Trump—was funded primarily by the secretive US software billionaire Robert Mercer who, along with his daughter Rebekah, is reported to be the largest investor in Cambridge Analytica.
The US billionaire who helped bankroll Donald Trump’s campaign for the presidency played a key role in the campaign for Britain to leave the EU, the Observer has learned.
It has emerged that Robert Mercer, a hedge-fund billionaire, who helped to finance the Trump campaign and who was revealed this weekend as one of the owners of the rightwing Breitbart News Network, is a long-time friend of Nigel Farage. He directed his data analytics firm to provide expert advice to the Leave campaign on how to target swing voters via Facebook – a donation of services that was not declared to the electoral commission.
Cambridge Analytica, an offshoot of a British company, SCL Group, which has 25 years’ experience in military disinformation campaigns and “election management”, claims to use cutting-edge technology to build intimate psychometric profiles of voters to find and target their emotional triggers. Trump’s team paid the firm more than $6m (£4.8m) to target swing voters, and it has now emerged that Mercer also introduced the firm – in which he has a major stake – to Farage.
Some more detail, adding to what we know from the pieces above:
Until now, however, it was not known that Mercer had explicitly tried to influence the outcome of the referendum. Drawing on Cambridge Analytica’s advice, Leave.eu built up a huge database of supporters creating detailed profiles of their lives through open-source data it harvested via Facebook. The campaign then sent thousands of different versions of advertisements to people depending on what it had learned of their personalities.
A leading expert on the impact of technology on elections called the revelation “extremely disturbing and quite sinister”. Martin Moore, of King’s College London, said that “undisclosed support-in-kind is extremely troubling. It undermines the whole basis of our electoral system, that we should have a level playing field”.
But details of how people were being targeted with this technology raised more serious questions, he said. “We have no idea what people were being shown or not, which makes it frankly sinister. Maybe it wasn’t, but we have no way of knowing. There is no possibility of public scrutiny. I find this extremely worrying and disturbing.”
There is so much to say about all this, and frankly it's easy to be angry. But like Perceptive Media, it started out in the academic sector. Someone took the idea and twisted it for no good. Is that a reason why we shouldn't proceed with such research? I don't think so…