Cambridge Analytica: The Rise of the Weaponized AI Propaganda


I’ve been studying this area for a long while; when I talk about Perceptive Media, people always ask how it would work for news. I mean, manipulation of feelings and of what you see can be used for good and obviously for very bad! Dare I say those words… fake news?

It’s always given me a slightly uneasy feeling, to be fair, and there is a lot I see which reinforces that feeling. In my heart of hearts, I kinda wish it wasn’t possible, but wishing it so won’t make it so.

It was Si Lumb who first connected me with the facts behind the theory of what a system like Perceptive Media could ultimately be capable of. It’s funny, because many people laughed when I first talked about working with Perceptiv, whose mobile app underpinned the data source for Visual Perceptive Media; I mean, how could it build a profile of who I was in minutes from my music collection?

I was skeptical of course, but the question always lingered: with enough data in a short time frame, could you know enough about someone to gauge their general personality? And of course change the media they are consuming to reflect, reject or even nudge them?

According to what I’ve read and seen in the following pieces about Cambridge Analytica, the answer is yes! I’ve included some key quotes I found interesting.

The Rise of the Weaponized AI Propaganda Machine

Remarkably reliable deductions could be drawn from simple online actions. For example, men who “liked” the cosmetics brand MAC were slightly more likely to be gay; one of the best indicators for heterosexuality was “liking” Wu-Tang Clan. Followers of Lady Gaga were most probably extroverts, while those who “liked” philosophy tended to be introverts. While each piece of such information is too weak to produce a reliable prediction, when tens, hundreds, or thousands of individual data points are combined, the resulting predictions become really accurate.
Kosinski and his team tirelessly refined their models. In 2012, Kosinski proved that on the basis of an average of 68 Facebook “likes” by a user, it was possible to predict their skin color (with 95 percent accuracy), their sexual orientation (88 percent accuracy), and their affiliation to the Democratic or Republican party (85 percent). But it didn’t stop there. Intelligence, religious affiliation, as well as alcohol, cigarette and drug use, could all be determined. From the data it was even possible to deduce whether someone’s parents were divorced.
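To make that “many weak signals” idea concrete, here’s a rough toy sketch of my own (not Kosinski’s actual model, and every weight below is invented) showing how individually useless likes can add up to a confident prediction:

```python
import math

# Invented, illustrative log-odds weights for a single binary trait.
# Each "like" on its own is a weak signal; none would be convincing alone.
LIKE_WEIGHTS = {
    "mac_cosmetics": 0.30,
    "wu_tang_clan": -0.45,
    "lady_gaga": 0.15,
    "philosophy": -0.20,
}

def predict_trait(likes, prior=0.5):
    """Combine many weak like-signals into one probability (toy model)."""
    log_odds = math.log(prior / (1 - prior))
    for like in likes:
        log_odds += LIKE_WEIGHTS.get(like, 0.0)  # unknown likes add nothing
    return 1 / (1 + math.exp(-log_odds))         # back to a probability

print(predict_trait(["mac_cosmetics", "lady_gaga"]))   # drifts one way
print(predict_trait(["wu_tang_clan", "philosophy"]))   # drifts the other
```

Each like only nudges the odds a little, but a few dozen nudges in the same direction and the model becomes very sure about you.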

Some insight into the connection between Dr. Michal Kosinski and Cambridge Analytica

Any company can aggregate and purchase big data, but Cambridge Analytica has developed a model to translate that data into a personality profile used to predict, then ultimately change your behavior. That model itself was developed by paying a Cambridge psychology professor to copy the groundbreaking original research of his colleague through questionable methods that violated Amazon’s Terms of Service. Based on its origins, Cambridge Analytica appears ready to capture and buy whatever data it needs to accomplish its ends.

In 2013, Dr. Michal Kosinski, then a PhD candidate at the University of Cambridge’s Psychometrics Center, released a groundbreaking study announcing a new model he and his colleagues had spent years developing. By correlating subjects’ Facebook Likes with their OCEAN scores…
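The quote cuts off there, but the general shape of that kind of model is simple enough to sketch. Below is a minimal numpy version of the idea, “likes in, OCEAN scores out”, with completely made-up data and a plain ridge regression standing in for whatever they actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 invented users x 50 invented page-likes (1 = liked),
# plus their five OCEAN scores from a questionnaire (also invented).
likes = rng.integers(0, 2, size=(200, 50)).astype(float)
ocean = rng.normal(size=(200, 5))   # openness .. neuroticism columns

# Ridge regression: learn weights mapping a like-vector to OCEAN scores.
lam = 1.0
X, Y = likes, ocean
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Predict the personality profile of a new user from their likes alone.
new_user_likes = rng.integers(0, 2, size=(1, 50)).astype(float)
predicted_ocean = new_user_likes @ W
print(predicted_ocean.round(2))     # [openness, conscientiousness, ...]
```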

What they did with that rich data: dark posts!

Dark posts were also used to depress voter turnout among key groups of Democratic voters. “In this election, dark posts were used to try to suppress the African-American vote,” wrote journalist and Open Society fellow McKenzie Funk in a New York Times editorial. “According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous ‘super predator’ line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake.”

Because dark posts are only visible to the targeted users, there’s no way for anyone outside of Analytica or the Trump campaign to track the content of these ads. In this case, there was no FEC oversight, no public scrutiny of Trump’s attack ads. Just the rapid-eye-movement of millions of individual users scanning their Facebook feeds.

In the weeks leading up to a final vote, a campaign could launch a $10–100 million dark post campaign targeting just a few million voters in swing districts and no one would know. This may be where future ‘black-swan’ election upsets are born.

“These companies,” Moore says, “have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.”

The Data That Turned the World Upside Down

When it was announced in June 2016 that Trump had hired Cambridge Analytica, the establishment in Washington just turned up their noses. Foreign dudes in tailor-made suits who don’t understand the country and its people? Seriously?

“It is my privilege to speak to you today about the power of Big Data and psychographics in the electoral process.” The logo of Cambridge Analytica— a brain composed of network nodes, like a map, appears behind Alexander Nix. “Only 18 months ago, Senator Cruz was one of the less popular candidates,” explains the blonde man in a cut-glass British accent, which puts Americans on edge the same way that a standard German accent can unsettle Swiss people. “Less than 40 percent of the population had heard of him,” another slide says. Cambridge Analytica had become involved in the US election campaign almost two years earlier, initially as a consultant for Republicans Ben Carson and Ted Cruz. Cruz—and later Trump—was funded primarily by the secretive US software billionaire Robert Mercer who, along with his daughter Rebekah, is reported to be the largest investor in Cambridge Analytica.

Revealed: how US billionaire helped to back Brexit

The US billionaire who helped bankroll Donald Trump’s campaign for the presidency played a key role in the campaign for Britain to leave the EU, the Observer has learned.

It has emerged that Robert Mercer, a hedge-fund billionaire, who helped to finance the Trump campaign and who was revealed this weekend as one of the owners of the rightwing Breitbart News Network, is a long-time friend of Nigel Farage. He directed his data analytics firm to provide expert advice to the Leave campaign on how to target swing voters via Facebook – a donation of services that was not declared to the electoral commission.

Cambridge Analytica, an offshoot of a British company, SCL Group, which has 25 years’ experience in military disinformation campaigns and “election management”, claims to use cutting-edge technology to build intimate psychometric profiles of voters to find and target their emotional triggers. Trump’s team paid the firm more than $6m (£4.8m) to target swing voters, and it has now emerged that Mercer also introduced the firm – in which he has a major stake – to Farage.

Some more detail, adding to what we already know from the pieces above:

Until now, however, it was not known that Mercer had explicitly tried to influence the outcome of the referendum. Drawing on Cambridge Analytica’s advice, Leave.eu built up a huge database of supporters creating detailed profiles of their lives through open-source data it harvested via Facebook. The campaign then sent thousands of different versions of advertisements to people depending on what it had learned of their personalities.

A leading expert on the impact of technology on elections called the revelation “extremely disturbing and quite sinister”. Martin Moore, of King’s College London, said that “undisclosed support-in-kind is extremely troubling. It undermines the whole basis of our electoral system, that we should have a level playing field”.

But details of how people were being targeted with this technology raised more serious questions, he said. “We have no idea what people were being shown or not, which makes it frankly sinister. Maybe it wasn’t, but we have no way of knowing. There is no possibility of public scrutiny. I find this extremely worrying and disturbing.”
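Mechanically, sending “thousands of different versions of advertisements to people depending on what it had learned of their personalities” can be as crude as keying the ad copy off whichever trait dominates a profile. A hypothetical sketch (the trait names are OCEAN; the copy and everything else is invented):

```python
# Hypothetical ad-variant picker: choose copy by the profile's dominant trait.
AD_VARIANTS = {
    "openness":          "See the issue from a new angle...",
    "conscientiousness": "The facts, the record, the plan...",
    "extraversion":      "Join thousands at the rally near you...",
    "agreeableness":     "Protect the people you care about...",
    "neuroticism":       "What happens to your family if...",
}

def pick_ad(profile: dict) -> str:
    """profile maps OCEAN trait name -> score; return the matching ad copy."""
    dominant = max(profile, key=profile.get)
    return AD_VARIANTS[dominant]

print(pick_ad({"openness": 0.2, "conscientiousness": 0.1,
               "extraversion": 0.3, "agreeableness": 0.1,
               "neuroticism": 0.9}))   # the fear-framed variant wins
```

Swap the copy, keep the candidate, and every segment sees the version its profile says it will respond to, with no one else ever seeing it.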

There is so much to say about all this and frankly it’s easy to be angry. But like Perceptive Media, it started off in the academic sector. Someone took the idea and twisted it to no good end. Is that a reason why we shouldn’t proceed with such research? I don’t think so…

Facebook check-ins turned into advertising, quit moaning…

It’s intriguing to see ideas you’ve had previously explored and implemented. I wrote a while ago about mydreamscape and how it was going to make money. One of my suggestions was product and location placements.

Maybe a lot of people are dreaming about a certain location or a certain product. If you own that location or product, you may want to own that page and make it more like yours. So for example http://www.mydreamscape.org/items/buzzlightyear/ could be a page about Buzz Lightyear in dreams, with images and links to the item itself. This would also be true of locations; for example http://www.mydreamscape.org/location/europe/london/thamesbarrier would obviously link to the Thames Barrier in London, with information taken from Wikipedia.org and other open sources. The information architecture of exactly how this would work needs to be sorted out.

I had worked this out in my head, but decided not to include the option of having people act as spokespeople for a certain thing in their dreams. So realistically, if I was to consistently have dreams about Buzz Lightyear, not only would I be featured on an item page, I’d be highly ranked. And if one of my friends was to have a dream about Buzz Lightyear, not only would they have a link to /buzzlightyear, but my friend’s thoughts or dreams would be ranked much higher. Of course this would change once Pixar decided to own that space.
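Before anyone asks, the ranking I had in mind back then is simple enough to sketch; all the names, boosts and counts below are hypothetical:

```python
# Hypothetical ranking for a mydreamscape item page (e.g. /items/buzzlightyear/):
# the viewer sees dreams about the item, with friends' dreams boosted and
# frequent dreamers of that item boosted further.

FRIEND_BOOST = 2.0   # a friend's dream counts double (invented weighting)

def rank_dreams(dreams, viewer_friends, dream_counts):
    """dreams: list of (dreamer, dream_text); dream_counts: dreamer -> how
    often they've dreamt about this item. Returns dreams, best first."""
    def score(entry):
        dreamer, _ = entry
        s = float(dream_counts.get(dreamer, 1))
        if dreamer in viewer_friends:
            s *= FRIEND_BOOST
        return s
    return sorted(dreams, key=score, reverse=True)

dreams = [("ian", "Buzz flew over the Thames Barrier"),
          ("stranger", "Buzz again"),
          ("alice", "To infinity...")]
print(rank_dreams(dreams, viewer_friends={"ian"},
                  dream_counts={"ian": 7, "stranger": 2, "alice": 1}))
```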

Sounds confusing…?

Well, Facebook has just introduced this feature in a slightly different way…

If someone checks you in at a certain place, or you like a certain thing, Facebook can and will use your location/like to advertise that thing/location to your friends.

So back to mydreamscape: you would get “Ian Forrester had a dream about the Amazon Kindle” automatically, but the difference here is that Amazon would be able to pick and choose which stories they would use in the advertising. So you don’t get that embarrassing problem where a person has a negative dream and the advertising is based on it. Just because someone checks into Starbucks doesn’t mean they had a positive experience there, so running it past a human eye makes sense to me.
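That “human eye” step is really just an approval flag sitting between the raw check-ins or dreams and the ads. A hypothetical sketch of the flow (nothing here is Facebook’s actual system):

```python
# Hypothetical "sponsored stories with a human in the loop": only stories the
# advertiser has explicitly approved get shown to the user's friends.

from dataclasses import dataclass

@dataclass
class Story:
    user: str
    text: str
    approved: bool = False   # set by a human reviewer at the advertiser

def stories_to_promote(stories):
    """Return only advertiser-approved stories for friend-targeted ads."""
    return [s for s in stories if s.approved]

stories = [
    Story("ian", "Ian Forrester had a dream about the Amazon Kindle", approved=True),
    Story("ian", "Ian had a nightmare about dropping his Kindle in the bath"),  # never approved
]
for s in stories_to_promote(stories):
    print(s.text)
```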

OK, now that’s out of the way: I would agree that the whole process of mining users’ likes/check-ins for data to use for advertising purposes really sucks. But then again, to be fair to Facebook, it’s all in there in the EULA. If you don’t like it, for goodness’ sake switch to something else or stop using it.

Just quit moaning…