Watch The Social Dilemma for free on YouTube now

YouTube's algorithm highlighted to me that The Social Dilemma is available in full. I thought it was just another upload by someone, but then noticed the uploader was Netflix.

It looks like they are sharing the whole thing for free till the end of September.

The Social Dilemma

Although I wasn't glowing about the documentary and think there are better ones out there, I did give it some credit later. Ultimately the message is important and something I wish more people would watch and understand. It's actually weird watching it knowing what happened with Trump and the Capitol Hill riots.

It wasn't long ago that I tried to suggest to some people that using Facebook at all was part of the problem. The very notion of using something more suitable for the job, rather than a social network with its own agenda behind it, was not a welcome suggestion.

OkCupid doesn’t like my profile picture?

Ian's Profile picture

What could be wrong with this picture?

I got an email from OkCupid the other day, and it took a little while to work out which photo had been removed from my profile.

Hi,

OkCupid’s photo rules are in place to make OkCupid enjoyable for everyone. We are letting you know that we have removed one or more of your photos that were found to be in violation of these rules.
The most common reasons for a photo being removed are:
1. Your face is not visible, but the photo is in your “profile photos” album
As long as the photo doesn’t break any other rules, feel free to upload it to one of your profile essays instead. We love pet photos!
2. The photo is copyright/ not yours
All photos must be of you or taken by you. No copyright material allowed, including memes.
3. The photo contains erotic content
We ask that you appear in your photos as you would in a normal public context. Because of that, we don’t allow sexy bedroom photos, underwear photos, nudity, erotic poses, etc. Swimwear photos are ok only if they are in public at a beach or pool.
4. The photo contains inappropriate content
We don’t allow advertising, publishing of private information, photos of children alone, or hateful, threatening, or upsetting imagery. Profiles with inappropriate photos may be banned in addition to having the photo removed. See our full photo rules

Please note that if you disregard our photo rules multiple times, it could result in your account being permanently banned.
If you’re shy or concerned about privacy, you might want to check out our Incognito feature, which allows you to only be visible to people you have Liked first.

Thanks, and best of luck on OkCupid!

Sincerely,
OkCupid Support

OkCupid always be selling… They can take their Incognito and stick it.

The worst thing is I can't actually get a proper answer as to why the computer/algorithm says no. I'd love to know if it's got it all wrong, and why. I thought about making some changes, but I don't fancy my account getting banned.

Great to see nothing's really changed…

UK Home Office to scrap ‘racist algorithm’

Black lives matter
Photo by Sushil Nash on Unsplash

I couldn't help but see the clear connection between a conversation we had on the most recent Tech for Good Live podcast and the UK Home Office's (not officially announced) decision to scrap the algorithm for people applying for UK visas. The BBC also reports something similar.

The Home Office is to scrap a controversial decision-making algorithm that migrants’ rights campaigners claim created a “hostile environment” for people applying for UK visas.

The “streaming algorithm”, which campaigners have described as racist, has been used since 2015 to process visa applications to the UK. It will be abandoned from Friday, according to a letter from Home Office solicitors seen by the Guardian.

The transcript is online now (massive thanks to Tech for Good Live for making these). Ade made such a great point…

The Home Office response was, not only that they knew but that their focus was making the application simple to use, right? So, the overall performance was judged sufficient to deploy, and the Home Office told the BBC it wanted the process of uploading the passport application photo to be simple.

Simple as in white…?! Seriously!

I'm glad it's scrapped, but we have to ask serious questions about how it even made it out the door. It's something we talked about in the episode, along with the absolute responsibility of developers and technologists to call these things out. Passing it off as an MVP isn't good enough.

As Ethar says…

This does create a two tier dam. Do you think that does create.. Well.. part of that situation? It’s the fact that we technologists build to the greatest value first. In the event where we’ve chosen, we’ve made an explicit choice that white people have the greatest value in that context by doing what we’ve done and said that people of colour don’t matter.

I highly recommend listening to the whole podcast; it's well worth your time, as there are some great thoughts from Vimla and David too. Just listen and enjoy!

https://pod.co/tech-for-good-live/black-lives-matter-special-canaries-in-the-coal-mine-with-ian-forrester

So ironic being an Airbnb Superhost again

Airbnb Superhost

I have no idea why, but I was given Superhost status again for my spare room, which I open up on Airbnb. I only raise this because it's certainly not something I'm bothered about or will make a great deal of. I find it ironic, as I was being told off by the Airbnb algorithm not that long ago.

Google apologizes again for biased results

Google was once again in hot water over its algorithm, which meant looking up "happy families" in image search would return results of happy white families.

Of course, last time it was Google Photos classifying black people as gorillas.

Some friends have been debating this and suggested it wasn't so bad, but it's clear that after a few days things were tweaked. Of course, Google is one of many who rely on non-diverse training data and are likely coding their biases into the code/algorithms. Because of course getting genuinely diverse training data is expensive and time consuming; I guess, in their eyes, so is building a diverse team in the short term?
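To make that concrete, here's a minimal sketch of the kind of sanity check a team could run before training anything: count how each group is represented in a labelled dataset and flag whatever is badly skewed. The file name, column name and the 10% threshold are all illustrative assumptions, not anyone's real pipeline.

```python
# Minimal sketch: audit how groups are represented in a training set
# before a model is trained. The file name, column name and the 10%
# threshold are illustrative assumptions only.
import csv
from collections import Counter

def audit_representation(path, group_column, threshold=0.10):
    """Count rows per group and flag groups below the threshold share."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row[group_column]] += 1

    total = sum(counts.values())
    for group, count in sorted(counts.items(), key=lambda kv: kv[1]):
        share = count / total
        flag = "  <-- under-represented" if share < threshold else ""
        print(f"{group:20s} {count:6d} ({share:5.1%}){flag}")

if __name__ == "__main__":
    # e.g. a hypothetical CSV of image metadata with a 'skin_tone' column
    audit_representation("labels.csv", "skin_tone")
```

It's a trivial check, which is rather the point: spotting the imbalance is cheap, it's fixing it that costs money and time.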

Anyway, here's what I got when searching for "happy families" on Friday 2nd June at about 10pm BST.

Logged into my Google account, using Chrome on Ubuntu
Using incognito mode and searching for happy families, with Chrome on Ubuntu
Searching for happy families via a Russian Tor exit node, using Chromium on Ubuntu

Sunspring: A movie written by algorithms

I don't actually believe Adrian was the first to tell me about Sunspring, but I spent some time watching it this morning.

It's certainly not the first time someone has spoken about using algorithms and machine learning to create media, but it's the first time I've actually seen something… well… umm, interesting of sorts?

I wouldn't say it was hilarious, more weirdly incomplete. The training material is evident in what you see, but it jumps around a lot. It's worth watching, and I'd be interested in what happens when you get something clearer and more unique. However, what I was really wondering is…


Were the camera angles, shots, special effects, music, mood and colour grading also written by the algorithms? Heck, was the title? It doesn't seem like it, but who knows. I guess the bigger question is: does it even matter? So much of our media is middle of the road and made for the biggest audience, in my own opinion of course. Would it make much difference?
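For a sense of why the dialogue both echoes its source material and lurches between fragments of it, here's a toy character-level Markov sampler. This is far simpler than whatever actually produced Sunspring (reportedly a recurrent neural network trained on sci-fi scripts), and the training file name is made up, but it shows the same behaviour: short runs lifted almost verbatim from the training text, stitched together with abrupt jumps.

```python
# Toy character-level Markov text generator. Not the model behind
# Sunspring, just an illustration of why generated text both echoes its
# training material and jumps around between fragments of it.
import random
from collections import defaultdict

def build_model(text, order=4):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, order=4, length=300, seed=None):
    random.seed(seed)
    context = random.choice(list(model))
    out = context
    for _ in range(length):
        choices = model.get(context)
        if not choices:                      # dead end: jump to a new fragment
            context = random.choice(list(model))
            out += " " + context
            choices = model[context]
        nxt = random.choice(choices)
        out += nxt
        context = out[-order:]
    return out

if __name__ == "__main__":
    corpus = open("scifi_scripts.txt").read()   # hypothetical training text
    print(generate(build_model(corpus)))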

Of course, the most interesting ideas involve a combination of machine learning with human direction. But that's for another post…

The press feedback is varied… best to go and check it out for yourself.

Metadating in Newcastle

Couple in a coffee shop

Metadating… by Newcastle's Culture Lab (I must declare I'm working with these guys in BBC R&D's User eXperience Partnership, but this has nothing to do with me. I was told about it and went wow!)

Exploring the Romance of Personal Data: a singles dating event, hosted by researchers at Culture Lab

OK, you've got my interest already… the Quantified Self and dating?

We’re all creating more data about our lives, be it on social media or on our smartphones. Nowadays, people even use technology to track themselves and record how active they are, where they’ve been or how well they’ve slept. But how public should this data be? What would this look like on a dating profile? Would you like to know how late she works or whether he’s a night owl? Just how much does he workout? Where’s her favourite coffee shop?

Meta Dating is a free singles event for people interested in data and dating, hosted by researchers at Culture Lab, Newcastle University.

We’re looking for single people who have some experience of online dating to take part, meet other singles, have fun, and explore the romance of personal data!

Of course I signed up straight away… I am a little worried about how they are going to collect all my data, but I'll worry about that later. One of the questions asked was "why you?", to which I roughly replied…

I'm a fan of the Quantified Self and use online and offline dating services all the time. I'm also working in the Quantified Self area in regard to the ethics of data and new storytelling experiences. I'll be really interested to know if there's any link between the data about ourselves and the data about whom we seek.

As most of you know, I tend to hold quite strong views about online dating and the processes which services claim to use to match people. I've pretty much damned most dating sites for doing nothing more than simply bringing people together, like Facebook; shuffled my feet at the idea of using algorithms to match people; and even made jokes about using things like smell to match people. But what's upset me the most is the lack of scientific methodology applied to the problem.

Well, here's my chance to see if there is something to it, or if it's simply a joke like quantified toilets and premium dating. It will be fascinating to see how they get over things like looks, interests and the things which are just you, like race, height, etc… or will the results come back with something similar to the idea of the unquantifiable?

Built-in Filter and Algorithm failure

I enjoyed Jon Udell’s thoughts on Filter Failure.

The problem isn’t information overload, Clay Shirky famously said, it’s filter failure. Lately, though, I’m more worried about filter success. Increasingly my filters are being defined for me by systems that watch my behavior and suggest More Like This. More things to read, people to follow, songs to hear. These filters do a great job of hiding things that are dissimilar and surprising. But that’s the very definition of information! Formally it’s the one thing that’s not like the others, the one that surprises you.

One of the questions people have when they think about Perceptive Media is the filter bubble.

A filter bubble is a result state in which a website algorithm selectively guesses what information a user would like to see based on information about the user (such as location, past click behaviour and search history) and, as a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. Prime examples are Google's personalised search results and Facebook's personalised news stream. The term was coined by internet activist Eli Pariser in his book of the same name; according to Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble.

Whether the filter bubble is real or not is still heavily debated, but the idea of filters which get things wrong to add a level of serendipity sounds good. I do wonder, though, if people will be happy with a level of fuzziness in the algorithms they have become dependent on?

I’m always on the lookout for ways to defeat the filters and see things through lenses other than my own. On Facebook, for example, I stay connected to people with whom I profoundly disagree. As a tourist of other people’s echo chambers I gain perspective on my native echo chamber. Facebook doesn’t discourage this tourism, but it doesn’t actively encourage it either.

The way Jon Udell defeats the filters, he retains some kind of control. It's a nice way to get a balance, but as someone who only follows 200-ish people on Twitter and doesn't look at Facebook much, I actively like to remove the noise from my bubble.

As I think back on the evolution of social media I recall a few moments when my filters did “fail” in ways that delivered the kinds of surprises I value. Napster was the first. When you found a tune on Napster you could also explore the library of the person who shared that tune. That person had no idea who I was or what I’d like. By way of a tune we randomly shared in common I found many delightful surprises. I don’t have that experience on Pandora today.

Likewise the early blogosphere. I built my echo chamber there by following people whose lenses on the world complemented mine. For us the common thread was Net tech. But anything could and did appear in the feeds we shared directly with one another. Again there were many delightful surprises.

Oh yes, I remember spending hours in easyEverything internet cafes after work or before going out, checking out users' libraries, not really recognizing the names and listening to see if I liked them. Jon may not admit it, but I found the darknet provides some very interesting parallels with this: looking through what else someone has shared can be a real delight when you strike upon something unheard of.

And likewise, the blogosphere can lead you down some interesting paths. Take my blog, for example: some people read it because of my interest in technology, but the next post may be something to do with dating or life experience.

I do want some filter failure, but I want to be in control of when, really… and I think that's the point Jon is getting at…

I want my filters to fail, and I want dials that control the degrees and kinds of failures.

Where that statement leaves the concept of pure Perceptive Media, who knows…? But it's certainly something I've been considering for a long while.
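If you wanted one of Jon's dials in practice, a simple approach is to blend a "More Like This" similarity score with a controlled amount of randomness. The sketch below is a hypothetical illustration, not any real recommender: `serendipity` is the dial, where 0 gives you the pure bubble and 1 ignores similarity entirely.

```python
# Hypothetical sketch of a "serendipity dial" on top of a More Like This
# ranking. serendipity=0.0 reproduces the pure filter; serendipity=1.0
# ignores similarity entirely. Items and scores are made up.
import random

def rank_with_serendipity(candidates, similarity, serendipity=0.2, k=5, seed=None):
    """Return the top-k items, blending similarity with random noise.

    candidates  -- list of item ids
    similarity  -- dict of item id -> similarity score in [0, 1]
    serendipity -- the dial: how much the filter is allowed to 'fail'
    """
    rng = random.Random(seed)
    blended = {
        item: (1 - serendipity) * similarity.get(item, 0.0)
              + serendipity * rng.random()
        for item in candidates
    }
    return sorted(candidates, key=lambda item: blended[item], reverse=True)[:k]

if __name__ == "__main__":
    items = ["jazz", "techno", "folk", "grime", "ambient", "choral"]
    scores = {"techno": 0.9, "ambient": 0.8, "grime": 0.7,
              "jazz": 0.3, "folk": 0.2, "choral": 0.1}
    print(rank_with_serendipity(items, scores, serendipity=0.0, seed=1))  # the usual bubble
    print(rank_with_serendipity(items, scores, serendipity=0.6, seed=1))  # deliberate 'failure'
```

The interesting design question is who holds the dial: the service, or the person being filtered.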

Reminds me of that old saying… It's not a bug, it's a feature.

Is it possible to match people with science?

This has got to be one of the eternal questions. Maths and science have solved so many of our problems, but can they be used for working out the compatibility of humans?

That was one of the things which really intrigued me about A Year of Making Love. I assume you've seen how it turned in on itself after the production team totally screwed up the process and kept us all in the dark about it. If you want further evidence, do check out the tweets for #yearofmakinglove and #yoml.

However, because of the total screwup, most people are saying it's a total failure (maybe very true), but also that science, or rather maths, was never going to work… I can't disagree, especially after the experience we all had yesterday. However, basing any judgments off the back of yesterday's experience would be a mistake.

So do I personally think maths/science can match humans? Maybe… (yes, what a cop-out) but to be honest no one knows for sure. And that's the point of the experiment.

At the very start of the day (ordeal) we were introduced to the professor who devised the test questions and the matching algorithm. I remember tweeting this…

As Michael replied from afar…

And he’s right…

In my own experience to date, the matching algorithm over at OkCupid.com has been pretty darn good (not perfect!) (OkCupid's OkTrends posts are legendary: check out "the biggest lies people tell each other on dating sites" and "how race affects the messages you get"). But I had to train it to be good. To date I've answered about 700+ questions, and they're not just questions; they're detailed. You have to answer each one, then specify how important it is to you and which answer your ideal match would pick. This makes for many more dimensions in the answer criteria and ultimately the algorithm. In other words, the algorithm is only as good as the dataset it's working on.

You've got to put in the data and the time if you want it to be good… otherwise you're going to get crappy results.
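For anyone curious what "answer, importance, ideal answer" actually buys the algorithm, here's a rough sketch of the kind of weighted match score OkCupid has publicly described: each question carries points for how important it is, you earn another user's points when your answer is one they'd accept, and the match is roughly the geometric mean of both satisfactions. The point values and the question data below are illustrative; treat the details as an approximation rather than the real implementation.

```python
# Rough sketch of an OkCupid-style match percentage, based on the
# company's public description. Point values and question data are
# illustrative assumptions, not the real implementation.
from math import sqrt

IMPORTANCE_POINTS = {"irrelevant": 0, "a little": 1, "somewhat": 10,
                     "very": 50, "mandatory": 250}

def satisfaction(answers_a, answers_b):
    """How satisfied A would be with B, as a fraction of A's possible points."""
    earned = possible = 0
    for q, a in answers_a.items():
        points = IMPORTANCE_POINTS[a["importance"]]
        possible += points
        if q in answers_b and answers_b[q]["answer"] in a["acceptable"]:
            earned += points
    return earned / possible if possible else 0.0

def match_percent(answers_a, answers_b):
    """Geometric mean of the two satisfaction scores, as a percentage."""
    return 100 * sqrt(satisfaction(answers_a, answers_b) *
                      satisfaction(answers_b, answers_a))

if __name__ == "__main__":
    alice = {"smoking": {"answer": "never", "acceptable": {"never"},
                         "importance": "mandatory"},
             "night owl": {"answer": "yes", "acceptable": {"yes", "sometimes"},
                           "importance": "a little"}}
    bob = {"smoking": {"answer": "never", "acceptable": {"never", "sometimes"},
                       "importance": "very"},
           "night owl": {"answer": "sometimes", "acceptable": {"no"},
                         "importance": "somewhat"}}
    print(f"Match: {match_percent(alice, bob):.0f}%")
```

With only two questions the score swings wildly on a single mismatch, which is exactly why answering hundreds of them matters.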

This makes the 50 questions answered for The Year of Making Love look like a pop quiz (hotshot), to be honest.

So, back to the original question slightly modified: can an algorithm match people in the interest of love? I think so, to a certain extent, but it's not the complete picture. Chemistry is a big deal which is very difficult to understand. It's not found by answering questions but by watching the interaction between people; it's a different type of algorithm… Situations can cause chemistry: the reason everyone came together on the coaches home (or to the wrong city, as some of them seemed to do) is that there was a social situation which we could all share and talk about (cue talk about social objects/places). Chemistry was in full effect?

I hope people don't give up on science as a way to find their ideal partner just because of the terrible experience they had at The Year of Making Love… is, I guess, what I'm saying…