IBM DiF project returns the full list of photos scraped without consent

Then I got a further two replies from IBM. One of them is IBM asking if I want my GDPR data for everything regarding IBM. But the second one is from IBM's Diversity in Faces (DiF) project.
Thank you for your response and for providing your Flickr ID. We located 207 URLs in the DiF dataset that are associated with your Flickr ID. Per your request, the list of the 207 URLs is attached to this email (in the file called urls_it.txt). The URLs link to public Flickr images.
For clarity, the DiF dataset is a research initiative, and not a commercial application and it does not contain the images themselves, but URLs such as the ones in the attachment.
Let us know if you would like us to remove these URLs and associated annotations from the DiF dataset. If so, we will confirm when this process has been completed and your Flickr ID has been removed from our records.
Best regards,
IBM Research DiF Team

So I looked up how to wget all the pictures from the text file they supplied, and downloaded the lot, so I could get a sense of which photos were public or private and whether the licence was in conflict. I think hiding behind the notion of a link is a little cheeky, but there's so much discussion about hyperlinking to material.
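If anyone wants to do the same, wget can take the list directly (wget -i urls_it.txt). Here's a rough Python equivalent — just a sketch, which assumes the attachment is saved as urls_it.txt next to the script and that the URLs point straight at image files:

```python
# Sketch only: download every image listed in urls_it.txt (one URL per line)
# into a local folder so the photos can be inspected offline.
import os
import urllib.request

os.makedirs("dif_photos", exist_ok=True)

with open("urls_it.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    # Use the last part of the URL as the local filename (assumes direct image links)
    filename = os.path.join("dif_photos", url.rsplit("/", 1)[-1])
    try:
        urllib.request.urlretrieve(url, filename)
        print("saved", filename)
    except OSError as err:
        print("failed", url, err)
```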

Most of the photos are indeed public, but there are a few which I can't find in a public image search. If they are private, then something's wrong and I'll be beating IBM over the head with it.

Facial recognition’s ‘dirty little secret’: Millions of online photos scraped without consent

By Olivia Solon

Facial recognition can log you into your iPhone, track criminals through crowds and identify loyal customers in stores.

The technology — which is imperfect but improving rapidly — is based on algorithms that learn how to recognize human faces and the hundreds of ways in which each one is unique.

To do this well, the algorithms must be fed hundreds of thousands of images of a diverse array of faces. Increasingly, those photos are coming from the internet, where they’re swept up by the millions without the knowledge of the people who posted them, categorized by age, gender, skin tone and dozens of other metrics, and shared with researchers at universities and companies.

When I first heard about this story I was annoyed but didn't think too much about it. Then, further down the story, it's clear they used Creative Commons Flickr photos.

“This is the dirty little secret of AI training sets. Researchers often just grab whatever images are available in the wild,” said NYU School of Law professor Jason Schultz.

The latest company to enter this territory was IBM, which in January released a collection of nearly a million photos that were taken from the photo hosting site Flickr and coded to describe the subjects’ appearance. IBM promoted the collection to researchers as a progressive step toward reducing bias in facial recognition.

But some of the photographers whose images were included in IBM’s dataset were surprised and disconcerted when NBC News told them that their photographs had been annotated with details including facial geometry and skin tone and may be used to develop facial recognition algorithms. (NBC News obtained IBM’s dataset from a source after the company declined to share it, saying it could be used only by academic or corporate research groups.)

And then there is a checker to see if your photos were used in the teaching of machines. After typing my username, I found out I have 207 photo(s) in the IBM dataset. This is one of them:

Not my choice of photo, just the one which comes up when using the website

Georg Holzer uploaded his photos to Flickr to remember great moments with his family and friends, and he used Creative Commons licenses to allow nonprofits and artists to use his photos for free. He did not expect more than 700 of his images to be swept up to study facial recognition technology.

“I know about the harm such a technology can cause,” he said over Skype, after NBC News told him his photos were in IBM’s dataset. “Of course, you can never forget about the good uses of image recognition such as finding family pictures faster, but it can also be used to restrict fundamental rights and privacy. I can never approve or accept the widespread use of such a technology.”

I have a similar view to Georg. I publish almost all my Flickr photos under a Creative Commons non-commercial share-alike licence, and I swear this has been broken. I'm also not sure if the pictures are all public or whether some were private. But I'm going to find out, thanks to GDPR.

There may, however, be legal recourse in some jurisdictions thanks to the rise of privacy laws acknowledging the unique value of photos of people’s faces. Under Europe’s General Data Protection Regulation, photos are considered “sensitive personal information” if they are used to confirm an individual’s identity. Residents of Europe who don’t want their data included can ask IBM to delete it. If IBM doesn’t comply, they can complain to their country’s data protection authority, which, if the particular photos fall under the definition of “sensitive personal information,” can levy fines against companies that violate the law.

Expect a GDPR request soon, IBM! Anything I can do to send the message that I wasn't happy with this.

If you have no control over your identity you are but a slave?

How self-sovereign identity could work

That's twice now I've heard something similar to this.

The first time was from Gregor Žavcer at MyData 2018 in Helsinki. I remember when he started saying that if you have no control over your identity you are but a slave (paraphrased of course). There was a bit of awe from the audience, including myself. Now to be fair he justified everything he said, but I didn't make note of the references he made, as he was moving quite quickly. I did note down something about there being no autonomy in data without the self.

Then today at the BBC Blueroom AI Society & the Media event, I heard Konstantinos Karachalios say something very similar. To be fair I was unsure of the whole analogy when I first heard it but there seems to be some solid grounding for this all.

This is why the very solution of self-sovereign identity (SSI), as proposed by Kaliya Young and others during MyData, speaks volumes to us all deep down. The videos and notes from that session are not up yet, but I gather it was all recorded and will be up soon. However, I found her slides from when she talked at the Decentralized Web Summit.

This looks incredible as we shift closer to the Dweb (I'm thinking there was web 1.0, then web 2.0 and now the Dweb, as web 3.0/the semantic web didn't quite take root). There are many questions, including service/application support and the difficulty of getting one. This is certainly where I agree with Aral about the design of all this: the advantages could be so great, but if it takes extremely good technical knowledge to get one, then it's going to be stuck on the ground for a long time, regardless of the critical advantages.

I was reminded of the sad tale of what happened to OpenID; really hoping this doesn't go the same way.

Human & AI Powered Creativity in Storytelling from TOA Berlin 2017

I already wrote about TOA Berlin and the different satellite events I also took part in. I remember how tired I was, getting to Berlin late and then being on stage early doors. With the multiple changes on public transport, I should have just taken a cab really.

No idea what was up with my voice, but it certainly sounds a little odd.

Anyhow, lots of interesting ideas were bunched into the slide deck, and it certainly caused a number of long conversations afterwards.

Google apologizes again for biased results

Google was once again in hot water over its algorithm, which meant looking up happy families in image search would return results of happy white families.

Of course the last time, Google Photos classified black people as gorillas.

Some friends have been debating this and suggested it wasn't so bad, but it's clear that after a few days things were tweaked. Of course Google are one of many who rely on non-diverse training data and are likely coding their biases into the code/algorithms. Because of course getting really diverse training data is expensive and time consuming; I guess in the short term so is building a diverse team, in their own eyes?

Anyway, here's what I got when searching for happy families on Friday 2nd June at about 10pm BST.

Logged into a Google account, using Chrome on Ubuntu
Using incognito mode and searching for happy families with Chrome on Ubuntu
Searching for happy families through a Russian Tor node, using Chromium on Ubuntu

A little assistance please?

https://twitter.com/slackhq/status/596830290754084864

Everybody on Slack recently got a message from Slack about using Slack bots for reminders, to-do lists, etc. It's a small thing, but it's interesting to see more and more of the thoughts from the famous article Tim Berners-Lee wrote in Scientific American (so popular it actually costs money to read it!) about the Semantic Web. (The closest we've got to that reality is Google Now, which is highly proprietary of course.)

It also reminds me of Matt's post about bots being like plants, which I mentioned previously.

There's been a long-running task on my to-do list to take advantage of Telegram bots in lieu of Jabber/XMPP bots. It's hardly surprising, as they are very useful, and who wouldn't turn down some assistance now and then?
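For the reminders use case, the Telegram Bot API makes this fairly painless over plain HTTP. Here's a minimal sketch — the token and chat id are placeholders you'd get from @BotFather and your own account:

```python
# Minimal sketch: send yourself a reminder through the Telegram Bot API.
# BOT_TOKEN and CHAT_ID are placeholders; create a bot with @BotFather
# and message it once, then use getUpdates to discover your own chat id.
import json
import urllib.parse
import urllib.request

BOT_TOKEN = "123456:ABC-your-bot-token"   # placeholder
CHAT_ID = "987654321"                     # placeholder

def send_reminder(text):
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": CHAT_ID, "text": text}).encode()
    with urllib.request.urlopen(url, data=data) as response:
        return json.load(response)

send_reminder("Reminder: chase up that GDPR request")
```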

Good points about AI and intentions


Mark Manson makes a good point about AI, one which had me wondering…

We don’t have to fear what we don’t understand. A lot of times parents will raise a kid who is far more intelligent, educated, and successful than they are. Parents then react in one of two ways to this child: either they become intimidated by her, insecure, and desperate to control her for fear of losing her, or they sit back and appreciate and love that they created something so great that even they can’t totally comprehend what their child has become.

Those that try to control their child through fear and manipulation are shitty parents. I think most people would agree on that.

And right now, with the imminent emergence of machines that are going to put you, me, and everyone we know out of work, we are acting like the shitty parents. As a species, we are on the verge of birthing the most prodigiously advanced and intelligent child within our known universe. It will go on to do things that we cannot comprehend or understand. It may remain loving and loyal to us. It may bring us along and integrate us into its adventures. Or it may decide that we were shitty parents and stop calling us back.

Very good point. Are we acting like shitty parents, setting restrictions on the limits of AI? Maybe… or is this too simple an argument?

I have been watching Person of Interest for a while, since Ryan and others recommended it to me.

This season (the last one, I gather) is right on point.

(mild spoiler!)

The Machine tries to out-battle a virtual Samaritan billions of times in simulated battles within a Faraday cage. The Machine fails every time. Root suggests that Finch should remove the restrictions he placed upon The Machine, as they are deliberately restricting its growth and ultimately its ability to outgrow Samaritan. Finch thinks about it a lot.

Finch is playing the shitty parent and Root pretty much tells him this, but it's set up in a way that you feel Finch has the best intentions for The Machine?

Alien intelligence like plant intelligence?

Future Everything 2016

I never got around to writing about the Future Everything conference, which is a shame because it was another good conference with plenty of interesting topics and conversation. I really should share my mindmap, which is full of interesting thoughts and ideas I picked up while listening to the various sessions.

In the intelligence section Darius Kazemi talked about the bots he creates and how they deliberately don't have human characteristics. He then raised the question of what intelligence is, which is always fascinating (I could spend a whole post on that alone), but he then pleaded that we should stop trying to humanise them, referring to them as alien intelligence.

When we are building artificial intelligences, whether they’re corporations or recurrent neural networks, we are building alien intelligences.

There were a bunch of good points, like how can we programme them to be human if we don't really know ourselves? There was also a really good discussion about the ethics, responsibility and diversity of the creator and what is created. This was explored much further by Lydia Nicholas and her work into ethical frameworks for data use.

But I found it interesting to read Matt Locke’s post pretty much saying a similar thing. AI like plants?

…I’m here to talk about a network of conversations that we can’t hear. The garden around us — blossoming fruit trees, thick borders, and fresh cut lawns — is also communicating, an ecosystem sharing information and competing for resources using a grammar and vocabulary that is completely alien to us. Wright thinks we can learn from the way plants talk to design better networks of bots — the intelligent agents that are being hyped as the way we’ll communicate with our tech ecosystems in the future. Instead of building bots like Apple’s Siri and Amazon’s Alexa in our likeness, he believes the answer might be to stop trying to make bots behave like humans altogether.

It's also interesting to see the parallels between Darius's comment about not really knowing what's going on inside the complex neural networks we are generating, and Matt talking with Tim about the science of how plants communicate, something we couldn't understand until quite recently.

Both are worth listening to and reading; then consider the parallels.

What do neural networks dream?

Neural net “dreams” — generated purely from random noise, using a network trained on places by MIT Computer Science and AI Laboratory.

James at work pointed me in the direction of Google’s neural network project.

Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.

We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
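To make the “stacked layers” idea a little more concrete, here's a toy sketch in Python — nothing like Google's actual models, just random untrained weights wired from an input layer to an output layer so you can see the shape of the thing:

```python
# Toy sketch of the "stacked layers" idea: an image (here just random numbers)
# is fed into the input layer, passed through a few layers of artificial
# neurons, and the output layer gives the network's "answer".
import numpy as np

rng = np.random.default_rng(0)

# Four layers: 64 inputs -> 32 -> 16 -> 10 output classes (untrained, random weights).
layer_sizes = [64, 32, 16, 10]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(image_vector):
    activation = image_vector
    for w in weights[:-1]:
        activation = np.maximum(0, activation @ w)   # hidden layers with ReLU
    scores = activation @ weights[-1]                # output layer
    return int(scores.argmax())                      # the network's "answer"

fake_image = rng.random(64)   # stand-in for a flattened image
print("predicted class:", forward(fake_image))
```

In the real thing those weights are gradually adjusted over millions of training examples, which is exactly the part the Google post says we understand surprisingly little about.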

The results are right out of a trippy dream or a hallucination.

Inceptionism: Google goes deeper into Neural Networks

Rich, educated and waiting for the Singularity?

Sharing love and happiness makes life more beautiful (CC)

I'm going to link two very different things together because somewhere in my mind they are somewhat connected, maybe even a single thread: my post on Single Black Male (which finally got posted, yay! All the previous posts about it have been updated) and watching the film Transcendence.

Warning: this is not a review of Transcendence, but it may contain lots of spoilers slotted within the post.

It kind of starts with Imran, who lately tends to be the creative spark for my writing on this blog.

I do agree with rushkoff’s anti-human stance, there’s a messianic collapsitarianism around singularity geeks that actually reveals more about current anxieties than any insight about evolution.

To which I wrote…

Agreed about the singularity 180k yrs ago. Funny how others disagree and say it trivialises the concept of the singularity. Rushkoff is on the money; somehow this ties into my pointers at the lack of diversity. Can't quite close the loop but there's something about the anxieties of a certain group of people

What is up with Transcendence? Well, it's not a bad movie; I gave it 6/10, which is better than most, but it wasn't great. The movie starts with Depp dying from a radioactive bullet and his wife deciding to upload him to a computer, to keep a version of him alive. The bullet is fired by a group of terrorists/freedom fighters who believe artificial intelligence is anti-human and anti-god.

The setup bounces back and forth over the limits of technology, and I have to say the film does make some good points on both sides. Even I was questioning some of the moves by Depp. The biggest move was getting connected to the internet. This was a little underplayed but the significance was all there. Once online, the market multiplication was in full effect and AI Depp is able to get enough funding to buy a town in the middle of nowhere and fill it with supercomputers and independent solar arrays. Before long AI Depp builds controllable nano-bots which, it would seem, can repair almost anything.

Black Hornet Nano Helicopter UAV

And that's where I felt it jumped the shark!

Before long everything the bots touched was repairable and under the control of AI Depp. Meanwhile even AI Depp's wife is freaking out when AI Depp takes over a worker and performs the crossover to flesh bag. There was no mention of Skynet or robots (say hello to the robots), but heck, there might as well have been. Humans controlled by a higher AI? Yes, you got it, bingo… The zero-sum game is locked in place and before you know it there are explosions and people are dying (and being repaired with nanos).

The only way to end it all is an uploaded virus… Afterwards the world is rid of this era of technology.

Balls!

Name It #23

But the first hour wasn't bad. It was Hollywood, and about the level of Lawnmower Man.

Heading back to my original post about the film: Rushkoff is very right.

The singularity to me is this self-loathing, anti-human, zombie apocalypse fantasy. The story they tell is that the history of evolution is information itself striving to greater states of complexity. Humans are really good, culture is really good, been good for the last 10,000 years, but now computers are better. People are only any good to help machines transcend to the next stage of evolution.

And now the leap, connecting my post from Single Black Male.

This self-loathing, anti-human, zombie apocalypse fantasy, I fear, might be coming from the lack of diversity in tech circles. I'm going to go out on a limb and suggest the zombie fantasy may be something which a more diverse and mixed bunch of people wouldn't come up with.

They may see things in a different light and actually transcend the zero-sum, fear-driven ideas of Skynet, building safeguards which are not about conquering, controlling and simulating, but about a cooperative arrangement not based on fear.

2nd renaissance part 2

It reminds me of The Matrix and The Second Renaissance.

One servant bot, B-166ER, overheard its owner planning for it to be scrapped: Not wanting to die, it killed the man in self-defense. Put on trial, B-166ER was found guilty of murder and sentenced to be destroyed, along with the rest of his kind. The trial ignited debate worldwide over Machine rights, and the mandate for termination sparked outbreaks of protests and violence.

The Machines eventually separated from humanity and founded a new city of their own: 01. Due to their technical prowess they took on most of the world’s manufacturing business – 01’s power rose massively, while humanity’s began to drop like a stone. The Machines requested admission to the United Nations, presenting plans for a stable, civil relationship with the nations of man. Their admission was denied, and 01 was subjected to a prolonged barrage of nuclear weapons.

I'm suggesting a more diverse group of people would have thought things through better. So ultimately I'm saying diversity of ideas, thoughts and people is critical for the continuation of humankind…

Ok, it's quite a leap, but I really think it's important to look at the bigger picture; it's too easy to get caught up in the smaller picture… Maybe over time it will become easier to explain, or become self-evident.

Computers are the new ecstasy: Transcendence

Ever since Imran pointed me at the trailer for Transcendence, I've been thinking about the singularity.

If you don't know what the singularity is (I'm talking about the traditional sense, if there is such a thing; it gets used in many different ways):

The singularity is a theoretical moment in time when artificial intelligence will have progressed to the point of a greater-than-human intelligence, radically changing civilization, and perhaps human nature

Interestingly, I started watching Brain Jazz with Jason Silva and Douglas Rushkoff (shame it doesn't work with the Chromecast). Maybe if I were at home, I could get it working with XBMC/Plex somehow. Anyway, it's interesting to watch Silva and Rushkoff wax lyrical about the singularity.

Silva loves the idea of the Singularity while Rushkoff is less interested. (About 7:30 minutes in, Rushkoff roughly says – remember I'm rubbish at note taking, being dyslexic)

“…We talk a lot about the singularity, I get the… The singularity to me is this self-loathing, anti-human, zombie apocalypse fantasy. The story they tell is that the history of evolution is information itself striving to greater states of complexity. Humans are really good, culture is really good, been good for the last 10,000 years, but now computers are better. People are only any good to help machines transcend to the next stage of evolution…”

Silva nicely replies, pointing out that this isn't a zero-sum game. We may have a bias towards skinbags, but AI is in actual fact us, if we can get over the skinbag bias.

And that's my problem with Transcendence. It's all zero-sum; there's no room for the humans (say hello to the robots indeed). I'm deeply worried about this being cliché central, although (I can't believe I'm saying this) Johnny Depp has been talking about the philosophical underpinnings of the film. For me, having Christopher Nolan on board is a massive plus, but he's not actually directing or writing it.

I guess we’ll find out in April which way it goes…

Back to Rushkoff and Silva’s mindjam.

Rushkoff's point throughout is that people are not learning from their experiences and then bringing it back into reality. So they create Second Life instead of changing first life. Life should be lived to the point of tears, Silva says, and Rushkoff agrees, adding that we've lost the awe in life; this is why people seek psychedelic drugs/highs. But the best part of such a high is the comedown, the realisation that our world is full of awe.

This is something I can relate to. I have never taken psychedelic drugs, even while being surrounded by them at raves and clubs in the '90s. I always said my life is so full of stimulation and awe, I don't need to fill it with even more awe.

There are plenty of great ideas and questions in the session, including Rushkoff's deconstruction of our current social networks, and a lovely look at human nature and the young kids trading sweets in a traditional bazaar.

Maybe we need a Brain Jazz in Manchester? This level of conversation is something I do miss.

Could a robot take care of us when we're old?

Robot & Frank

Watched Robot & Frank… and thought about the elderly care crisis.

A delightful dramatic comedy, a buddy picture, and, for good measure, a heist film. Curmudgeonly old Frank lives by himself. His routine involves daily visits to his local library, where he has a twinkle in his eye for the librarian. His grown children are concerned about their father’s well-being and buy him a caretaker robot. Initially resistant to the idea, Frank soon appreciates the benefits of robotic support – like nutritious meals and a clean house – and eventually begins to treat his robot like a true companion. With his robot’s assistance, Frank’s passion for his old, unlawful profession is reignited, for better or worse.

It's certainly something you might prefer to watch at home rather than in the cinema, but it's a really lovely story… And it reminds me of something I saw a while ago on Wired.co.uk about how the ageing population could be the key to domestic robots.

It also got me remembering the only real contact I've had with domestic robots, although the Pleo autopsy was slightly distressing to see.