Ann Marie Carrothers from Mozilla is absolutely right; it's something I have mentioned many times, and recently I decided enough is enough. Weirdly, I have never had this discussion with Ann Marie in person.
I avoid all dating apps and services which don't let me search my own way through the people. I'm so sick of systems forcing one way of interacting, usually the Tinder-style swiping.
For example, OkCupid's mobile app won't allow you to search for people who use "geek" in their profile. I can hear people saying, "why on earth would you want this?!"
I'm personally not interested in generic people; I'm after unique people.
Instead of searching through millions of profiles, why not cut through the noise by finding someone who cares enough to add it to their profile? For example, searching for "geek" combined with my other filters on the website (like gender, age, distance, etc.) got it down to two women.
My search for "feminism" got it down to one woman.
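The kind of search I'm describing, keyword plus hard filters, can be sketched in a few lines. This is purely illustrative: the field names, profiles and filter values are all made up, not anything from OkCupid.

```python
# Hypothetical sketch of keyword-plus-filter profile search.
# All field names and sample data are made up for illustration.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    age: int
    distance_km: float
    bio: str

def search(profiles, keyword, max_age, max_distance_km):
    """Return profiles within the filters whose bio mentions the keyword."""
    return [
        p for p in profiles
        if p.age <= max_age
        and p.distance_km <= max_distance_km
        and keyword.lower() in p.bio.lower()
    ]

profiles = [
    Profile("A", 34, 5.0, "Proud geek, loves sci-fi and cycling"),
    Profile("B", 29, 3.0, "Enjoys brunch and travel"),
    Profile("C", 41, 60.0, "Geek at heart"),
]
print(len(search(profiles, "geek", max_age=45, max_distance_km=20)))  # 1
```

A keyword match on the free-text bio is exactly the thing swipe-first interfaces refuse to expose, even though it is trivial to implement.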
It's not for everyone, but that's fine, because the notion of swiping left and right looking at profile pictures isn't for everybody either.
Ian thinks: Douglas Rushkoff's monologue about GameStop is worth a listen for a different view, but stick around for the interview with Yaël, former head of political advertising at Facebook. She tells all, and I like her approach of trying to fix things before criticising.
Ian thinks: Sharenting is when parents share their kids' photos and private information without their consent. It's become a real problem now that millennials are growing up with a digital footprint without knowing it.
Ian thinks: Hearing about the absolute mess over news in Australia, it's easy to point fingers. But it's important to look deeper at what's really happening: this is about profits, not people. I'm with Shoshana Zuboff and others, but I know many people get their news from these massive corporations.
Ian thinks: The Uber case is great news, but much like Facebook and Google's legal play with Australia, there might be more going on than most are reporting. We've got to look a little deeper, as monopoly is Uber's end game.
Ian thinks: This is a devious way to force a take-down of a live stream or any recorded footage. There's got to be a better way, and I think it's related to using alternative platforms or self-hosting with syndication.
Ian thinks: I like this summary of so many of the problems with Facebook, but it misses the important point of centralisation. It does highlight Noam Cohen's quote: "Mark Zuckerberg is deluded by his own faith in Facebook's ability to be a force for good in the world".
Ian thinks: Mariana is on fire, and this summary of her work around the BBC puts value under a microscope. I love this line: "Value is not just the income generated at the end of the innovation chain – it is also the creative input at the upstream end, the vital investment in talent, content creation, digital innovation and R&D at the early stages".
Ian thinks: Mozilla's well-researched look at the state of the internet is one of those reports which spurs thought and action for the coming year. It's been a tricky year with lots of ups and downs, nicely documented in this massively detailed report/playbook. You might recognise someone in the report.
Ian thinks: The post has quite a few errors in it, like how it keeps referring to Mastodon as a single network, and it missed the memo that Gab removed itself from Mastodon. BlueSky sounds only slightly interesting, but the core of this post is focused on the risk of extreme groups using decentralised technology.
Ian thinks: Although this well-written paper focuses on public service broadcasting, I would consider the wider question of public service full stop. It's clear the likes of Uber, Airbnb, Amazon, Facebook etc. are aiming to replace public utilities. Of course I would say this, but public service needs to double down on the things which break Silicon Valley.
Ian thinks: Well, it's about time, but expect more E2E and zero-knowledge buzzwords to be thrown around this year. The question will always be: are they actually doing what they say they are? Looking at you, Zoom.
I was watching the NGI Policy Summit last week and it was good. There was lots to take away, but What your face reveals – the story of HowNormalAmI.eu stuck out as one of the highlights.
Dutch media artist Tijmen Schep will launch his latest work – an online interactive documentary that judges you through your webcam, and explains how face detection algorithms are invisibly pervading our lives. Can we really assess someone’s beauty, BMI or even life expectancy from just a photo of their face? After experiencing his creation, we’ll dive into the ‘making of’ and emerge with a better understanding of what face detection AI can – and cannot – do.
If you haven’t seen it, give it a try.
But I found the social media responses really interesting. It seems half the people are talking about and sharing their data, while the other half are talking about the details. People can't help comparing the details, even though they know it's biased.
I don’t buy it… and feel like I should try again with a slightly different picture for reference. I was looking forward to reporting them to the ICO, although the ICO never followed up on my Houseparty complaint.
Stealing Ur Feelings is an augmented reality experience that reveals how your favorite apps can use facial emotion recognition technology to make decisions about your life, promote inequalities, and even destabilize American democracy. Using the AI techniques described in corporate patents, Stealing Ur Feelings learns your deepest secrets just by analyzing your face.
Uber is now requiring the same good behavior from riders that it has long expected from its drivers. Uber riders have always had ratings, but they were never really at risk of deactivation — until now. Starting today, riders in the U.S. and Canada are now at risk of deactivation if their rating falls significantly below a city’s average.
“Respect is a two-way street, and so is accountability,” Uber Head of Safety Brand and Initiatives Kate Parker wrote in a blog post. “Drivers have long been required to meet a minimum rating threshold which can vary city to city. While we expect only a small number of riders to ultimately be impacted by ratings-based deactivations, it’s the right thing to do.”
Drivers face a risk of deactivation if they fall below 4.6, according to leaked documents from 2015, though average ratings are city-specific. Uber is not disclosing the average rider rating, but says “any rider at risk of losing access will receive several notifications and opportunities to improve his or her rating,” an Uber spokesperson told TechCrunch.
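The rules described in the article can be sketched in a few lines. The 4.6 driver threshold comes from the leaked 2015 documents mentioned above; the 0.5 rider margin is my own made-up stand-in, since Uber has not said how far "significantly below a city's average" actually is.

```python
# Sketch of the ratings-based deactivation rules described above.
# DRIVER_THRESHOLD is from the leaked 2015 documents; the rider
# margin is an assumption, as Uber has not disclosed the real rule.

DRIVER_THRESHOLD = 4.6

def driver_at_risk(rating: float) -> bool:
    """Drivers below the (city-specific) threshold risk deactivation."""
    return rating < DRIVER_THRESHOLD

def rider_at_risk(rating: float, city_average: float, margin: float = 0.5) -> bool:
    """Riders 'significantly below' the city average risk deactivation."""
    return rating < city_average - margin

print(driver_at_risk(4.5))       # True
print(rider_at_risk(4.2, 4.8))   # True: more than 0.5 below the average
print(rider_at_risk(4.5, 4.8))   # False
```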
Airbnb is still telling me off/trying to help with my score of 4.8/5, with 34 total reviews and 76% five-star reviews.
Mainly because I don't accept most people into my flat. There's no understanding of timing, workload, etc. In the algorithm's view, everyone should be maximising the number of people using the flat. They keep trying to push auto-booking on me. I expect it will become a requirement one day, and I'll leave Airbnb, as it's completely unsuitable for me.
Thank you for your prompt response. We confirm that we have deleted from the DiF dataset all the URLs linked to your Flickr ID and associated annotations. We have also deleted your Flickr ID from our records. IBM will require our research partners to comply with your deletion request and provide IBM with confirmation of compliance.
IBM Research DiF team
That's the end of the matter, although part of me wants to contact everybody in the photos and tell them what happened. Not sure what that would achieve, however.
Then I got a further two replies from IBM. One was IBM asking if I want my GDPR data for everything regarding IBM, but the second was from IBM's Diversity in Faces project.
Thank you for your response and for providing your Flickr ID. We located 207 URLs in the DiF dataset that are associated with your Flickr ID. Per your request, the list of the 207 URLs is attached to this email (in the file called urls_it.txt). The URLs link to public Flickr images.
For clarity, the DiF dataset is a research initiative, and not a commercial application and it does not contain the images themselves, but URLs such as the ones in the attachment.
Let us know if you would like us to remove these URLs and associated annotations from the DiF dataset. If so, we will confirm when this process has been completed and your Flickr ID has been removed from our records.
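IBM's reply describes the dataset as a list of URLs plus annotations rather than the images themselves. A minimal sketch of how a file like the urls_it.txt they attached could be filtered for one uploader's photos (the URL layout and the example user ID are my assumptions, not anything from IBM's email):

```python
# Hypothetical sketch: filtering a dataset file of Flickr photo-page
# URLs (like the attached urls_it.txt) for one uploader's photos.
# Assumes photo-page paths of the form /photos/<flickr_id>/<photo_id>;
# "example_user" is a made-up Flickr ID for illustration.
from urllib.parse import urlparse

def urls_for_user(urls, flickr_id):
    """Return dataset URLs whose Flickr photo-page path belongs to flickr_id."""
    matches = []
    for url in urls:
        parts = urlparse(url).path.strip("/").split("/")
        if len(parts) >= 2 and parts[0] == "photos" and parts[1] == flickr_id:
            matches.append(url)
    return matches

dataset = [
    "https://www.flickr.com/photos/example_user/12345678/",
    "https://www.flickr.com/photos/someone_else/87654321/",
]
print(len(urls_for_user(dataset, "example_user")))  # 1
```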
Facial recognition can log you into your iPhone, track criminals through crowds and identify loyal customers in stores.
The technology — which is imperfect but improving rapidly — is based on algorithms that learn how to recognize human faces and the hundreds of ways in which each one is unique.
To do this well, the algorithms must be fed hundreds of thousands of images of a diverse array of faces. Increasingly, those photos are coming from the internet, where they’re swept up by the millions without the knowledge of the people who posted them, categorized by age, gender, skin tone and dozens of other metrics, and shared with researchers at universities and companies.
When I first heard about this story I was annoyed, but didn't think too much about it. Then, further down the story, it becomes clear they used Creative Commons Flickr photos.
“This is the dirty little secret of AI training sets. Researchers often just grab whatever images are available in the wild,” said NYU School of Law professor Jason Schultz.
The latest company to enter this territory was IBM, which in January released a collection of nearly a million photos that were taken from the photo hosting site Flickr and coded to describe the subjects’ appearance. IBM promoted the collection to researchers as a progressive step toward reducing bias in facial recognition.
But some of the photographers whose images were included in IBM’s dataset were surprised and disconcerted when NBC News told them that their photographs had been annotated with details including facial geometry and skin tone and may be used to develop facial recognition algorithms. (NBC News obtained IBM’s dataset from a source after the company declined to share it, saying it could be used only by academic or corporate research groups.)
And then there is a checker to see if your photos were used in the teaching of machines. After typing my username, I found out I have 207 photos in the IBM dataset. This is one of them:
Georg Holzer uploaded his photos to Flickr to remember great moments with his family and friends, and he used Creative Commons licenses to allow nonprofits and artists to use his photos for free. He did not expect more than 700 of his images to be swept up to study facial recognition technology.
“I know about the harm such a technology can cause,” he said over Skype, after NBC News told him his photos were in IBM’s dataset. “Of course, you can never forget about the good uses of image recognition such as finding family pictures faster, but it can also be used to restrict fundamental rights and privacy. I can never approve or accept the widespread use of such a technology.”
I have a similar view to Georg. I publish almost all my Flickr photos under a Creative Commons non-commercial share-alike licence, and I swear this has been broken. I'm also not sure if the pictures are all private or not. But I'm going to find out, thanks to GDPR.
There may, however, be legal recourse in some jurisdictions thanks to the rise of privacy laws acknowledging the unique value of photos of people’s faces. Under Europe’s General Data Protection Regulation, photos are considered “sensitive personal information” if they are used to confirm an individual’s identity. Residents of Europe who don’t want their data included can ask IBM to delete it. If IBM doesn’t comply, they can complain to their country’s data protection authority, which, if the particular photos fall under the definition of “sensitive personal information,” can levy fines against companies that violate the law.
Expect a GDPR request soon, IBM! I'll do anything I can to send the message that I wasn't happy with this.
The first time was from Gregor Žavcer at MyData 2018 in Helsinki. I remember when he started saying that if you have no control over your identity, you are but a slave (paraphrased, of course). There was a bit of awe from the audience, including myself. To be fair, he justified everything he said, but I didn't make note of the references, as he was moving quite quickly. I did note down something about there being no autonomy in data without the self.
This looks incredible as we shift closer to the Dweb (I'm thinking there was web 1.0, then web 2.0 and now the Dweb, since web 3.0/the semantic web didn't quite take root). There are many questions, including service/application support and the difficulty of getting one. This is certainly where I agree with Aral about the design of all this: the advantages could be so great, but if it takes extremely good technical knowledge to get one, then it's going to be stuck on the ground for a long time, regardless of the critical advantages.
I already wrote about TOA Berlin and the different satellite events I took part in. I remember how tired I was, getting to Berlin late and then being on stage early doors, with multiple changes on public transport; I should have just taken a cab really.
No idea what was up with my voice, but it certainly sounds a little odd.
Anyhow, lots of interesting ideas were bunched into the slide deck, and they certainly caused a number of long conversations afterwards.
Some friends have been debating this and suggested it wasn't so bad, but it's clear that after a few days things were tweaked. Of course, Google is one of many who rely on non-diverse training data, and they are likely coding their biases into the code/algorithms. Getting genuinely diverse training data is expensive and time-consuming; I guess in their eyes, so is building a diverse team in the short term?
Anyway, here's what I get when searching for "happy families" on Friday 2nd June at about 10pm BST.