In the EU or UK? Opt out of Meta using data to train AI

Facebook form for opting out of data being used to train Meta AI

I heard Meta were planning to change their EULA/terms of use and planned to opt out in any way possible. But I was pleasantly surprised to see that not only the EU but also the UK can opt out using the short form above. Although I think it is cheeky to require a reason…

I assumed the EU would be covered but the UK would be left out. Thank goodness for GDPR.

You can learn a lot more by reading this how-to guide or watching this video.

Update just 55 minutes later

I got this email from Facebook/Meta:

Hi Ian,

We’ve reviewed your request and will honor your objection. This means your request will be applied going forward.

If you want to learn more about generative AI, and our privacy work in this new space, please review the information we have in Privacy Center.

facebook.com/privacy/genai

This inbox cannot accept incoming messages. If you send us a reply, it won’t be received.
Thanks,
Privacy Operations

Public Service Internet monthly newsletter (Apr 2024)

Back of 2 robots approaching the united nations

We live in incredible times with such possibilities, that is clear. Although it’s easy to be dismissive after Klarna’s foot-in-mouth statement that its AI chatbot does the work of 700 people, hearing about the unlikely but technically possible Meta VR inception attack, and Meta being caught snooping on users via a VPN app they previously bought, is chilling stuff.

To quote Buckminster Fuller: “You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.”

You are seeing aspects of this with the FCC adopting Cyber Trust labelling, discussions about norms for wearables, and Mozilla’s change in privacy partner.


Revisiting the dark forest filled with Gen AI

Ian thinks: A little while back, the dark forest theory was heavily mentioned and quoted. It was followed up not long afterwards with thoughts about GenAI last year. I have found it useful to re-read it and reflect on where we are now in 2024.

Are you afraid? The race for AI robots

Ian thinks: Watching the race for AI robots honestly makes me feel slightly defensive. It’s harder to work out the real from the hype, and this video helps a lot with this. My defensiveness reminds me of a scene in War of the Worlds and the Animatrix’s Second Renaissance. How would you react?

Deep concerns about nostalgia

Ian thinks: I have always had a real problem with nostalgia, and this episode of Tech Won’t Save Us really speaks to the concerns I see/hear too often.

Ian thinks: OpenAI says it’s impossible, but they are wrong, as proven by the nonprofit Fairly Trained and zero copyright material. Expect many more court cases around all this soon.

How the digital divide looks in the UK post pandemic

Ian thinks: This Guardian short video highlights some of the deep dividing issues which are easily forgotten in the forever pace of technology.

Anger and disillusionment with Ed Zitron

Ian thinks: I recently subscribed to Better Offline with Ed. It’s refreshing to have good, informative rants about the state of the tech industry; however, I found this interview with Paris a lot more constructive.

Retiring Mozilla’s privacy-aware location service

Ian thinks: It’s sad news for a privacy service by Mozilla. Most major location services which end up inside other applications/services generally track the users. MLS went out of its way to minimise tracking, and now it’s going away.

Dodds is confused about SOLID, are you too?

Ian thinks: Although I’m less confused by SOLID, it’s worth reading the comments, which include an almost confession about leaning on the community.

Japan plans to restrict seniors at the cash point?

Ian thinks: When I first read this, I thought about what the UK does in this space. None of the UK measures use age; however, there are good arguments both ways in Japan. Anything that makes people stop and think is a very good thing, when you consider the way these scams work.


Find the archive here

Google apologizes again for biased results

Google was once again in hot water for its algorithm, which meant looking up “happy families” in image search would return results of happy white families.

Of course, the last time, Google Photos classified black people as gorillas.

Some friends have been debating this and suggested it wasn’t so bad, but it’s clear that after a few days things were tweaked. Of course, Google is one of many who rely on non-diverse training data and are likely coding their biases into the code/algorithms. Because of course getting genuinely diverse training data is expensive and time consuming; I guess in the short term so is building a diverse team, in their own eyes?

Anyway, here’s what I got when searching for “happy families” on Friday 2nd June at about 10pm BST.

Logged-in Google search for happy families
Logged into a Google account using Chrome on Ubuntu
Incognito search for happy families
Using incognito mode and searching for happy families with Chrome on Ubuntu
Search for happy families using a Russian Tor node and Chromium
Search for happy families using a Russian Tor node on Chromium on Ubuntu