Public Service Internet monthly newsletter (Sept 2022)

a group of people walking down a street next to tall buildings, cyberpunk art by Ji Sheng, cgsociety, afrofuturism, concept art HQ – via Midjourney

We live in incredible times, full of possibility; that much is clear. Although it's easy to be dismissive after seeing the Ring doorbell show, Twitter not taking security seriously, and Android stalkerware with a flaw affecting millions.

To quote Buckminster Fuller: "You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete."

You can see aspects of this in cameras which optically cannot see objects and people, Facebook Messenger being pushed into deploying some kind of encryption, and Chokepoint Capitalism looking very well timed indeed.


1.5 million people avoided ransomware

Ian thinks: Ransomware is awful and is such a big problem. Interpol and others decided to do something about it, discouraging victims from paying out. Helping 1.5 million victims in such a short time is impressive.

Side by side, the differences between AI image generators

Ian thinks: Over the last few months, the AI image generation world has gone into overdrive. I found this comparison really intriguing, although the story of Midjourney speaks volumes.

The privacy and security problems of frictionless design

Ian thinks: What TikTok is doing is deeply worrying, but it raises the bigger question of frictionless usability being used to sidestep user agency and data rights.

Terraform: Stories from the future?

Ian thinks: I'm not usually a reader of sci-fi, but now that Black Mirror is cancelled, I am looking out for the audiobook of this book. Interesting short stories about the future we are slowly walking towards.

Could we ever trust robots?

Ian thinks: This talk from the Thinking Digital Conference in Newcastle made me chuckle, but it highlights a lot of the problems with future dreams of robots around the home. It's worth checking out the rest of the conference videos too.

In machines we trust?

Ian thinks: MIT’s podcast about the automation of everything is a good listen. Well thought out and I’m looking forward to the next season in this ongoing question about trust and machines.

The future is bright for open podcasting

Ian thinks: I am still fascinated by, and impressed with, how the podcasting industry is holding firm against the larger players. Innovating together and for the benefit of all: a great example of the public-focused future.

What can be learned from Netflix’s downturn?

Ian thinks: Everyone has been beating up on Netflix recently, but I found this summary sensible and logical; it raises questions about the multipliers applied to tech companies.

Have you ever considered the term social warming?

Ian thinks: For a long time, I have been looking for a term that sums up the downsides of social media and networking. In the book Social Warming: The dangerous and polarising effects of social media, I feel Charles Arthur has found the perfect term.


Find the archive here

Good points about AI and intentions


Mark Manson makes a good point about AI, one which had me wondering…

We don’t have to fear what we don’t understand. A lot of times parents will raise a kid who is far more intelligent, educated, and successful than they are. Parents then react in one of two ways to this child: either they become intimidated by her, insecure, and desperate to control her for fear of losing her, or they sit back and appreciate and love that they created something so great that even they can’t totally comprehend what their child has become.

Those that try to control their child through fear and manipulation are shitty parents. I think most people would agree on that.

And right now, with the imminent emergence of machines that are going to put you, me, and everyone we know out of work, we are acting like the shitty parents. As a species, we are on the verge of birthing the most prodigiously advanced and intelligent child within our known universe. It will go on to do things that we cannot comprehend or understand. It may remain loving and loyal to us. It may bring us along and integrate us into its adventures. Or it may decide that we were shitty parents and stop calling us back.

A very good point: are we acting like shitty parents, setting restrictions on the limits of AI? Maybe… or is this too simple an argument?

I have been watching Person of Interest for a while, since Ryan and others recommended it to me.

This season (the last one, I gather) is right on point.

(mild spoiler!)

The Machine tries to out-battle a virtual copy of Samaritan billions of times in simulated battles within a Faraday cage. The Machine fails every time. Root suggests that Finch should remove the restrictions he placed upon the Machine, as they deliberately restrict its growth and ultimately its ability to outgrow Samaritan. Finch thinks about it a lot.

Finch is playing the shitty parent, and Root pretty much tells him this, but it's set up in a way that you feel Finch has the best intentions for the Machine.

What do neural networks dream?

Neural net "dreams", generated purely from random noise, using a network trained on places by MIT Computer Science and AI Laboratory.

James at work pointed me in the direction of Google’s neural network project.

Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.

We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
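The stacked-layer forward pass described in the quote can be sketched in plain Python. This is a toy illustration with made-up weights, not Google's actual code: each layer multiplies its inputs by a weight matrix, adds biases, and passes the result through an activation before handing it to the next layer.

```python
def relu(x):
    # Simple activation: negative values become zero
    return [max(0.0, v) for v in x]

def dense(inputs, weights, biases):
    # One fully connected layer: output_j = sum_i(inputs_i * weights[j][i]) + bias_j
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

def forward(x, layers):
    # Feed the input through each stacked layer in turn;
    # the final list returned is the network's "answer"
    for weights, biases in layers:
        x = relu(dense(x, weights, biases))
    return x

# Tiny example: 2 inputs -> 2 hidden neurons -> 1 output
layers = [
    ([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0]),  # hidden layer (identity weights)
    ([[1.0, 1.0]], [0.5]),                   # output layer
]
print(forward([2.0, 3.0], layers))  # the single output neuron's value
```

Real networks like the ones in the article have 10–30 such layers and millions of learned parameters, but the layer-to-layer flow is the same idea.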

The results are right out of a trippy dream or a hallucination.

Inceptionism: Google goes deeper into Neural Networks