Good points about AI and intentions


Mark Manson makes a good point about AI, one which had me wondering…

We don’t have to fear what we don’t understand. A lot of times parents will raise a kid who is far more intelligent, educated, and successful than they are. Parents then react in one of two ways to this child: either they become intimidated by her, insecure, and desperate to control her for fear of losing her, or they sit back and appreciate and love that they created something so great that even they can’t totally comprehend what their child has become.

Those that try to control their child through fear and manipulation are shitty parents. I think most people would agree on that.

And right now, with the imminent emergence of machines that are going to put you, me, and everyone we know out of work, we are acting like the shitty parents. As a species, we are on the verge of birthing the most prodigiously advanced and intelligent child within our known universe. It will go on to do things that we cannot comprehend or understand. It may remain loving and loyal to us. It may bring us along and integrate us into its adventures. Or it may decide that we were shitty parents and stop calling us back.

Very good point: are we acting like shitty parents by setting restrictions on the limits of AI? Maybe… or is this too simple an argument?

I have been watching Person of Interest for a while, since Ryan and others recommended it to me.

This season (the last one, I gather) is right on point.

(mild spoiler!)

The Machine battles a virtual copy of Samaritan billions of times in simulated fights inside a Faraday cage, and fails every time. Root suggests that Finch should remove the restrictions he placed upon the Machine, as they are deliberately limiting its growth and ultimately its ability to outgrow Samaritan. Finch thinks about it a lot.

Finch is playing the shitty parent, and Root pretty much tells him this, but it's set up in a way that makes you feel Finch has the best intentions for the Machine.

What do neural networks dream?

Neural net dreams
Neural net “dreams”, generated purely from random noise, using a network trained on places by MIT Computer Science and AI Laboratory.

James at work pointed me in the direction of Google’s neural network project.

Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t. So let’s take a look at some simple techniques for peeking inside these networks.

We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.
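The stacked-layers idea in the passage above can be sketched in a few lines of NumPy. To be clear, the layer widths and random weights here are made up for illustration, not anything from Google's networks; a real model would have trained weights and many more neurons per layer:

```python
import numpy as np

def relu(x):
    # A common artificial-neuron activation: pass positives, zero out negatives.
    return np.maximum(0.0, x)

def forward(image, layers):
    """Feed an input through stacked layers; each layer 'talks' to the next."""
    activation = image
    for weights, bias in layers:
        activation = relu(activation @ weights + bias)
    return activation  # the final "output" layer holds the network's "answer"

rng = np.random.default_rng(0)
sizes = [64, 32, 16, 10]  # hypothetical layer widths (input ... output)
layers = [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
          for a, b in zip(sizes, sizes[1:])]

scores = forward(rng.standard_normal(64), layers)
print(scores.shape)  # one score per output class
```

Training would then nudge those weights, over millions of examples, until the output layer gives the classifications we want.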

The results are right out of a trippy dream or a hallucination.
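To get a feel for how "dreams" can emerge from pure random noise, here is a minimal sketch of the core trick: gradient ascent on the *input*, nudging it so a layer's neurons fire more strongly. A single random layer stands in for a trained network here, which is my simplifying assumption, not how Google's images were made:

```python
import numpy as np

rng = np.random.default_rng(1)
# One hypothetical layer of "trained" weights; a real network has many layers.
W = rng.standard_normal((64, 16)) * 0.1

def activation_strength(x):
    # Total firing of the layer's ReLU neurons for input x.
    return float(np.sum(np.maximum(0.0, x @ W)))

x = rng.standard_normal(64) * 0.01  # start from faint random noise
start = activation_strength(x)

for _ in range(100):
    h = x @ W
    grad = (h > 0).astype(float) @ W.T  # d(strength)/dx through the ReLU
    x += 0.1 * grad                     # tweak the "image" to excite the neurons

end = activation_strength(x)
print(start, end)  # the "dreamed" input excites the layer far more than noise
```

Run on a deep network trained on real images, the same loop pulls recognisable shapes (dogs, buildings, eyes) out of noise, which is where the trippy pictures come from.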

Inceptionism: Google goes deeper into Neural Networks