Cybernetic Explanation, Purpose and AI:

In today’s post, I am continuing the theme of cybernetic explanation that I talked about in my last post – The Monkey’s Prose – Cybernetic Explanation. I recently listened to the talks given as part of the Tenth International Conference on Complex Systems. I really enjoyed the keynote speech by the Herbert A. Simon Award winner, Melanie Mitchell. She told the story of a project in which her student built an AI that could recognize, with good accuracy, whether or not there was an animal in a picture. Her student then dug deep into the AI’s model. An AI of this kind is taught to identify a characteristic by being shown a large dataset – in this case, pictures with and without animals – along with labels indicating which pictures have an animal and which do not. From this dataset, the AI constructs an algorithm. Correct answers reinforce the algorithm, while wrong answers tweak it, adjusting the weights in proportion to the “incorrectness”. This is very much like how we learn. What Mitchell’s student found was that the AI was assigning probabilities based on whether the background was blurry or not. When the background is blurry, it is more likely that there is an animal in the picture. In other words, the AI was not looking for an animal at all; it was just checking whether the background was blurry. Depending on the resulting statistical probability, it would answer that there is or is not an animal in the picture.
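To make the shortcut concrete, here is a minimal sketch of how such a “blur classifier” could work. This is my own illustration, not the student’s actual model: the blur measure (variance of the Laplacian), the synthetic images, and the single-feature logistic learner are all assumptions chosen only to show how a model can score well without ever looking for an animal.

```python
import numpy as np

rng = np.random.default_rng(0)

def blur_score(gray: np.ndarray) -> float:
    """Variance of the Laplacian - low values indicate a blurrier image."""
    g = gray.astype(float)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def make_image(animal: bool) -> np.ndarray:
    """Synthetic stand-in for a photo: 'animal' images get smoother texture."""
    detail = 0.05 if animal else 0.5
    return rng.normal(0.5, detail, size=(32, 32))

# A toy dataset where the blur/animal correlation holds.
labels = np.array([i % 2 == 0 for i in range(200)], dtype=float)
x = np.array([blur_score(make_image(bool(y))) for y in labels])

# One-feature logistic classifier. Wrong answers tweak the weight and
# bias in proportion to the error; correct answers barely change them.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted P(animal)
    err = p - labels                         # near zero when the answer is right
    w -= 0.05 * np.mean(err * x)
    b -= 0.05 * np.mean(err)

pred = (1.0 / (1.0 + np.exp(-(w * x + b)))) > 0.5
print(f"accuracy from blur alone: {np.mean(pred == labels):.0%}")
```

On this toy dataset, the blur rule alone reaches essentially perfect accuracy, which mirrors the story above: high accuracy does not tell us that the model has the concept “animal”, only that it found some feature correlated with the labels.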

We humans assign meaning to the AI’s output and believe that the AI is able to differentiate whether there is an animal in the picture or not. In actuality, the AI is merely using statistical probabilities based on whether the background is blurry or not. We cannot help but assign meanings to things. We say that nature has a purpose, or that evolution has a purpose. We assign causality to phenomena. It is interesting to ask whether it truly matters that the AI is not really identifying the animal in the picture. The outcome still has the appearance that the AI can tell whether or not there is an animal in the picture. We, however, are able to bring in concepts that the AI cannot. Mitchell discusses the difference between concepts and perceptual categories. What the AI constructs are perceptual categories, which are limited in nature, whereas what we construct are concepts that may be linked to other concepts. The example that Mitchell provided was that of a bridge. For us, a bridge can mean many things depending on the linguistic application. We can say that a person is able to “bridge the gap”, or that our nose has a bridge. The capacity of AI, at this time at least, is to stick to the bridge as a perceptual category based on the context of its data. We can talk in metaphors that the AI cannot understand. A bridge can be a concept or an actual physical thing for us. A simple task such as deciding whether there is an animal in a picture carries no risk. However, as we up the ante to a task such as autonomous driving, we can no longer rely on the appearance that the AI is able to carry out the task. This is evident in the morality and ethics debate regarding AI, and how it should carry out probability calculations in the event of a hazard – questions like those posed by the trolley problem.

This also leads to another idea that has the cybernetic explanation embedded in it: the idea of “do no harm”. The requirement is not specifically to do good deeds, but not to do things that will cause harm to others. As the English philosopher John Stuart Mill put it:

That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.

This is also what Isaac Asimov stated as the first of his Three Laws of Robotics in the 1942 short story “Runaround”:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The other two laws refer back to the first, forming a strict priority ordering (sketched in code after the list):

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
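Read as a specification, the three laws are not a set of independent rules but a lexicographic ordering: each lower law yields to the ones above it. Here is a minimal sketch of that ordering, assuming a hypothetical Action type whose boolean fields a robot’s perception would somehow have to supply – deciding those flags is, of course, the hard problem Asimov’s stories turn on.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical flags; nothing here is from Asimov beyond the laws' text.
    harms_human: bool              # First Law violation (by action)
    lets_human_come_to_harm: bool  # First Law violation (by inaction)
    disobeys_order: bool           # Second Law violation
    endangers_robot: bool          # Third Law violation

def violations(a: Action) -> tuple[bool, bool, bool]:
    """Lexicographic penalty: the First Law outranks the Second,
    which outranks the Third."""
    return (a.harms_human or a.lets_human_come_to_harm,
            a.disobeys_order,
            a.endangers_robot)

def choose(candidates: list[Action]) -> Action:
    # Tuples compare left to right, so a First Law violation is never
    # traded away for obedience or self-preservation.
    return min(candidates, key=violations)

# Disobeying a harmful order beats obeying it - the Second Law's
# "except where such orders would conflict with the First Law".
obey_and_harm = Action(True, False, False, False)
refuse_safely = Action(False, False, True, False)
assert choose([obey_and_harm, refuse_safely]) is refuse_safely
```

The interesting part is the ordering itself rather than any single rule: the “except where” clauses are what turn three statements into one coherent control scheme.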

The idea of cybernetic explanation gives us another perspective on purpose and meaning. Our natural disposition is to assign meaning and purpose, as I indicated earlier. We tend to believe that Truth is out there, or that there is an objective reality. As the great cybernetician Heinz von Foerster put it – “The environment contains no information; the environment is as it is”. Truth, or any description of reality, is our creation, made with our vocabulary. And most importantly, there are other beings describing realities with their vocabularies as well. I will finish with some wise words from Friedrich Nietzsche:

“It is we alone who have devised cause, sequence, for-each-other, relativity, constraint, number, law, freedom, motive, and purpose; and when we project and mix this symbol world into things as if it existed ‘in itself’, we act once more as we have always acted—mythologically.”

Please maintain social distance and wear masks. Stay safe and always keep on learning…

In case you missed it, my last post was The Monkey’s Prose – Cybernetic Explanation:
