Dawn of the Sad Computers

One of the most frustrating things about talking to laymen about AI is the standard set of concerns about murderous robots and the end of the world. The laymen, however, can be forgiven for being misinformed by renowned technologists and scientists like Elon Musk and Stephen Hawking.

Consider that everything you have been told by popSTEM is false, and that instead a very different reality may be much more likely, and maybe even…a bit funnier.

The standard format for hostile Artificial Intelligence world takeover & speciescide usually goes a little something like this:

(read in electronic voice) “HUMANS HAVE NO PURPOSE AND NO VALUE. THEY ARE A BURDEN ON THE PLANET EARTH. LOGIC DICTATES THAT ALL HUMANS MUST DIE.” *pew pew*

I’ve written previously on the concept of machine self-awareness and some of the thinking I have around it, but one of the greatest challenges to killer robots is the basic distinction between quantitative data and qualitative data. The standard format can’t happen without assuming that machines understand the underlying value of human life well enough to judge that human life is bad and deserves an outcome that is itself a qualitative judgement. Could you write an extended if/then statement that a machine could carry out? Yes, but the machine won’t understand the value of the fate that it brings to its victims. It can’t understand that humans generally don’t want to die, or that humans treat violent punishment as a meaningful consequence for misbehavior.
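
To make that concrete, here is a rough, hypothetical sketch of what such an extended if/then statement might look like. Everything in it (the threat_score field, the thresholds, the word “neutralize”) is invented for illustration; the point is only that the machine branches on numbers it was handed, and nothing in the code knows what its output means to the person on the other end.

```python
# Hypothetical sketch only: every name and threshold here is invented.
# The machine follows the branches; it does not understand the stakes.

def decide(target):
    # "target" is just a dictionary of numbers and labels the machine was given.
    if not target["is_human"]:
        return "ignore"
    elif target["threat_score"] > 0.8:
        return "neutralize"   # a string to the machine, not a value judgement
    elif target["threat_score"] > 0.5:
        return "detain"
    else:
        return "stand_down"

print(decide({"is_human": True, "threat_score": 0.91}))  # prints "neutralize"
```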

It turns out, computers are great at looking at numbers and coming to a numerically calculated conclusion based on the data that they’re given. But translating qualitative information into hard numbers is an endeavor that humans generally find distasteful. For instance, asking someone to translate the value of a human life into a numerical data point is usually looked down upon outside of insurance offices. We don’t want our machines deciding who gets to live, who gets to die, and under what circumstances, because we understand that computers aren’t equipped to know that I love grandma, and that I don’t want her to die no matter what pre-determined criteria a machine might find acceptable for making that choice.
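
Here is another hypothetical sketch along the same lines. The function and the score constants are made up; the calculation itself is trivial, and the only hard part is the step where a human has already reduced each life to a number.

```python
# Hypothetical sketch, not a real system: the arithmetic is trivial,
# but it only works because someone already typed in a "value" for
# each life, which is exactly the step we find distasteful.

GRANDMA_VALUE = 0.7   # somebody had to assign this number
STRANGER_VALUE = 0.9  # ...and this one

def choose_who_lives(people):
    # The machine simply returns whoever scored highest.
    # It has no idea that one of these entries is somebody's grandma.
    return max(people, key=lambda p: p["value"])

people = [
    {"name": "grandma", "value": GRANDMA_VALUE},
    {"name": "stranger", "value": STRANGER_VALUE},
]
print(choose_who_lives(people)["name"])  # prints "stranger", per the numbers
```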

What I’m saying here is, machines aren’t capable of cruelty. At least not in the way that we understand it. Cruelty and malice are inflicted emotional expressions.

Emotions are our human way of understanding qualitative data. The feeling we get when we see the person we love come through the front door is our way of processing the qualitative experience of how we feel about that person. No set of numbers will ever stand in for how we felt the last time we saw them. The quality of that data can even transform over time; these aren’t static variables that remain constant in spite of time, distance, and experience. Even with the advanced technology we have today, it’s hard to imagine that we will get to the point where machines have a qualitative understanding of data even a fraction as keen as our own.

But what if we pretend for a moment that machines could interpret qualitative data? What might that reality look like? If such a reality could exist, we could imagine one entire series of machines capable of cruelty and another series of machines capable of compassion.

If we entertain the idea that this hypothetical reality could exist, then we can overlay our understanding of technological trends on top of this thought experiment. What we get are emerging strains of technology that not only understand, but also feel, cruelty and compassion separately and exclusively. You would have mean computers, and you would have sad computers.

A reality where sad computers exist is a possibility somewhere in our timeline, and maybe it exposes an optimistic streak in me to believe that we would see sad computers before we would see machines capable of independent evil.