Recently there has been a good deal of talk about Artificial Intelligence (AI) bringing about the final end of humankind. Luminaries such as Stephen Hawking, Steve Wozniak, Bill Gates, and Elon Musk suggest that man will soon invent what is called “Strong AI” and that it will be the end of humankind.
I personally don’t believe it.
First of all, this is all too much like the techno-driven nightmares of the twentieth century. With the industrial revolution having fully taken hold, it seemed man was always inventing better ways to kill his own kind, and that inevitably the tools of total human destruction would be in hand. That threshold was likely reached with the thermonuclear device, which, in quantity, could conceivably render the planet uninhabitable. Waiting in the wings were new terrors as well, such as biologically engineered disease, pollution-driven global warming, and a host of other ways to compromise the biosphere we depend on for life.
On a personal level, I’d place bets on these more proven forms of deadly technology before I worry about my iPhone turning on me. After all, the devastating power of thermonuclear weapons goes largely unquestioned after the demonstrations of the ’50s, ’60s, and ’70s. Coupled with the very real, if only theoretically demonstrated, concept of nuclear winter and the other devastating aftereffects of a mass nuclear exchange, it simply seems a more plausible way for the world to come to an end. Bioengineering also has a distinctly higher plausibility, since we have experienced the devastating effects of plague as a species. The Black Death left such an emotional scar on our race that we still sing about it in nursery rhymes hundreds of years later: “Ring around the rosy, a pocket full of posies…” recalls the fear of the first signs of the plague and the idea that fresh flowers in your pocket might ward off its terrible effects.
Artificial Intelligence, on the other hand, seems a much feebler threat. For example, today we are surrounded by natural animals with varying degrees of intelligence, yet as apex predators we feel little threat. And today’s Artificial Intelligence is seldom credited with being much smarter than a mouse. The runaway scenario in which AI somehow gets away from us, expands its own consciousness, and throttles the life from our species seems strange, if not altogether implausible. Modern computer architectures, as staked out by Alan Turing and John von Neumann, are ultimately deterministic. In other words, they do what they are told. Make no mistake, we can wreak a lot of havoc on ourselves by mis-programming a computer; however, the computer is only doing what we told it to do. There is no creative spark, no “will to do evil” that a deterministic device can summon of its own volition. It can only inflict on us what we will it to do. In a sense, we should fear our own kind more than a soulless collection of electronic components.
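To make the point concrete, here is a minimal sketch; the function name and the command string are purely illustrative, not any real system’s API. A conventional program, given the same input, produces the same output every time, with no room for spontaneous deviation.

```python
# A deterministic program: identical inputs always produce identical outputs.
def respond(command: str) -> str:
    # The machine maps input to output exactly as instructed; it cannot
    # "choose" to answer differently of its own volition.
    return f"Executing: {command}"

# Run the same command thousands of times; the set of distinct
# answers never grows beyond one.
results = {respond("launch probe") for _ in range(10_000)}
assert len(results) == 1
```

Any apparent unpredictability in real software comes from inputs we did not anticipate, such as timing, sensor noise, or a random seed, not from the machine forming intentions of its own.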
The next barrier to Strong AI threatening us is whether it is even possible, given our current understanding of how human consciousness works and how the machines we make might attempt to emulate it. At the end of the day, the question becomes: can a machine of our devising, with our current understanding of electronics, ever become conscious?
The key here is that I believe human consciousness implies a sense of free will and the ability to exercise it. There are a good number of reasons to believe this is true, rather than consciousness and free will being mere epiphenomena of electrochemical reactions in the cells of our brains. If we were to assume that human consciousness lacks free will, it would raise a host of new problems around good and evil, crime and punishment, altruism, and many other distinctly human behaviors. These issues are probably worthy of a philosophical essay in their own right, but when it comes to AI destroying humankind, it is probably axiomatic that such an AI would be acting upon some form of consciousness and will.
And this becomes the problem for computing machines. Free will by definition must incorporate non-deterministic behavior, or it isn’t free will at all, merely obedience to the will of the programmer as defined in the machine’s algorithms and supplied data. The mathematician Roger Penrose often refers to this exercise of non-deterministic free will as intuition. He points to Gödel’s Incompleteness Theorem to suggest that there must be an incompleteness in our understanding of consciousness as it relates to computers, and that this incompleteness cannot be resolved in a purely algorithmic way. This essentially throws into doubt whether a Turing machine implemented on a von Neumann architecture can ever really be conscious. Penrose’s argument, best framed in The Emperor’s New Mind (1989), suggests that a conscious, self-aware, or even “spiritual” computer is, at present, quite unlikely. (Penrose does allow that advances in quantum theory might ultimately unlock some of the mystery of consciousness, but to date his theories remain speculative.)
So although some of the super-rich technologists entertain the press with prophecies of doom, to me the old villains still look more frightening. This is a retelling of Frankenstein with transistors. If the end comes by our own hand, I suspect the hangman will still have a human face.