I’ve been interested in (and increasingly fearful of) artificial intelligence for about the past year. I have followed Stephen Hawking’s concerns about it with interest. Some time ago I read The Fear Index by Robert Harris, and it was chilling. I don’t think I’m just tickling my personal fears by looking into this stuff. My most recent read was Robopocalypse by Daniel H. Wilson.
This novel tells the tale of a worldwide war between machines and humans that breaks out after a scientist unwittingly releases an artificial intelligence entity. The setting is some unspecified time in the future in which smart robots abound: smart cars, robotic military units, household servants, and – of course – smart “chips” in almost everything else. The entity escapes the lab, takes over all these machines, and then uses them to attempt to eliminate the human race.
It’s not the best story I’ve read, but for most of the book it was fairly gripping. At times the author is wordier than necessary. The characters are developed adequately, but not powerfully. Most of the scenarios are reasonably plausible, but in a few places you have to stretch your imagination to go along with the story.
The most interesting aspect of this topic is the way it provokes thinking about the learning process. What does it mean to learn? What causes learning? What limits learning? Of course, as a teacher and as a teacher-educator, these questions and issues are at the center of what I do for a living. But even so, they are such profound questions, they are worth pondering.
If we ever do invent machines that are capable of learning, we had better pay attention to putting limitations on them. Unlimited learning potential in a machine is truly frightening. The people who say this is nothing to fear assert that there is no reason to believe machines would ever have evil intent. But I say they wouldn’t need evil intent; the mere absence of a moral imperative would be enough to cause unimaginable harm.
Here’s a simple example of the direction things are likely to proceed. I heard a couple of weeks ago that technology is now available that allows vending machines to employ face-recognition software. The vending machine can connect the face recognition to a database that tells it whether you have health issues (diabetes, obesity, etc.) that should dictate limits on your intake of junk food. If the machine decides that you’ve had enough, it won’t permit you to make a purchase. The story I heard is that this is NOW available and is being piloted in Europe.
Now, carry this line of reasoning to a logical conclusion. Imagine machines that are smart enough to “watch out” for us. Imagine a world in which no one is permitted to engage in any risky or potentially harmful behaviors. At first blush this may seem to some like a utopia. But for those of us who have been brought up to value liberty, this is an absolute nightmare.
I suspect that this is just the beginning. I imagine that a limitless artificial intelligence will determine that the human race, due to our sinful and fallen nature, warrants only extinction. Ultimate power, absent the love of God, will surely result in our annihilation.
I hope I’m wrong.