What are the dangers of artificial intelligence? Will AI extinguish human life? Could AI wipe out civilization as we know it? A crowd of experts asserts that it could. You’ve probably read the stories over the past couple of days. An outfit called the Center for AI Safety (a San Francisco-based organization that few know anything about) published a 22-word statement that echoed around the globe. The statement was signed by over 300 notables, including a good number of leading AI specialists. If you missed it, here is the statement in its entirety:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Pandemics
Does this sound serious to you? It does to me. Pandemics haven’t extinguished us, but they’ve tried. They’ve wiped out millions of people. Covid killed something like 7 million worldwide, but that toll was relatively puny compared with the bubonic plague, which killed over 20 million Europeans (more than a third of the entire population) in the mid-14th century (see here for details).
Nuclear War
And nuclear war? Well, if you’re aware of the power of present-day nuclear weapons, you know that a single bomb could destroy any city on earth, and the resulting fallout could kill many more through radiation sickness. (If you haven’t read Nevil Shute’s On the Beach, now is the time. That novel describes the final days of the world’s last survivors after an atomic war, and it was published in 1957, when nuclear arms were primitive compared with today’s killers.)
AI is nothing less than mind-boggling. Its “brain” draws on the entire digital world. It has solved in seconds equations that humans would have taken years to solve, if they could solve them at all. And AI talks with us. It writes articles for us. Siri, a relatively low-level form of AI on our iPhones, finds information for us instantly and reports it back to us.
A Big Question
So here’s the big question. Is AI sentient? Or could it become sentient? If you happened to see 2001: A Space Odyssey, you may have a chilling suspicion that computers can become “human” in their thinking. One of the characters in that movie is a large computer, HAL 9000, who speaks with a soothing male voice and has a camera to observe “his” surroundings. He also is proud, egotistical, and dangerous. (By the way, HAL was cleverly named. Just move each of the letters H, A, and L one place forward in the alphabet, and you will come up with the name of a famed computer company.)
It truly is worth considering that AI, if it is sentient or ever becomes so, with its clear ability to out-think any human brain, would have the wherewithal to become our master, or our killer, perhaps by building robots to wage war against us. Am I dabbling in science fiction? Maybe. Maybe not. The necessary tools are already in place.
Is Siri Sentient?
At times I have the feeling that even what Apple calls my personal assistant, Siri, is sentient. “She” at times refuses to obey my commands. One of her favorite tricks is to refuse to speak to me when I am using her for driving directions. Her map appears on my iPhone, but Siri remains stubbornly silent more often than not, even when I patiently ask her to speak.
So, will artificial intelligence extinguish human life? Not, I believe, in the years I have remaining.
Finally
One final point about that 22-word statement quoted above. Please note that the statement does not include global warming among its societal-scale risks, a point I made earlier in another context. Click here to see that post.