Artificial Intelligence could spell the end of the human race
A growing number of the world’s most prominent scientists are voicing warnings about artificial intelligence and the threat, possible if not probable, that it poses to humankind.
Stephen Hawking recently wrote in the Huffington Post that AI “could spell the end of the human race.” Hawking speaks through a new artificial intelligence (AI) enhanced vocal synthesizer, which runs on Intel hardware using predictive software developed by SwiftKey, the maker of the sometimes eerily accurate predictive keyboard used on millions of cellphones.
Elon Musk, the entrepreneur behind the Tesla electric car and a would-be privately funded astronaut, speaking at the MIT Aeronautics and Astronautics department’s Centennial Symposium, likened the creation of AI to “summoning the demon.”
Didn’t Work Out
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out,” he said.
Clive Sinclair, a noted British inventor, told the BBC, “Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive.”
Even Bill Gates, who did so much to lay the groundwork for modern computing, and whose Microsoft has a division, said to consume a quarter of the company’s research attention and funds, furiously working on the development of intelligent machines, wrote: “First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
More than two decades ago such warnings were already being issued by far-seeing scientists like Vernor Vinge, a professor of mathematics at San Diego State University and a popular science fiction writer, whose 1993 paper “The Coming Technological Singularity” gave the phenomenon its name, a term later popularized by futurist Ray Kurzweil, now a director of engineering at Google. Vinge used the hockey stick metaphor for the growth of computing power: plotted on a chart, the curve begins with a long, gradual incline, then hits the upright of the stick and climbs ever more steeply until it is rising nearly straight up.
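The shape of that curve is easy to make concrete. Here is a minimal Python sketch, with an assumed two-year doubling period standing in for Moore’s-law-style growth; the numbers are purely illustrative, not a measurement of real hardware:

```python
# A toy illustration of Vinge's "hockey stick": steady exponential
# growth in computing power looks nearly flat for decades, then
# appears to shoot almost straight up. The two-year doubling period
# is an assumption, not a claim about real hardware.

DOUBLING_PERIOD_YEARS = 2  # assumed doubling time


def compute_power(years_elapsed: float, base: float = 1.0) -> float:
    """Relative computing power after a given number of years."""
    return base * 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)


if __name__ == "__main__":
    for year in range(0, 61, 10):
        print(f"year {year:2d}: {compute_power(year):>15,.0f}x baseline")
```

Run it and the first few decades look unremarkable; by year sixty the machine is roughly a billion times more powerful than the baseline. That sudden verticality is the upright of the stick.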
Once the curve goes vertical, the singularity occurs, though Vinge predicted a rather different path to machine consciousness than those who are building computers to achieve AI. He noted that the Internet is structured like the brain, a vast neural net whose “neurons” are every device connected to it, and suggested that once the singularity was reached, the network might simply “wake up.” The newborn machine entity would then begin optimizing itself, writing code at the speed of light to enhance its intelligence, and we humans would be at its mercy.
The researchers mentioned above are taking the other route, building computers from the ground up that are capable of achieving AI.
Nearer Than You Know
We are much closer to machine intelligence than is commonly acknowledged, when the subject gets mentioned at all. After all, people don’t lie awake at night worrying about the potential threat of machine intelligence. But the idea, and how we would respond to it, is a meme increasingly appearing in popular culture. There have been three recent movies on the topic: Her, Chappie, and Ex Machina. The first two depict benevolent conscious machines, the first an operating system, the second a robot. A robot features in Ex Machina as well, but one not so kindly disposed toward man.
AI systems are already in use today. Tufts University biologists created an AI system that, for the first time, reverse-engineered the regeneration mechanism of planaria, the small worms whose power to regrow body parts has made them a research model in human regenerative medicine.
The discovery represents the first scientific model of regeneration produced by a nonhuman intelligence, solving a riddle that had eluded human scientists for a century.
In his book Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat posits that an AI would likely turn on its creators to secure limited resources for self-preservation and future growth. A self-aware, artificially conscious machine might expand “its idea of self-preservation to include proactive attacks on future threats,” namely, us. He writes that “without meticulous, countervailing instructions, a self-aware, self-improving, goal-seeking system will go to lengths we’d deem ridiculous to fulfill its goals.” Those lengths might include commandeering all the planet’s energy supplies to ensure future growth.
Once we approach the singularity, and intelligent systems begin to misbehave, it’s unlikely we’d just ban further work in this area. As Vinge notes, “passing laws, or having customs, that forbid such things merely assures that someone else will.”
Number One Risk
Microsoft and Google are heavily investing in AI, despite all the recent warnings from dozens of prominent scientists and technology research foundations. Shane Legg, the co-founder of DeepMind, an AI company acquired by Google, recently issued his own ominous prediction: “Eventually, I think human extinction will probably occur, and technology will likely play a part in this.” He singled out AI as “the number one risk for this century.”
Google, dead set on being first with a super-intelligent AI, is also taking precautions. When it purchased DeepMind, it was with the stipulation that the work be conducted safely. Google responded by creating an AI ethics board. The board is tasked with considering the uses to which the AI is put, as well as the ethical standards by which it operates. But this is no simple task. It raises questions of what is and is not ethical. Putting smart machine brains in robot soldiers means they must, like their human counterparts, be willing and able to kill. But is it more ethical to put robots in harm’s way than to allow human soldiers to die on the battlefield?
Whose Ethics?
Other tough ethical issues include AI developed to do jobs that would put millions out of work, and facial recognition routines that threaten our current expectations of privacy. Driverless cars, which are much closer to deployment than most people realize, must depend on their AI drivers to make ethical decisions, such as whether it is better to crash into a school bus or to endanger the lives of the passengers in the car itself.
Another quandary is that one person’s ethics are anathema to another’s. Which group will decide an AI’s ethical subroutines? Academics? Clergy? Whoever is chosen, their decisions will affect the rest of us just as much as the deployment of AI systems in homes and the workplace.
These questions go to the heart of debates humans have pondered for centuries, debates often decided by force of arms.
And of course, there’s that old saw that letting the genie out of the bottle is much easier than putting him back in.
Isaac Asimov, the classic science fiction writer whose works often dealt with robots, bound the actions of his super-intelligent, immortal robots with three laws: “A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
Of course, there’s no guarantee that an AI capable of writing its own code and increasing its own intelligence would not circumvent these or any other guidelines written into its guiding algorithms. In Asimov’s books, the robots do find ways around the Three Laws.
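To see why fixed rules are such a thin defense, it helps to notice that guidelines like Asimov’s amount to a fixed-precedence filter over proposed actions. Here is a toy Python sketch; the Action fields and sample actions are hypothetical, invented purely for illustration, and real machine ethics is of course not reducible to three if-statements:

```python
# A toy sketch of Asimov's Three Laws as a fixed-precedence filter.
# The Action fields are hypothetical, invented for illustration.

from dataclasses import dataclass


@dataclass
class Action:
    description: str
    harms_human: bool = False       # would doing this injure a human?
    allows_harm: bool = False       # would doing this let a human come to harm?
    ordered_by_human: bool = False  # was this action commanded by a human?
    endangers_robot: bool = False   # does it risk the robot's own existence?


def permitted(action: Action) -> bool:
    # First Law: never injure a human or allow a human to come to harm.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders; subordinate to the First Law above.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot


print(permitted(Action("fetch coffee", ordered_by_human=True)))  # True
print(permitted(Action("push a human aside", harms_human=True,
                       ordered_by_human=True)))                  # False
```

The obvious catch, and the one Asimov’s plots turn on, is that a machine able to rewrite its own code can also rewrite, or simply route around, the permitted() check.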
This is not the first time scientists have pressed ahead with technology that had the potential to destroy humanity. When the Manhattan Project, which created the first nuclear bomb, tested its first weapon, some of its scientists admitted they could not be certain the fission reaction would stay confined to the uranium in the bomb. There was a chance, they worried, that the reaction might spread to the air and earth and burn the whole planet to cinders. But a war was on, the US government was racing the Germans to this weapon from hell, and such warnings went unheeded by the War Department and the President.
There is a parallel here. AI could usher in a Golden Age for humanity, or exterminate it. In any event, AI will increasingly be a part of our daily lives, even as it poses substantial risks to our existence as a species.
From David Bowie’s song “Oh! You Pretty Things”:
What are we coming to
No room for me, no fun for you
I think about a world to come
Where the books were found by the Golden Ones
Written in pain, written in awe
By a puzzled man who questioned
What we were here for
All the strangers came today
And it looks as if they’re here to stay
Look at your children
See their faces in golden rays
Don’t kid yourself they belong to you
They’re the start of a coming race
The earth is a bitch
We’ve finished our news
Homo sapiens have outgrown their use
Let me make it plain
You gotta make way for the Homo superior
Paul Croke, former newspaper editor and longtime Washington DC area freelance writer, has loved gadgets and consumer electronics since he saw his first Dick Tracy watch. He writes about consumer technology.