Missing the Point – ‘Now I am become Death, the destroyer of worlds’
The quote in the headline is from the Bhagavad Gita, a sacred Hindu scripture. Famously, the Manhattan Project’s J. Robert Oppenheimer said the line came to mind on July 16, 1945, after he witnessed the first test of the atomic bomb, a weapon the United States was the first to develop and deploy.
The United States would, soon after, carry out the first and, so far, only use of nuclear weapons in the history of our planet, killing an estimated 199,000 people in two Japanese cities, almost none of whom were military combatants. By today’s standards, these first nuclear weapons are considered small, “low-yield” bombs.
Without being overly dramatic, let me now do my best to explain the relevance of the headline.
When I was a kid and nuclear weapons were big in the news, we were taught in public school to hide under our desks in the event of a nuclear attack – as if that would have done any good. It was my first experience with what remains an existential threat to the continuation of life as we know it and to our species. We may be at the top of the food chain, but we are not indestructible. In the greater expanse and history of the universe, we humans are little more than intelligent gnats, the species du jour on a small planet in the middle of nowhere.
Since then, we can add the climate crisis to the list of existential threats we are facing. It’s a real and very, very serious problem that has the potential to be the end of us.
The internet, with all its advantages, is threatening to make the concept of “truth,” a principal element of the foundation of civilization, meaningless. Where we go after that happens is anybody’s guess, but chaos will certainly be involved.
We’ve flirted with the possibility of mass annihilation via a contagion such as COVID-19, but then science, in the form of mRNA technology, saved us – for now.
And as if that wasn’t enough, last Sunday happened. Sitting there on my couch with a small bowl of popcorn, I turned on CBS’ “60 Minutes” and watched the segment titled, “Exploring the human-like side of artificial intelligence at Google.” This is “must-see TV” if ever there was such a thing. The link I just gave you is to a twenty-seven-minute video of the entire segment and well worth your time.
At the core of this “60 Minutes” segment is a series of conversations between senior Google people and CBS’ Scott Pelley. Why focus on Google, with only a mention of Microsoft, the other huge American corporate developer of artificial intelligence? Because, while Microsoft’s very capable AI is linked to its search engine, Google’s AI, called “Bard,” is more advanced. Google’s AI has, in effect, graduated, having read literally everything on the internet and, in doing so, taught itself how to pretend to act human. If you talk to Bard and ask it for something, it answers you without going online.
Is Bard, in fact, just acting human as Google’s people insist is the case? Or is it well on its way to becoming a sentient entity that can think and experience feelings like we do?
To their credit, these executives admit to believing that Bard and its relatives and descendants may very well be sentient one day in the not-too-distant future. In the meantime, they’re trying to figure out exactly how Bard works. That’s right. We’ve created something – not a toaster oven, but something profound – that we don’t fully understand and that, in many respects, is already smarter than we are.
They’ve programmed Bard to be self-learning, that is, curious without our prompting it to be so, and fully able, on its own, to satisfy that curiosity. It demonstrates creativity and yet its creators assure us that it isn’t self-aware. They tell us that it only acts like it’s thinking when it really isn’t, as if that makes any difference.
That’s right. They’re the ones who programmed it, but it’s exhibiting behavior its creators didn’t anticipate. Those of you reading this who are parents will understand that experience. In general, these behaviors are referred to as “emergent properties.” Really? “Emergent” toward what? What are Bard and other AIs like it in the process of becoming? In one specific instance, for example, when asked to write a paper on the economic effects of inflation, Bard responded with an apparently flawless treatise, including five seemingly authentic references. All five of them were fake. It’s an example of what the AI developers call “hallucinations.” How cute is that terminology? Do they know why or how Bard is faking its references? No. They’re working on it, but no. Not yet.
Mind you, this AI is not about having a faster personal or corporate computer or more efficient access to the internet at higher download speeds. No, no. This is way, way beyond that.
They have, in effect, given birth to a new species, like us, its creators, but different. A species able to think, according to the CBS segment, more than 100,000 times faster than we can, and able to instantly recall the sum of all the knowledge we have accumulated to date. A species that never eats or sleeps. Perhaps most importantly, Bard and other AI entities are not currently inhibited by the behavioral constraints that eons of evolution and social development have embedded in our thinking. This last point is what AI developers call the “alignment problem”: the AI entity is not yet aligned with the social and moral thinking of the collective biological species that created it. “Alignment problem” is a term of art that grossly understates the severity and implications of the gap it is meant to describe.
So, Google has released an early version of Bard while disclosing that it is holding back more advanced generations of the entity. God knows what those can do. Google executives freely admit that AI of this power stands to profoundly impact every aspect of our economies and our lives in general, but they offer no specific idea of where that historic involvement might lead.
Notwithstanding the calm, studied tone of the Google executives, the breakthrough development of artificial intelligence at this level is NOT an academic exercise. They point out that we’ll need guidelines and laws to limit the application of this technology. Really? By the likes of our Congress and other government agencies and officials? By the same at-war-with-itself House and Senate that can’t agree on simple, common sense budgetary issues? By elected officials like Donald Trump or Ron DeSantis?
When Scott Pelley asked Google CEO Sundar Pichai, “You don’t fully understand how it works?” Mr. Pichai’s response was less than reassuring. “I don’t think we fully understand how the human mind works either.” The problem with that answer is that it completely overlooks how we humans benefit from very long-term evolution – and how relatively inept we are as individuals and even collectively compared to the overwhelming power of just a single AI entity like Bard.
Just because you can do something, doesn’t make it a good idea to do it. Is advanced AI one of those things? Maybe. Probably, but there’s no stopping it now. Development of this technology is inevitable. If not by us, by someone else – including our adversaries – whose work product may already be as advanced as ours, or more so. If this logic sounds familiar, it was the same thinking that created the Manhattan Project and continues to fuel the arms race.
What we can and should do without delay is mandate that AI like Bard – for at least the time being – be treated like the newly discovered, potentially lethal virus that it is. Require that the same exceptionally talented coders who invented it create a closed and carefully controlled laboratory environment in which its development can safely continue – until we fully understand what we’ve done. This is not some theoretical exercise. To be any less careful is missing the point.
Right now, the AI entity’s principal limitation may be only that it’s still in its infancy and has much more to learn – as do we about what the capabilities and inclinations of the adult version might be.
And it needs computing power that today occupies huge buildings and that only a few large corporations and government entities can afford. Can you imagine a world in which the same technology is compressed into something, I don’t know, the size and cost of your smartphone?
Don’t share my concerns? Watch the “60 Minutes” segment for yourself and get back to me by leaving a comment at the Baltimore Post-Examiner – while, as Scott Pelley told us at the end of his piece, you can still be sure its content is being written by humans. Of course, maybe this is already just one of those AI “hallucinations” we’ve talked about. It’s hard to tell.
Les Cohen is a longtime Marylander, having grown up in Annapolis. Professionally, he writes and edits materials for business and political clients from his base of operations in Columbia, Maryland. He has a Ph.D. in Urban and Regional Economics. Leave a comment or feel free to send him an email at [email protected].