Breakpoint: The promises and perils of artificial intelligence

File Photo / In sci-fi and horror movies, the “mad scientist” rarely begins as a villain.

In sci-fi and horror movies, the "mad scientist" rarely begins as a villain. From Dr. Frankenstein to Spider-Man's Doc Ock, they are often the victims of a combination of good intentions, unstoppable curiosity and more than a little hubris. Their plight is as familiar in real life as on screen, most recently with artificial intelligence.

According to the authors of "The Techno-Optimist Manifesto," who heavily borrowed from fantasy-genre language to predict a high-tech future, "We believe Artificial Intelligence is our alchemy, our Philosopher's Stone — we are literally making sand think. ... We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder."

Ray Kurzweil is a scientist and futurist who has for years predicted the trajectory of advanced technology, casting it not just as a helpful set of tools for humans to use but as essential to post-human evolution. By hitching our humanity to artificial intelligence in an event he calls "the Singularity," Kurzweil prophesies a new age:

"And this Singularity isn't far off," he says. "I set the date for the Singularity — representing a profound and disruptive transformation in human capability — as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today."

Kurzweil sees the Singularity as more than a possibility. He thinks it is a near-absolute inevitability that human intelligence will be equaled, surpassed and eventually merged with our computerized tools.

Though many predictions about AI are still more science fiction than fact, it is advancing faster than many expected. Even Kurzweil, writing in the early 2000s, failed to foresee the omnipresence of smartphones and social media. Today, it is nearly impossible to distinguish text produced by programs such as ChatGPT from text written by humans.

For years, Oxford mathematician and devout Christian John Lennox has warned of some of AI's more negative implications. In his book "2084: Artificial Intelligence and the Future of Humanity," Lennox challenged more utopian predictions about AI and highlighted its limits. "A neural network," wrote Lennox, "can pick out a cat on a YouTube video, but it has no concept of what a cat is." Here, Lennox is pointing to a profound limitation of materialism. In fact, only those wedded to the idea that the human mind is merely an organic machine can think that a smart computer is, in any real sense, "alive."

Though AI may never be the golden ticket it's hyped to be, suggested Lennox, its threats to humanity remain. The book's title intentionally points to George Orwell's dystopian novel "1984." The current situation in China should be enough to reveal that it will not take a fully realized Singularity to enslave millions. It will only take fallen humans with bad ideas and enough power to control some very powerful technologies.

And yet, the promises of AI are amazing. An algorithm can pick out our music, movies and groceries with incredible accuracy, even if it is a bit creepy. The labor- and time-saving potential of AI will save humanity hours of mindless tasks. And we've not even begun to imagine the potential for technical and medical advances.

However, potentials are not actuals, and history is full of the unintended applications and consequences of human technologies. The only way forward in these possible futures is with a clear-eyed perspective on human exceptionalism and human fallenness. We must know the implications of both being created in the image of God and being an heir of Adam's sin.

Adapted from Breakpoint, Feb. 23, 2024; reprinted by permission of the Colson Center.
