“Artificial intelligence is going to transform the lives of people around the world.”
That was the take from President Joe Biden in a briefing focused on AI and how “the Biden-Harris administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”
Exactly one week before that briefing, Elon Musk announced a new venture called xAI, a company whose stated goal is “to understand the true nature of the universe.” A few days later, rumors emerged that Apple was working on its own “Apple GPT” AI chatbot. And that very same week, news broke that Google was testing a new AI that wrote news articles. If you’re one of the few who’ve missed this tidal wave of AI news in recent days, have no fear. Vice President Kamala Harris has summed it up quite succinctly: “AI is kind of a fancy thing. First of all, it’s two letters …”
All fanciness aside, if you’re struggling to keep up, you’re not alone. This is what it’s like to write about artificial intelligence: the moment your fingers type the words, the news is old. Perhaps this is partly why Google is working to have AI do the ephemeral dirty work of newswriting. At any rate, artificial intelligence is being pitched as a panacea for almost any ailment — even the ailments of the great unknown.
Musk wants artificial intelligence to tackle the bigger questions of the universe, and like nearly everyone trumpeting AI, he values safety. In Musk’s view, “the safest way to build an AI is actually to make one that is maximally curious and truth seeking.” In a Twitter Spaces event announcing the new company, he pondered the question:
“So does one ever actually get fully to the truth? It’s not clear, but one should always aspire to that and try to minimize the error between what you know — what you think is true — and what is actually true. That’s my sort of theory behind the maximally curious, maximally truthful as being probably the safest approach. I think to a superintelligence humanity is much more interesting than not humanity.”
Interesting as we humans may be, it’s “not humanity” on display here, and for xAI, there’s much work for the not humanity to do. Musk said, “there’s a lot of unresolved questions that are very, extremely fundamental.” During his launch event, he mentioned dark matter and the existence (or nonexistence) of extraterrestrial life as two of these fundamental questions. He also acknowledged that people can still think more efficiently than machines:
“I think about … neural networks today. It’s currently the case that if you have ten megawatts of GPUs, [AI] cannot currently write a better novel than a good human. And good humans [are] using roughly 10 watts of higher order brain power. So not counting the basic stuff to, you know, operate your body. So then we’ve got six order[s] of magnitude difference. […] even with six orders of magnitude, you still cannot beat a smart human writing a novel.”
For Musk, this is not simply a moment of pausing to ponder the magnificence of the human brain. It’s a problem for xAI to solve. From this perspective, it’s easy to see the concern over safety. Will we build AI that will take our jobs, our usefulness, and perhaps our lives? Will we, with our own intelligence, render ourselves irrelevant?
To be clear, I do believe we should exercise extreme caution with AI. But call me crazy, or unintelligent: I still hold out hope for humanity, depraved as we are. In the book of Exodus, Moses spent a span of time on the mountain receiving instruction from Yahweh about the covenant, including specific ways in which to keep it. Toward the end of the discourse, when it’s become abundantly clear that Moses will need help executing the covenantal details, the Lord offers him assistance:
“See, I have called by name Bezalel the son of Uri, son of Hur, of the tribe of Judah, and I have filled him with the Spirit of God, with ability and intelligence, with knowledge and all craftsmanship, to devise artistic designs, to work in gold, silver, and bronze, in cutting stones for setting, and in carving wood, to work in every craft” (Exodus 31:2–5, ESV).
When the Lord wanted help building a tabernacle for his people to explore the mysteries of his covenant, he didn’t create an AI assistant, but a person, filled with the Spirit of God, with ability and intelligence to facilitate his work. Even for God, it seems, humanity is much more interesting than not humanity.
And it is to this humanity that God has already revealed great mystery. Consider Paul’s letter to the Ephesians:
“To me, though I am the very least of all the saints, this grace was given, to preach to the Gentiles the unsearchable riches of Christ, and to bring to light for everyone what is the plan of the mystery hidden for ages in God, who created all things, so that through the church the manifold wisdom of God might now be made known to the rulers and authorities in the heavenly places. This was according to the eternal purpose that he has realized in Christ Jesus our Lord…” (Ephesians 3:8–11, ESV).
Will artificial intelligence transform the lives of people around the world, as Biden suggested? It will likely have a great effect. But when we begin looking to artificial intelligence to solve the mysteries of the universe, we’re looking in the wrong direction. In the end, I suspect we’ll find it to be a poor panacea: it won’t solve what we want it to solve, and it will likely only raise more questions.
A mystery with a distinctly spiritual aspect cannot be solved by an intelligence that has no spirit. When it comes to probing the depths of universal mysteries, even for what AI finds, plain old humanity will be left to read the receipts. And the receipts may indeed be interesting. But when it comes to mysteries, reading God’s revelation beats machine receipt-reading every time.
Jared Bridges is editor-in-chief of The Washington Stand.