". . . and having done all . . . stand firm." Eph. 6:13

Commentary

Modern Shibboleths: The Difference between AI and Reality

May 31, 2023

“Where’s the beef?”

One of my favorite TV commercials of all time was the Wendy’s ad where an irate elderly woman asked that question after she was served a substandard hamburger by a Wendy’s competitor. That was 1984. In 2023, she might well ask the question, “Where’s the human?”

When I heard that Wendy’s would be testing artificial intelligence in several restaurants’ drive-thrus, the first thought that went through my mind was that they might actually get my order right for a change. Straining to hear garbled speech through the little speaker and having to repeat myself multiple times has never endeared me to the fast-food drive-thru, especially when the resulting food, even if they do get my order right, is likely something I’ll later regret. Indeed, fast food may be a good application for AI, but it’s not the only one. If you haven’t kept up with the technology explosion that’s affecting almost everything, there’s likely an AI application that can help you with even that.

The Industrial Revolution showed us that great advances in technology also come with great apprehension. In the early 19th century, workers fearing replacement by machines went to great lengths to subvert their would-be competition, sometimes destroying the machines themselves, as in the Luddite movement in England. While today’s AI technology is far beyond the mechanized loom, the anxiety is at a similar level. OpenAI CEO Sam Altman (whose company runs the popular ChatGPT AI application) recently testified before a Senate Judiciary subcommittee and called for government regulation, saying, “My worst fear is we cause significant harm to the world.”

Altman isn’t alone in his concerns. He, along with more than 350 other AI scientists and notable figures, signed a statement released by the Center for AI Safety that says, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

While humanity’s potential extinction may be a more distant worry, one of the more immediate concerns is being able to differentiate between the real and the artificial. Ciaran Martin, the former head of the U.K.’s National Cyber Security Centre, recently told British media, “AI is now making it much easier to fake things … So having a sense of what is true and reliable, it’s going to become much more difficult. And that’s something that risks undermining the fabric of our society.”

While this tearing of the societal fabric may not yet be widespread, it’s already happening in pockets. In March, the Federal Trade Commission issued a consumer alert warning against what it calls “family emergency scams.” In these scams, criminals obtain voice recordings of a target’s family members and use AI to clone their voices. A victim might receive a phone call from what sounds like a grandchild who says they’re in trouble. Wanting to help their grandkid, the victim sends funds to the scammer.

This is where it starts to get scary. Only the coldest of hearts wouldn’t be moved by the voice of a loved one pleading for help. With the emergence of AI voice cloning and deepfake video, it seems the occasions when we’ll have to ask, “Is this real?” will only multiply. What, then, do we do? Does everything become an object of suspicion?

Back in the unintelligent days of my teenage years, before the advent of caller ID, I would often prank-call my own grandparents, pretending to be a telemarketer offering them an insane deal on whatever I could think of in the moment. I would disguise my voice, and as they tried to politely decline my offers, I would play the hard-sell salesman. I’d try to see how long it took for them to hang up on me. It was usually less than a minute. But even if they didn’t catch me in the act, after they hung up they always had a moment of recognition, and I’d get a call back. They knew. They always knew.

And we’d better know as well. But how? In the biblical book of Judges, there’s an ancient scene that may be instructive for our times:

“And the Gileadites captured the fords of the Jordan against the Ephraimites. And when any of the fugitives of Ephraim said, ‘Let me go over,’ the men of Gilead said to him, ‘Are you an Ephraimite?’ When he said, ‘No,’ they said to him, ‘Then say Shibboleth,’ and he said, ‘Sibboleth,’ for he could not pronounce it right. Then they seized him and slaughtered him at the fords of the Jordan. At that time 42,000 of the Ephraimites fell.” (Judges 12:5–6, ESV)

The Ephraimites apparently had a particular speech pattern that impeded their pronunciation of the “sh” in “Shibboleth.” The Gileadites discovered this idiosyncrasy and exploited it. No frauds crossing the Jordan if they couldn’t speak correctly. A modern-day equivalent would be the word “Appalachian.” If you’re from the region, you know it’s pronounced app-uh-LATCH-an. If you’re not, you say it “app-uh-LAY-chan,” like the heathens.

After uploading my own voice to one AI app (for science, y’all ...), I was impressed by how well it reproduced the sound. I was not impressed by how it somehow thought my East Tennessean accent was less refined than it really is. I typed John 3:16 into the AI, and it read it back to me in my own voice, but with a British accent. Score one for humanity’s Shibboleth.

Perhaps my experiment was a fluke; after all, artificial intelligence is indeed getting better by the minute. Still, I’m convinced that as good as it may get, it won’t completely capture the marvelous wonder and disaster that is human intelligence. AI may have lossless learning and the full internet at its beck and call, but it likely won’t decide to start vaping, shoot a microwave full of Tannerite, or wear a sports coat over a T-shirt. Only a human can possess artificial stupidity.

There’s an old sermon illustration (I’ve probably heard a dozen preachers in my life use it, so it must be true …) that goes something like this: “The way the Secret Service trains its officers to identify counterfeit currency is to have them know the real currency so well that any forgery stands out.” I have no idea if that’s really how the Secret Service does it, but I do think it is the way forward in distinguishing humanity from AI.

Where’s the beef? Humans aren’t simply intelligent (or unintelligent) beings. They have bodies and take up real physical space. They have to eat, they have to sleep, and they get cranky if someone merges in front of them and then slows down. Even if AI robots can mimic some of these behaviors, the infinite depth of a real person made in God’s image will be the telltale sign that differentiates man from machine. Ultimately, it may mean we have to shake the actual hand of the person in front of us to confirm that he or she isn’t a chatbot. That’s where the beef is, and that’s where we’ll find the real human as well.

Jared Bridges is editor-in-chief of The Washington Stand.