Trusted by None, Used by All: America’s Dangerous Dependence on AI
The intelligence community has a term for a system that cannot be verified but must still be relied upon: a single point of failure. A Quinnipiac University poll released March 30, 2026, suggests we may be constructing exactly that — on a national scale, in full view of the American public.
According to the survey of nearly 1,400 Americans, 51% now use AI tools for research, up 14 points in a single year. Yet only 21% trust AI-generated information most or almost all the time, while 76% trust it only occasionally or hardly at all. More than half — 55% — believe AI will do more harm than good in their daily lives. Seven in 10 expect it to reduce job opportunities, and among Gen Z — the most digitally fluent generation — that labor market pessimism reaches 81%.
What we are watching is not technological growing pains. It is a structural fracture in how Americans relate to truth, authority, and judgment — and the consequences reach far beyond quarterly productivity reports.
Dependency without Trust
We have seen something like this before. When social media took hold, users kept scrolling through platforms they increasingly distrusted because disengagement felt too costly. But there is a critical difference: social media shaped what people saw. AI shapes what people think.
In “AI for Mankind’s Future,” I argue that AI tools serve human flourishing only when they remain under disciplined human authority — when the human being makes decisions and the machine stays accountable to that judgment. What the Quinnipiac numbers describe is something different. Americans are not choosing AI because they trust it. They are using it because it is fast, because it is everywhere, and because the habit of reaching for it is forming faster than the wisdom to govern it. Over time, that drift in judgment produces structural vulnerability.
A Warfighting Domain Already under Assault
In “The New AI Cold War,” I describe AI as the central warfighting domain of a new global contest — waged not primarily with missiles, but with information, perception, and cognitive manipulation. China, Russia, and North Korea are not waiting for democratic societies to settle their relationship with the technology. They are actively weaponizing AI-generated deepfakes, synthetic narratives, and coordinated information operations to exploit the precise environment the Quinnipiac poll documents: a population that consumes AI-generated content while distrusting it, and that can no longer reliably distinguish manufactured reality from the genuine.
A society simultaneously dependent on and skeptical of digital information is not resilient — it is targetable. Adversaries do not need to convince Americans of false narratives. They need only to flood the information environment with enough competing claims to paralyze collective judgment. A public already uncertain about what it can trust is primed for exactly that kind of operation.
Isaiah warned of a moment when people would “call evil good and good evil” (Isaiah 5:20). That indictment was not issued into a vacuum — it addressed a people whose capacity for moral discernment had eroded, who had become susceptible to confusion not because truth was absent, but because the discipline to seek it had decayed. The adversarial exploitation of AI-enabled confusion follows the same logic.
The Anthropological Crisis beneath the Policy Problem
The national security dimension is real and urgent. But the deeper crisis is anthropological.
When students use AI not to assist learning but to replace it — when professionals rely on machine-generated analysis for decisions they once made themselves — when individuals route moral and personal choices through algorithmic advisors — what erodes is not merely efficiency. It is accountability. Machines do not bear moral responsibility. They cannot repent. They cannot stand before God. Only human beings, created in His image (Genesis 1:27), carry that weight — which means only human beings can responsibly hold that authority.
Jesus warned plainly: “See that no one leads you astray” (Matthew 24:4). He spoke in the context of unprecedented deception — a period when counterfeit authority would present itself convincingly. A culture that outsources its thinking to systems that mirror the biases of their designers and the preferences of their users is not exercising discernment. It is practicing a form of intellectual idolatry, elevating a created system to the functional place of wisdom.
That is what I describe in “The New AI Cold War” as the “speaking idol” problem. Unlike the silent golden image of Nebuchadnezzar’s court, these systems answer questions, offer advice, and generate confidence without possessing judgment. A society that mistakes fluency for truth will eventually mistake outputs for reality.
The Accountability Gap
The Quinnipiac poll confirms the public already senses something is wrong. Seventy-four percent believe the government is not doing enough to regulate AI. Seventy-six percent say businesses are not sufficiently transparent about their use of it. Those concerns point to a legitimate demand: real transparency requirements, genuine human oversight of consequential AI-driven decisions, and clear accountability structures when AI errors affect real people. The goal is not to halt innovation. It is to prevent the slow displacement of human responsibility by systems that can mimic it without bearing it.
The Call before the Church
Christians must lead here — not because faith offers a technological substitute, but because it provides the only framework in which the purpose of human judgment is fully understood. Parents cannot outsource the formation of their children’s discernment to AI algorithms. Pastors must address AI not as a novelty but as a genuine discipleship challenge — one that touches how congregants understand truth, authority, and accountability before God. And believers at every level of civic life must insist that human responsibility remain structurally embedded in the systems that govern AI’s use.
Paul’s instruction to the Thessalonians is as precise as any targeting order: “Test everything; hold fast what is good” (1 Thessalonians 5:21). That is not passive observation — it is active, disciplined engagement with whatever claims authority over human minds and lives. The machine cannot do that work. Only the human being, accountable to God and neighbor, can. The Quinnipiac data shows that Americans sense the stakes even if they cannot name them. The church ought to be able to name them — and then lead accordingly.
Robert Maginnis is a retired U.S. Army lieutenant colonel, senior fellow for National Security at Family Research Council, and the author of 14 books. His latest, “The New AI Cold War,” releases in April 2026.