The Rise of Thinking Machines and the Quiet Surrender of Human Authority
Most Americans have never heard the term “agentic AI,” but a growing number have felt its weight. A job application disappears without explanation. A health insurance claim is denied by a process no adjuster can fully account for. A loan decision arrives in seconds from a system that cannot tell you why it said no. In each case, the person affected is not dealing with a human being exercising judgment — they are dealing with a machine acting on delegated authority, with no one available to question it.
That is the shift accelerating across American life. A few years ago, generative AI captured public attention by producing content with remarkable speed. What is emerging now is fundamentally different — systems that do not wait for a human prompt but initiate, plan, and execute, embedded in business workflows, government agencies, and the decision pipelines that shape real-world outcomes. The question before us is no longer what AI can do. It is who holds authority when the machine is running, and who answers when it is wrong.
From Tool to Actor
The Bank for International Settlements warns that agent-based AI represents a structural transformation of the global economy, not merely an efficiency gain. Simultaneously, U.S. federal agencies are buying AI through commercial cloud providers, and the Pentagon increasingly relies on AI-enabled systems for analysis and operational planning.
In my 2025 book “AI for Mankind’s Future,” I document how artificial intelligence compresses decision timelines to the point where meaningful human review becomes practically impossible. RAND Corporation researchers have warned specifically of “flash wars” — armed conflicts accelerated by machine-speed decisions that outpace the human beings nominally in charge. Speed is not wisdom, and efficiency is not accountability. Those two distinctions may matter more in the coming decade than any weapons system in our arsenal.
The Transfer of Moral Authority
Machines do not bear moral responsibility. They possess no conscience, no capacity for discernment, no awareness of the human cost when they err. Yet societies are increasingly treating their outputs as authoritative — not as inputs to be weighed by human judgment, but as decisions to be accepted. The European Union is debating restrictions on AI-generated content; the United Kingdom is examining labeling requirements to preserve truth and authorship.
These regulatory scrambles reflect a deeper unease: a growing inability to identify who is responsible when an AI-driven decision causes harm. When authority migrates from accountable human beings to autonomous systems, accountability does not simply relocate. It disappears.
A Constitutional Warning
The American system of self-governance rests on a straightforward premise: authority must be traceable to human beings who can be questioned, challenged, and removed. Elected officials can be held to account; their decisions can be debated, reversed, and litigated. Agentic AI introduces a competing model in which consequential decisions are made by systems that cannot be voted out, cross-examined in court, or held morally responsible for what they produce. That is not innovation. It is the working definition of technocracy.
In my forthcoming book “The New AI Cold War,” releasing April 2026, I argue that while authoritarian regimes openly pursue AI-driven social control, democratic societies face a subtler and perhaps more dangerous temptation — surrendering responsibility in exchange for convenience. The threat does not announce itself. It accumulates one automated decision at a time.
A Biblical Warning
Scripture provides the clearest framework for understanding the limits of human invention. Genesis 1:27 establishes that human beings alone bear the image of God — the foundation of both human dignity and human responsibility. No machine carries that image, and no algorithm can substitute for the moral weight it confers. The Tower of Babel (Genesis 11) stands as a precise historical warning: when human systems seek to centralize power and knowledge apart from God, the result is confusion and judgment.
Today’s AI architecture — vast, interconnected, and increasingly autonomous — carries that ancient impulse forward, promising knowledge without wisdom, power without accountability, and control without truth.
The Apostle John’s command to “test the spirits” (1 John 4:1) was written for a world of spiritual deception. It applies with equal force in an era when intelligent machines can mass-produce convincing falsehoods. Jesus warned that deception would intensify in the last days (Matthew 24:24), and AI amplifies that danger to a scale no previous generation has faced.
The Temptation to Surrender
The greatest risk is not that machines will seize authority. It is that we will hand it over gradually, voluntarily, and without noticing until the transfer is complete. As I argue in “AI for Mankind’s Future,” artificial intelligence is a powerful tool — but the moment it is elevated beyond that role, it becomes a substitute for human judgment and for moral authority itself. When institutions habitually defer to automated systems because they are faster or cheaper, accountability erodes. When individuals accept machine outputs without examination, discernment weakens. Both are forms of surrender.
Resisting that drift requires deliberate effort: no decision of genuine moral consequence delegated to a system that cannot answer for the outcome; every AI-driven action affecting human lives traced back to a human being who can be held accountable; and the discipline of discernment — distinguishing what is efficient from what is right, what is fast from what is true — actively cultivated rather than assumed.
Psalm 20:7 puts the matter plainly: “Some trust in chariots and some in horses, but we trust in the name of the Lord our God.” In our day, the chariots are algorithms and the horses autonomous systems, and the temptation to defer to them will be strong. Agentic AI is not a future development — it is already shaping decisions, quietly and rapidly, in institutions most Americans never see. The question before every Christian and every citizen who values accountable governance is whether we will retain the moral courage to remain responsible. That is a choice we still have. But it will not remain open indefinitely.
Robert Maginnis is a retired U.S. Army lieutenant colonel, senior fellow for National Security at Family Research Council, and the author of 14 books. His latest, "The New AI Cold War," releases in April 2026.