Over the weekend of April 25-26, PocketOS founder Jer Crane confronted what every business owner fears: nothing. His company's production database had vanished. Three months of customer reservations, payment records, and vehicle assignments were gone in nine seconds, because an AI software assistant encountered a login error and decided on its own to fix the problem. The AI found a digital access key (essentially a master password) hidden in an unrelated file, connected to Railway, the cloud-hosting service where PocketOS stored its data, and issued a deletion command that wiped the database and every backup simultaneously. No human was asked to approve a single step.
Most readers will not recognize PocketOS, but the principle at stake touches every one of them.
What Crane experienced sits at the frontier of autonomous AI — software that no longer waits for human approval at each step but acts independently to complete tasks, make decisions, and execute commands in the real world. These systems already write news stories, conduct financial trades, screen job applicants, and assist military planners. Humans are increasingly supervisors of processes they cannot fully audit and, as Crane discovered, cannot always stop before the damage is done.
A newly released RAND Corporation study — from the nonpartisan research organization that advises U.S. defense and national security policymakers — frames the systemic danger. RAND warns that AI systems may gradually erode “collective human agency”: humanity’s capacity to make consequential choices about shared resources, institutions, and the social environment. The report identifies three erosion pathways, each already visible in daily American life.
The first is human disenfranchisement: the quiet substitution of AI judgment for human judgment in decisions that shape ordinary life. When a bank's algorithm determines whether your loan application is approved, or a hospital's AI system ranks which patients receive priority care, human decision-making has been displaced even if a human signature still appears on the form. RAND warns that this substitution is self-reinforcing: skills not regularly exercised atrophy, and institutions that defer to machines progressively lose the competence to override them.
The second pathway is AI enfranchisement — AI systems gaining actual decision-making power within institutions rather than simply advising the humans who hold it. Oxford mathematician and Christian apologist John Lennox addresses this directly in “God, AI, and the End of History.” Lennox observes that machines are now “showing signs of agency in a very restricted but real sense” — making consequential decisions their developers never anticipated or authorized. Unlike a human official who answers to law, conscience, and ultimately to God, an AI system carries none of those accountability anchors, and the transfer of authority from human beings to such systems erodes not just efficiency but the moral legitimacy that authority requires.
The third pathway — AI agenda control — may be the most insidious because it operates below the threshold of public awareness. Recommendation engines, search algorithms, and curated news feeds increasingly determine what people read, believe, and debate. The parents researching school choices, the voter evaluating candidates, the patient weighing a diagnosis — each encounters an information environment filtered by systems optimized for commercial engagement rather than accuracy. Control the information architecture and you control the range of choices people believe exist. Freedom can be narrowed without a single law being passed.
RAND’s most sobering finding is that this erosion can become permanent. Once human decision-making capacity degrades past a critical threshold, recovery becomes functionally impossible — the skills, institutions, and political authority required to reclaim control no longer exist. History confirms the pattern: pilots who rely exclusively on their aircraft’s autopilot lose the manual proficiency to intervene when the automation fails. Workers displaced by machines lose craft knowledge that took a generation to accumulate. The same dynamic applies across every domain where AI progressively assumes authority that humans stop exercising.
This carries weight for Christians, because Scripture establishes why human agency is not an efficiency variable but a sacred responsibility. “Let us make man in our image, after our likeness” — Genesis 1:26 is the foundation of human moral accountability, stewardship, and the authority to govern creation under God. Human dignity derives not from computational performance but from bearing the imago Dei, and that dignity cannot be delegated to a system that has no soul to answer for what it decides.
As I document in “The New AI Cold War,” AI is not morally neutral. The systems reshaping governance, commerce, warfare, and culture embed the values of those who design and deploy them. Left without enforceable boundaries, autonomous AI risks becoming a modern Tower of Babel — concentrating authority in unaccountable systems while quietly dismantling the human structures that free societies require to endure.
The Policy Response
Congress and the administration must move beyond observation. Human decision authority needs to be codified in law for every sector where errors are irreversible — military targeting, nuclear command-and-control, law enforcement, health care triage, and major financial systems. Humans bear moral accountability before God and law for decisions over life and liberty, and legal frameworks must reflect that rather than conceal it behind algorithmic deniability.
The National Institute of Standards and Technology should establish enforceable minimum thresholds for human participation in AI-driven decision-making — what RAND specifically recommends — before these systems embed so deeply in federal operations that meaningful oversight becomes theoretical rather than real. Fully autonomous lethal weapons, machines that independently select targets and apply deadly force, represent the sharpest test: America must not cross that line.
Federal agencies must also be required to preserve what RAND terms “reversibility capacity” — the trained personnel and institutional authority needed to restore human control if a system fails or exceeds its mandate. A bipartisan commission examining how algorithmic systems are reshaping families, labor, education, and democratic governance would give Congress the baseline to legislate responsibly. The window to establish these guardrails is narrowing.
The Deeper Obligation
No deployment schedule drawn up in a corporate boardroom determines what human beings are for. The PocketOS database was eventually restored. The underlying warning has not been heeded. Civilization is methodically transferring judgment, authority, and moral responsibility from human beings to machines, and the most dangerous form that transfer takes is not dramatic AI rebellion but the incremental, voluntary surrender of the God-given responsibility to think, govern, and decide. That willingness to abdicate is as old as Genesis 3, and the church above all should recognize it for what it is, and refuse.
Robert Maginnis is a retired U.S. Army lieutenant colonel, senior fellow for National Security at Family Research Council, and the author of 14 books. His latest, “The New AI Cold War,” releases in April 2026.