Last week, Anthropic’s chief executive, Dario Amodei, made a decision that should command every American’s attention: his company built an AI system so capable of finding and exploiting software vulnerabilities that it refused to release the system to the public. The model, called Mythos, had autonomously identified thousands of previously unknown flaws across every major operating system and web browser — including a 27-year-old vulnerability in OpenBSD that could grant an unauthenticated remote attacker complete control of any machine running it.
Anthropic shared Mythos only with 40 selected companies under an initiative called Project Glasswing. The rationale was blunt: these capabilities, in the wrong hands, could be weaponized faster than defenders could respond.
That news alone would be unsettling. Then came the emergency meeting.
When Wall Street Went Quiet
Treasury Secretary Scott Bessent convened a closed-door session with the nation’s top bank CEOs — Federal Reserve Chairman Jerome Powell present alongside them — to discuss Mythos and what it portends for financial stability. The managing director of the International Monetary Fund went further, warning publicly that the world lacks the capacity to protect the international monetary system against the scale of cyber risk that AI now makes possible. Those are not the words of alarmists. They are the assessments of people who hold the levers of global finance.
What brought them to the table was not a hypothetical. According to security analysts, the window between a vulnerability’s discovery and its weaponization by an adversary has collapsed from months to hours. Mythos can chain multiple vulnerabilities together into coordinated exploits — work that once took a skilled human team weeks. Anthropic’s own head of cyber research acknowledged that comparable capabilities will reach U.S. adversaries, including China, within six to 12 months.
From Copilot to Autonomous Actor
This is the threshold now crossed. For decades, we operated on a comfortable assumption: AI would assist humans, not act independently of them. In my book “AI for Mankind’s Future,” I described AI as “an instrument developed by humans to serve our objectives.” That remained an accurate description — until it wasn’t.
Today’s most capable systems are not conscious. They are not created in the image of God. But they are demonstrating something that looks, in operational terms, like agency. Mythos did not wait for a human to point it at a target. It searched autonomously, identified critical flaws in systems that had survived five million automated scans by other tools, and generated working exploits without human guidance. The 27-year-old OpenBSD bug it found had eluded every prior review. Sixteen-year-old vulnerabilities in widely deployed video software had survived years of industry scrutiny. These were not lucky finds. They were the product of a system operating with an analytic persistence no human team could sustain.
The distinction between a tool and an actor matters profoundly. A rifle does not decide where to aim. These systems increasingly do.
The Intelligence Community’s Wager
The military and intelligence communities have noted this transition and are moving accordingly. In January 2026, Secretary Hegseth directed the Department of War to become an “AI-first” warfighting force, establishing seven Pace-Setting Projects to accelerate AI integration across warfighting, intelligence, and enterprise operations at what the directive called “wartime speed.” The Chief Digital and AI Office was assigned direct authority to operationalize that shift, with monthly progress reporting to the Deputy Secretary. Military exercises that fail to meaningfully incorporate AI are now subject to resourcing reviews.
In my book “The New AI Cold War,” I describe AI as the emerging nervous system of modern nations. That nervous system is now being built under deadline, and it is being contested just as urgently. China is not observing from the sidelines. Beijing has invested in AI-enabled military and surveillance infrastructure with a discipline and scale that Western democracies are only beginning to match. The race is real, and falling behind is not an option for a nation that still takes the defense of liberty seriously.
The danger is not that this competition is happening. The danger is that urgency and institutional momentum will make accountability an afterthought.
Accountability without a Face
When an autonomous AI system causes harm — when a model chains vulnerabilities into an attack that crashes a hospital network or destabilizes a regional bank — who answers? The engineer who built the model did not authorize the specific action. The company that deployed it points to its parameter settings. The government agency that contracted for it cites its oversight protocol. Each answer is technically defensible. None constitutes accountability in any meaningful sense.
This is not a theoretical gap. Anthropic itself acknowledged that more than 99% of the vulnerabilities Mythos uncovered remain unpatched. The company created something it cannot fully protect the world against, and chose transparency about that fact — which is to its credit. But that candor underscores the broader problem: the most advanced AI systems are now operating in a space where traditional lines of human authority have not kept pace with capability.
A Spiritual Accounting
For those who read Scripture as more than metaphor, there is a harder question underneath all of this. Genesis 1:26-27 establishes that mankind is made in the image of God — not as processors of information, but as moral beings with the capacity for discernment, accountability, and relationship with our Creator. When we offload that discernment to systems we do not fully understand, we are not simply accepting an operational risk. We are abdicating something given to us by God.
Proverbs 3:5-6 warns against leaning on our own understanding. An AI system acting autonomously is not even our understanding — it is a pattern of statistical inference shaped by training data and optimization objectives that its designers are still working to comprehend. Deferring to it is not wisdom. It is a quieter and more sophisticated form of idolatry, and it deserves to be named plainly.
In “AI for Mankind’s Future,” I wrote that AI is not spiritually neutral. The systems reshaping warfare, governance, and culture inevitably reflect the values of those who design and deploy them. Left unchecked and trusted uncritically, AI risks becoming what I called a modern Tower of Babel — promising security and power while quietly eroding human dignity, freedom, and truth.
The Real Decision
The question before this generation is not whether AI will grow more capable. It will. The question is whether the humans responsible for deploying it will retain the authority — moral, legal, and spiritual — to govern what it does.
That requires more than faster patch cycles and better regulations, though we need both. It requires a culture that still believes human judgment and human accountability are irreplaceable — not because we are smarter than the machines, but because we alone are answerable to God.
The storm Anthropic’s own engineers have warned the world about is, in critical respects, already here. The only remaining question is whether the watchmen on the wall are awake — and whether America’s leaders in government, in industry, and in the church will treat this moment with the gravity it demands before the next threshold is crossed without anyone noticing.
Robert Maginnis is a retired U.S. Army lieutenant colonel, senior fellow for National Security at Family Research Council, and the author of 14 books. His latest, "The New AI Cold War," releases in April 2026.