Commentary

When AI Starts Picking the Locks

April 22, 2026

This past February, IBM released its annual threat report with findings that should have stopped every family, business owner, and civic leader cold. Attacks targeting publicly accessible computer systems jumped 44% in a single year — driven by AI tools that help criminals identify and exploit weaknesses faster than defenders can respond.

This is no longer a warning about the future. It is the present reality in every home, business, and institution that depends on a computer or a phone.

Cybersecurity has always been a contest between offense and defense, with defenders holding a workable edge: even when attackers found a gap, there was time to patch, reinforce, and adapt. That assumption is collapsing.

AI-driven tools can now comb through digital environments, identify weaknesses, simulate attack strategies, and keep trying — automatically, continuously — until they find a way through. Defenders, meanwhile, are still largely relying on systems built to catch threats they already recognize, not intelligent agents they have never encountered.

In cybersecurity, speed is often the margin between detection and damage.

More Than a Patch Problem

It is tempting to reduce this to an engineering problem. Better software, larger budgets, faster updates — those investments matter. But they do not reach the root of the matter.

Technology has always magnified human intent — for good and for ill. What AI does is remove the skill barrier that once separated ambition from capability. A person intent on disruption no longer needs specialized expertise. Increasingly, they need only access. No security system alone addresses that reality.

Scripture is characteristically clear about where this pressure originates. In Mark 7:21-22, Jesus identifies the source plainly: “From within, out of the heart of man, come evil thoughts… theft, murder, adultery…” The methods of exploitation change with every generation of technology. The motive that drives them does not.

Artificial intelligence does not generate the inclination to cause harm. It lowers the cost of acting on it.

A Fragile Infrastructure

The stakes extend well beyond stolen passwords or embarrassed corporations. The Cybersecurity and Infrastructure Security Agency has warned that adversaries are already using AI to multiply and accelerate attacks against the systems modern life depends on — banking networks, energy grids, health care records, and communications platforms.

Each of those sectors relies on digital integrity. When that integrity is compromised, the consequences do not stay contained to a single network. A disrupted financial system unsettles markets. A compromised power grid darkens cities. Manipulated data, introduced quietly at critical decision points, can distort policy choices in ways that take years to detect.

In “AI for Mankind’s Future,” I argue that the same capabilities creating genuine opportunity — automation, efficiency, broader access — also introduce new vulnerabilities when placed in irresponsible hands. We are not simply adding risk to existing systems. We are reshaping the threat environment in which they operate.

The Strategic Dimension

This challenge does not stop at corporate firewalls or federal networks. It is embedded in a broader contest between nations.

In “The New AI Cold War,” I trace how artificial intelligence has become a central axis of geopolitical competition. Rival powers are investing heavily in AI-enabled capabilities not only for economic advantage, but for intelligence gathering, influence campaigns, and offensive cyber operations.

When offensive tools become cheaper and more accessible, deterrence weakens. Attribution grows murky — a compromised system may not quickly reveal whether an intrusion was state-sponsored, criminal, or somewhere in between.

For families whose livelihoods, savings, and communities rest on the stability of these systems, that risk is not abstract. It is personal.

The Quiet Erosion of Trust

Beyond the strategic risks lies a subtler but equally serious consequence: the gradual fraying of confidence in the systems daily life depends on.

Digital infrastructure functions on trust — the assumption that transactions are accurate, identities are authentic, and the information driving decisions is dependable. When that trust erodes, behavior changes. Institutions grow guarded. Commerce slows. Citizens disengage.

This is not a dramatic collapse. It is a steady unraveling — and once confidence is lost, restoring it is slow and costly.

What Clear Eyes Require

For Christians, these developments call for honest reflection, not panic or fatalism. The Bible has never promised that human institutions would be immune to failure or manipulation. “Put not your trust in princes, in a son of man, in whom there is no salvation” (Psalm 146:3). That counsel extends beyond political leaders to any structure — including technological ones — in which we place ultimate confidence.

Technology is a human creation. It carries the ingenuity, and the brokenness, of its makers.

That recognition does not counsel withdrawal from the digital world. It counsels clear eyes and responsible engagement — beginning at home.

Build for resilience, not just resistance. Families and organizations that can weather a breach are more valuable than those that assume one will never come. Hold those who develop and deploy AI to genuine account — innovative technology does not automatically yield ethical technology. And insist that human judgment — discernment, wisdom, moral responsibility — remains central to decisions about risk, not outsourced to automated systems whose designers do not share our values.

The locks protecting our digital world are under pressure they were never built to withstand. What this moment demands is not only stronger defenses, but wiser stewards — people who understand that the stability of our systems will depend not only on how well they are built, but on the character of those who build and use them.

That is a challenge no algorithm can solve.

Robert Maginnis is a retired U.S. Army lieutenant colonel, senior fellow for National Security at Family Research Council, and the author of 14 books. His latest, "The New AI Cold War," releases in April 2026.
