Artificial intelligence is knocking at the door, and the federal government wants to prop the door wide open.
Last week, the Trump administration sent to Congress what it called its “National Policy Framework for Artificial Intelligence.” The document is a set of legislative recommendations that aim to help Congress implement the administration’s agenda for AI. While the administration rightly acknowledges that “…some Americans feel uncertain about how this transformative technology will affect issues they care about, like their children’s wellbeing or their monthly electricity bill,” the framework may ultimately raise more concerns than it alleviates.
The House of Usher
The framework’s introduction claims that the Trump administration “is committed to winning the AI race to usher in a new era of human flourishing, economic competitiveness, and national security for the American people.” Economic competitiveness and national security are certainly at play in the AI race; few would argue those points. But the assertion that AI triumphalism will lead to human flourishing is reckless and needs more guardrails than the administration’s framework offers.
In his 2025 book, “Against the Machine: On the Unmaking of Humanity,” writer Paul Kingsnorth remarks on Ezra Klein’s reporting of how AI developers keep telling him the same story of how they feel that they “have a responsibility to usher this new form of intelligence into the world.” Kingsnorth comments:
“Usher is an interesting choice of verb. The dictionary definition is to show or guide (someone) somewhere. Which ‘someone’, exactly, is being ‘ushered in’?
“This new form of intelligence. What new form? And where is it coming from?”
It’s a good question. In a similar vein, the administration’s framework doesn’t say what this new era of human flourishing will look like, or even how victory in AI will achieve it. It only implies that it must be done, and we must usher it in. Jesus warned, “The thief comes only to steal and kill and destroy. I came that they may have life and have it abundantly.” We need to make sure that we’re not dependent upon technology — no matter how intelligent — to provide us with an abundant life.
A Flimsy Frame
The framework consists of seven planks, including: protecting children and families, strengthening communities, respecting intellectual property (IP) and creators, preventing censorship and protecting free speech, enabling innovation and ensuring AI dominance, and education and workforce development.
Many of these elements are lofty aims that should indeed be included in such a framework. For example, the plank on censorship and free speech is strong, seeking to ensure that “AI cannot become a vehicle for government to dictate right and wrong-think.”
Likewise, children should be protected from potentially harmful AI byproducts. However, the framework assumes the dangers are limited to privacy-related issues, deepfakes, and sexual exploitation. While these are real issues that should be guarded against, no mention is made of the potential loss of cognitive function, the promotion of self-harm, or the laundry list of other dangers AI poses to children.
Here, the administration needs to double down on determining exactly what is being ushered in through this framework. Big Tech’s track record on censorship, privacy, and protection of children isn’t exactly stellar. Do we really want the CEO class deciding what we want our future to look like?
The people, through representative government, need to be working this out in a thoughtful manner, not an unaccountable Big Tech industry. A frame must hold its contents, but for the safety of children, families, and other Americans, AI needs a frame of cement block walls, not 2x4s that can’t hold the coming Pandora’s box.
Oversight That Doesn’t Oversee
On the intellectual property front, it’s clear that the administration values AI over creators. Consider this paragraph:
“Although the Administration believes that training of AI models on copyrighted material does not violate copyright laws, it acknowledges arguments to the contrary exist and therefore supports allowing the Courts to resolve this issue. Similarly, Congress should not take any actions that would impact the judiciary’s resolution of whether training on copyrighted material constitutes fair use.”
Notice how in this legislative framework, the administration seems to want Congress — the legislative body — to leave legislation to the courts. Why would the administration want to stifle representative input on a matter like intellectual property? It’s clear that the administration’s focus is to let AI companies innovate free of regulatory constraints. But it’s not a coherent argument for deregulation to simply hand regulatory oversight (or the lack thereof) to the judicial system.
The administration also wants Congress to preemptively nullify state actions. It goes so far as to tell Congress to “prevent a fragmented patchwork of state regulations that would hinder our national competitiveness, while respecting federalism and state rights.”
State and local regulations can certainly be prohibitive to rapid national growth. But AI development is a national issue that hits very hard on the local level. Even though the administration seeks to limit power grid impacts, some communities will not want data centers on every tract of available land. Some cities won’t want AI monitoring of traffic. And some localities won’t want to pay for what corporations want at the national level. Localities should retain the right to local self-governance, especially where AI is concerned.
With a technology that is emerging as fast as AI, we need more eyes on its development — not fewer. Economic benefit is more than just increased profit margins for corporations. America’s economy is the sum total of local and federal human action. Actions have consequences, and if we ever find out whatever it is we’re ushering in with AI, we all — from the local to the national level — need to be ready for it and able to speak into it.
If we’re ushering in a calm day, then this framework might do the job. But if we’re ushering in a storm — a forecast that looks increasingly likely — we’ll need to build a frame on a more solid foundation.
Jared Bridges is editor-in-chief of The Washington Stand.