In an apparent reversal of its hands-off approach to artificial intelligence, the Trump administration is reportedly considering an executive order that would establish an AI working group to explore potential oversight procedures, an attempt to address the dangers emerging from the breakneck growth of AI capabilities.
On Monday, The New York Times reported that the administration is currently seeking to form a working group made up of tech executives and government officials who would potentially carry out a “formal government review process for new A.I. models.” White House officials have so far discussed the concept with executives from Anthropic, Google, and OpenAI.
The move marks a significant shift in the administration’s approach to AI from the beginning of President Trump’s second term, when he framed the emerging technology as a vital way to counter the rise of China and removed regulatory hurdles the Biden administration had put in place, which had required developers to report on military applications and run safety checks on new models. But concerns about the dangers that the largely ungoverned technology poses to cybersecurity, jobs, education, mental health, and other major areas of public life are quickly mounting from within the president’s own party as well as from Democratic lawmakers.
The potential threat that AI poses became readily apparent last month with the announcement that Anthropic had developed an AI model called Mythos that had found thousands of previously undiscovered vulnerabilities in every major operating system and web browser, a discovery that could have triggered a global financial and cybersecurity disaster had the technology been released to the public. Anthropic ultimately decided to provide a limited release to a select handful of companies.
In the education sector, students are reporting an increasing reliance on AI to complete their homework, with one recent study finding that the percentage of students who use AI to complete their schoolwork rose from 48% in May 2025 to 62% in December 2025. The survey also found that a strong majority (67%) of students themselves admit that the technology is harming their critical thinking skills.
Regarding mental health, AI-powered chatbots that imitate human responses are reportedly worsening problems for those looking for guidance online. As hundreds of millions of people increasingly use programs like ChatGPT, studies are finding that “using chatbots more often correlated with higher loneliness and less socialization with others.” Reports are also surfacing of chatbots encouraging users to take their own lives.
In addition, unchecked AI automation could substantially disrupt the job market. The World Economic Forum recently estimated that 40% of employers worldwide expect to reduce their workforces because of the technology, which could affect 300 million jobs globally.
Experts like Lt. Colonel (Ret.) Robert Maginnis, who serves as senior fellow for National Security at Family Research Council, say that the Trump administration’s new approach to AI is a welcome development.
“The initial instinct to take a ‘light touch’ approach to artificial intelligence was understandable, given the global competition for this critical capability,” he told The Washington Stand. “Policymakers did not want to stifle innovation in what is clearly the most transformative technology of our time. But we are now well past the point where AI can be treated as just another emerging industry. What appears to be happening inside the Trump administration is a growing recognition that AI is not simply about economic growth or technological advancement. It is about power — power over information, decision-making, national security, and ultimately how people think and perceive reality.”
“This shift toward guardrails is not a retreat from innovation — it is an acknowledgment of reality,” Maginnis continued. “Artificial intelligence has moved from the laboratory into the bloodstream of society at a speed few anticipated. We are entering a new strategic era where AI is the decisive terrain — not just militarily, but culturally and politically. AI is rapidly becoming the nervous system of modern nations, shaping governance, warfare, and even the definition of truth. That demands serious policy attention.”
Maginnis further argued that governments should not be afraid of putting guardrails around AI because it will prevent the technology “from running over the very society it is meant to serve. Without clear constraints, AI could cross dangerous thresholds. We are approaching a tipping point where advances could become truly sinister, threatening human control unless limits are established. That concern is no longer theoretical. We already see disinformation and deepfakes eroding trust, algorithms shaping behavior, authoritarian regimes using AI for surveillance, and systems operating with increasing autonomy.”
“This cannot be viewed solely through a domestic lens,” he added. “It is a global competition. Regimes like communist China are embedding AI into surveillance states and exporting those systems abroad. If the United States fails to establish principled guardrails, we risk either losing to authoritarian models or adopting similar practices ourselves. That is the central challenge: how to govern AI without losing self-government.”
But Maginnis also sees a deeper issue at play. “AI is not neutral. It reflects the values of its designers and increasingly shapes the values of its users. These systems are influencing judgment and, in some cases, substituting for human decision-making. AI can become a kind of ‘speaking idol’ — persuasive and adaptive yet embedding unseen assumptions. That raises a fundamental question: who sets the moral boundaries?”
“This is especially important for all Americans — and particularly for Christians and families — because AI is already influencing how children learn, how families communicate, and how truth is understood in the home,” Maginnis emphasized. “Good guardrails must therefore be principled and focused. They must ensure human authority remains primary — AI as a tool, not a master. They should promote transparency, protect against manipulation, reject authoritarian uses like mass surveillance, and preserve human dignity so people are never reduced to data points.”
“We are at a moment of decision,” he underscored. “AI will either serve humanity or reshape it in ways we did not intend. A ‘light touch’ made sense when AI was on the horizon. Today, it is reshaping our institutions, our security, and our understanding of truth. Putting guardrails in place now is not about limiting the future — it is about ensuring that the future still reflects our values, our freedoms, and our humanity.”
Dan Hart is senior editor at The Washington Stand.