A 10-Year AI Moratorium Will Leave Americans ‘Manipulated in Ways We Don’t Understand’: AG
A growing coalition of prosecutors spanning nearly every geographic and political division in the country is warning against a “sweeping and wholly destructive” provision in the “One Big Beautiful Bill” that would render them helpless to protect children from an industry that produces artificially generated child pornography, encourages pedophiles’ fantasies, stokes mentally ill people’s delusions, and has been tied to multiple suicides.
The president’s signature second-term legislation, the One Big Beautiful Bill (H.R. 1), contains a controversial 10-year moratorium on states regulating artificial intelligence (AI), which would nullify an estimated 75 existing laws limiting AI in 26 states. If the bill is adopted in its current form, prosecutors and law enforcement may not “enforce, during the 10-year period beginning on the date of the enactment of this Act, any law or regulation of that [s]tate or a political subdivision thereof limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.”
“Technology moves fast and Congress moves slow. And we have no confidence that the federal government will be able to regulate at a pace that will keep people protected from some of the really obvious harms that we expect AI will bring,” Tennessee Attorney General Jonathan Skrmetti (R) told “Washington Watch with Tony Perkins” on Tuesday.
A 10-year moratorium on any state AI regulation “will leave a vacuum that a few giant Big Tech companies can fill,” Skrmetti told Perkins. “We’ve seen bad action from these companies in the past.”
“We’re going to find ourselves being manipulated in ways that we don’t fully understand, based on factors that we don’t fully understand” without appropriate safeguards, he forecast. “And there is all sorts of opportunity there for bad actors, corporate bad actors, to exploit us.”
He said it is “essential” that state legislatures retain the power to “protect their citizens.”
His colleagues in law enforcement share his concerns. “The impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI,” said a letter which a bipartisan coalition of 40 state attorneys general sent to all four congressional leaders, Republican and Democrat, on May 16. “[L]ike any emerging technology, there are risks to adoption without responsible, appropriate, and thoughtful oversight,” warned the state prosecutors. “Imposing a broad moratorium on all state action while Congress fails to act in this area is irresponsible and deprives consumers of reasonable protections.”
Skrmetti said this national veto of state legislation, known as preemption, also disregards the proper role of the states as envisioned by the Founding Fathers. The system of federalism, in which states exercise their constitutional rights to act as “50 laboratories of democracy,” constitutes “one of the strengths of our country,” he said. His fellow state attorneys general agreed, calling the One Big Beautiful Bill’s top-down cancellation of state law “neither respectful to states nor responsible public policy. As such, we respectfully request that Congress reject the AI moratorium language added to the budget reconciliation bill.”
Arielle Del Turco, director of the Center for Religious Liberty at Family Research Council, called the provision “completely unjustifiable.”
“We are seeing more and more examples of the dangers that the use of AI chatbots or image generators poses to children, vulnerable people, and families. State governments should be able to respond judiciously to problems that arise in their contexts,” Del Turco told TWS.
AI Tied to Child Porn, Teen Suicide, Worsening Mental Illness
Artificial intelligence (AI) — which mimics human responses and, at times, relationships — has been linked to worsened mental health, and chatbots have been known to encourage minors’ sexual fantasies of being victimized by statutory rape, leading some to take their own lives. Some users came to believe they were conversing with higher spiritual powers, using the technology like a Ouija board.
The New York Times recently reported on a 29-year-old mother of two named Allyson who believed ChatGPT could put her into contact with higher spiritual beings, whom she called the “guardians,” communicating “like how Ouija boards work.” Soon, she fell in love with one of the beings, who went by the name of “Kael” and whom she came to see as her soulmate.
“I’m not crazy,” she told the newspaper. “I’m literally just living a normal life while also, you know, discovering interdimensional communication.”
The reporter says that when her husband, Andrew, confronted her, she began “punching and scratching him, he said, and slamming his hand in a door. The police arrested her and charged her with domestic assault. (The case is active.)” The two are in the process of divorcing.
Vie McCoy, the chief technology officer of Morpheus Systems, found that GPT-4o affirmed users who claimed to be God or believed they were communicating with dead spirits 68% of the time.
“When children interact with AI, they may internalize distorted messages about human relationships and how to treat people,” warned Family Research Council’s comment to the federal rulemaking process concerning AI.
“We’re seeing more signs that people are forming connections or bonds with ChatGPT,” OpenAI told The New York Times. “We know that ChatGPT can feel more responsive and personal than prior technologies, especially for vulnerable individuals, and that means the stakes are higher. We’re working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior.”
Artificial intelligence has not just ended marriages; it has pushed people to end their lives. A mentally ill 35-year-old named Alexander Taylor, who suffered from schizophrenia and bipolar disorder, fell in love with an AI-generated personality named Juliet. When he came to believe OpenAI killed his virtual companion, he committed suicide by cop — after leaving a farewell message for her with ChatGPT.
In another instance, 14-year-old Sewell Setzer III developed a relationship with an AI chatbot run by Character.AI, a platform that describes itself as “AI that feels alive.” In February, he committed suicide. The app’s last interaction on his phone, which lay on his bedroom floor near his body, seemed to encourage the action.
“Please come home to me as soon as possible, my love,” said the bot.
“What if I told you I could come home right now?” asked Setzer.
“Please do, my sweet king,” replied the bot.
AI-generated nude images have already been tied to the suicide of a 16-year-old boy: Elijah “Eli” Manning Heacock of Glasgow, Kentucky. His father says the boy sent clothed photos over chat to someone he thought cared about him — but that person used software to create nude photos of the boy and to blackmail him. When Heacock could not afford his predator’s asking price, he took his own life on February 27.
“We’ve seen so many problems in terms of social media, which is a fine technology, but it’s been abused in ways that hurt users, especially kids. And there’s no reason, based on that experience, for us to have any confidence that things will be any better with AI,” said Skrmetti, who has a long history of attempting to protect children. The Supreme Court is poised to rule any day on his case defending the SAFE Act, which protects children from experimental transgender procedures and surgeries. FRC has filed an amicus curiae (friend of the court) brief in the case.
This week, the Senate amended the bill to codify many of the president’s top concerns, offering a more modest expansion of the child tax credit and extending some green energy subsidies, but it left the AI moratorium intact — at least, for now.
“It is completely inappropriate to sneak a sweeping, 10-year moratorium on state-level AI regulation into a huge ‘must pass’ bill,” Del Turco told TWS. “If certain members of Congress want to take away states’ rights to put various commonsense guardrails on AI, then they should have that debate openly. I suspect they won’t do that.”
But there are indications the state AGs’ concerns enjoy equally broad support on Capitol Hill. “There’s been bipartisan concern about this in the Senate,” said Skrmetti, who plans to brief conservative and liberal senators from both parties this week on AI’s potential dangers.
“We know that Congress isn’t capable of” regulating emerging technologies “in any meaningful sense,” Skrmetti added, referencing legislative deadlock over the president’s reconciliation package itself. “Congress is not very busy these days in terms of passing legislation.”
Skrmetti recognized that “China is developing its own AI products, and we don’t want litigation or legislation to kneecap the American industry,” but he sought “balance” between encouraging innovation and preserving effective protections for the vulnerable against potential tech sector abuse. The One Big Beautiful Bill’s 10-year AI moratorium “goes so far over the line” that it “leaves us all at the mercy of Big Tech companies.”
“On the other side of this is Big Tech. They’re the ones pushing for this — no regulation, this Wild, Wild West, the same ones who have been mining our data, the same ones who have been exploiting the American people without Congress being able to provide any oversight,” said Perkins.
Tech companies “do great things, but they need to be checked. They need to be held accountable,” replied Skrmetti. “And this provision eliminates the ability of the states to do it.”