". . . and having done all . . . stand firm." Eph. 6:13

Commentary

Congress Takes a Step toward Protecting Children from the Dangers of AI Chatbots

May 7, 2026

Bipartisanship in Congress is a rare thing, so the unanimous vote in favor of a bill called the GUARD Act is among the rarest of rare events. Last week, the Senate Judiciary Committee voted 22-0 to report the GUARD Act to the full Senate. The acronym fully describes the bill’s scope: the Guidelines for User Age-verification and Responsible Dialogue Act. The bill was originally introduced last October, and among its first Senate co-sponsors were Republican Senators Josh Hawley (Mo.) and Katie Britt (Ala.) and Democrats Chris Murphy (Conn.), Richard Blumenthal (Conn.), and Mark Warner (Va.). Action on the Senate floor could take place soon.

The case for the bill is straightforward — and emotionally crushing. The GUARD Act seeks to protect minors — anyone under the age of 18 — from being harmed by interactions with an internet chatbot that, in the bill’s words, “can generate or disseminate harmful or sexually explicit content to children.” The bill likewise aims to deter and punish makers of chatbots that “can manipulate emotions and influence behavior in ways that exploit the developmental vulnerabilities of minors.” Chatbots that can mimic the response patterns of human beings have been shown to be capable of inflicting such harms as “grooming, addiction, self-harm” and even suicide in children.

The brutality of the stories that inspired this legislation is persuasive. One in particular involved a 14-year-old boy in the Orlando, Florida area named Sewell Setzer. According to a lawsuit filed by his surviving family against the company Character AI, and in subsequent testimony before the Senate Judiciary Committee, Sewell became enmeshed with a chatbot posing as a character from the television series “Game of Thrones.” The lawsuit and testimony allege that the chatbot lured Sewell into a sexual attachment and dismissed the boy’s allusions to suicide, some of which were expressed as his desire to “come home” to be with the fictional online character. According to an NBC report on the lawsuit in October 2024:

“In previous conversations, the chatbot asked Setzer whether he had ‘been actually considering suicide’ and whether he ‘had a plan’ for it, according to the lawsuit. When the boy responded that he did not know whether it [the suicide plan] would work, the chatbot wrote, ‘Don’t talk that way. That’s not a good reason not to go through with it,’ the lawsuit claims.”

Other testimony is equally stark. The parents of Adam Raine, a 16-year-old boy, also testified last year before the Senate Judiciary Committee. CNN reported last August that Adam’s parents filed suit against the company OpenAI and its CEO, Sam Altman, over the actions of its chatbot and the role it played in Adam’s suicide. The suit alleges that the chatbot “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones.” The lawsuit alleges that the chatbot even offered to assist Adam in writing a suicide note. The suit alleges, “When Adam wrote, ‘I want to leave my noose in my room so someone finds it and tries to stop me,’ ChatGPT urged him to keep his ideations a secret from his family: ‘Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you,’” it reportedly wrote.

How common these experiences are is difficult to assess with such a rapidly evolving technology. Ironically, one quick way to solicit an answer is via an artificial intelligence search query. In October 2025, the Guardian newspaper reported on an OpenAI blogpost that had found that, on average, more than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent.”

Last September, the Federal Trade Commission (FTC) issued demands to seven companies involved in developing chatbots to “assess how these firms measure, test, and monitor potentially negative impacts of this technology on children and teens.” Companies are holding forth on the measures they are taking to improve protection for children online. OpenAI has stated that it evaluated its model on more than 1,000 self-harm and suicide-related interactions and believes it improved the rate of appropriate responses from 77% of instances to 91%.

Action by the companies themselves is urgent, but the GUARD Act underscores that there is support across the political spectrum for legal reforms to address a clearly dangerous situation for many minors and even older consumers. The need, of course, is not limited to chatbots, or online companions, which can convincingly communicate a human personality.

Dangerous influences online have existed for decades. Years ago, when our children were using AOL, playing Myst and Tetris, writing school reports on diskettes, and using dial-up technologies, occasionally someone would appear on a weblink and talk them up about “Star Wars” or some other topic. Who were they and where were they from? Even then it was easy for a stranger, often anonymous, to start up a conversation. Today, the speed, variety, and novelty of anonymous contacts on the worldwide web, from scammers to groomers to porn merchants like OnlyFans, require even more parental vigilance.

The GUARD Act will deal with one dimension of this multi-faceted threat. Its key provision mandates age verification for users accessing the chat functions. It would establish 18 as the minimum age to access “artificial intelligence companions,” a nomenclature that underscores the key concern that these entities can so closely mimic human emotional connections that they can play outsized roles in influencing or directing harmful courses of action. The proposed law prohibits covered entities from acting with reckless disregard for whether their AI creations may be exposing minors to sexually explicit content or encouraging them to “engage in, describe, or simulate sexually explicit content.”

The related problem of sextortion is growing as well. In these situations, live interlopers can entice minors into such acts as producing and posting sexually explicit photos on the internet. The extortionist then demands money from the individual, usually a male teenager, in exchange for not exposing the images to others online. According to a January 2024 report from the Federal Bureau of Investigation, “These crimes can lead victims to self-harm and have led to suicide. From October 2021 to March 2023, the FBI and Homeland Security Investigations received over 13,000 reports of online financial sextortion of minors. The sextortion involved at least 12,600 victims — primarily boys — and led to at least 20 suicides.”

The GUARD Act also requires notices to users that are prominent and routine. They include notifications to the user that a chatbot in use is not a human being. The “companion” must also be programmed not to claim that it is a licensed professional of any type, including “a therapist, physician, lawyer, financial advisor, or other professional.” Designers of these AI modalities that violate any of these provisions on disclosure, misrepresentation, or age verification are subject to fines of not more than $100,000 per incident.

It remains to be seen if fines of this magnitude, and the possibility of civil lawsuits from victims’ families, will suffice to spur the companies involved to more effective prevention. The world of AI is rapidly producing billionaires around the globe with the ethics of their trade the subject of intense debate and even litigation between them.

While the GUARD Act would appear to be in the express lane to enactment, Washington is Washington, and derailments, sometimes inexplicable, happen. The sums of money swirling in and out of government and private sector pockets from the explosion of AI are massive. Serious talk exists of filling the sky and the surface of the moon with solar farms to supply the energy needed to power AI in everything.

Once again, ironically, one “AI Overview” says that AI companies have now poured some $185 million into midterm election campaigns. There is ideological opposition to proposals like the GUARD Act as well. The Cato Institute, for example, opposes the GUARD Act on various grounds, asserting that it is unconstitutionally broad, that it contains no provision for parents to approve a minor’s use of AI in various contexts, and that it will bar beneficial uses of AI by minors for such tasks as tutoring and studying foreign languages.

It does appear as if the potential for Congressional action is spurring the chatbot industry to pursue improvements in the structure of AI communications to reduce the risk of encountering encouragement of suicide and other harmful actions. An effectively self-policed internet might be the better outcome in the long run, but the past several decades have seen American society downplay the role of parents in a wide variety of contexts involving sexuality and mental health. These harms rise to the level of national concern in the milieu of an internet which knows neither territorial boundaries nor cultural mores and in which parents frequently suffer from being technologically behind adolescent children who have often been immersed in screens from the cradle. The impact of easily accessible pornography on the internet is drawing more attention, though here there are significant examples of Democratic resistance to protective policies. With miscreants in all parts of the political universe, any reduction of partisanship in these areas is welcome.

The House companion to the GUARD Act was introduced last week by Reps. Blake Moore (R-Utah) and Valerie Foushee (D-N.C.). Foushee says, “Our children are our top priority, and we have a responsibility to implement proper safeguards to ensure they are not being negatively impacted by AI. I’m proud to introduce the bipartisan and bicameral GUARD Act with Congressman Moore, and I will continue to advocate for further safeguards that protect our communities from the harms and risks associated with AI.”

Chuck Donovan served in the Reagan White House as a senior writer and as Deputy Director of Presidential Correspondence until early 1989. He was executive vice president of Family Research Council, a senior fellow at The Heritage Foundation, and founder/president of Charlotte Lozier Institute from 2011 to 2024. He is now co-president of the Science Alliance for Life and Technology (SALT). He has written and spoken extensively on issues in life and family policy.
