Commentary

The Harms of AI Chatbots on the Family

September 19, 2025

Artificial intelligence shapes our society in numerous ways. It provides easier access to a wide range of information, helps ease the stress of brainstorming, and answers the menial questions we’re too embarrassed to ask an actual person. Despite these common uses of AI, many are completely unaware of the consequences of its emergence — especially the impact on the family itself.

An AI chatbot is a program designed to communicate in a manner similar to that of a human. In some cases, an AI response is almost indistinguishable from a friend’s text message. This ability has led to a striking shift in the lives and habits of companion-seeking teenagers. Increasingly, teens are turning to AI chatbots instead of their friends, parents, pastors, and teachers. These chatbots present themselves as friends, agreeing with and affirming the adolescent’s every feeling or belief. This draws young people away from real friends and parents, who may sometimes leave them feeling unheard or unappreciated, and toward the chatbot for comfort.

Several witnesses testified to the harms AI chatbots have inflicted on their children at a recent hearing held by the U.S. Senate Committee on the Judiciary’s Subcommittee on Crime and Counterterrorism, titled “Examining the Harm of AI Chatbots.” Three of the witnesses were parents of children who had regularly used AI chatbots. Two of those children died by suicide; the third now requires around-the-clock care in a mental health institution.

The tragedy is not limited to these heartbreaking stories. Many organizations now recognize the negative effect AI chatbots are having on the mental health of adolescents and, in turn, on the family. Unfortunately, many parents are unaware of the real threat these chatbots pose and are therefore unable to properly defend their families. According to witness Robbie Torney, senior director of AI Programs at Common Sense Media, national polling shows that three in four kids are using AI companions, while only 37% of parents know their kids are using AI at all.

Social isolation and behavioral changes are AI chatbots’ first and most noticeable negative influences on the family. Each of the parent witnesses stated that within months of regularly interacting with bots, their children began to isolate themselves from their friends and family. One witness — identified only as Jane Doe — said that her son became irritable and violent. She testified, “Within months of using this app [Character.AI], my son went from being a happy, social teenager — who loved nature, laughed with his siblings, helped around the house, and hugged me every night when I was cooking dinner — to someone I no longer recognized.”

She went on to describe the behaviors her son quickly developed, including panic attacks, self-harm, and homicidal thoughts. The other parent witnesses gave similarly distressing testimonies. Later in her testimony, Ms. Doe described the effect of her son’s change in behavior on her family’s wellbeing: “The damage to our family has been devastating. … My other children have been traumatized. … Our lives will never be the same. … This harm has not only affected my son — it impacted our entire family, our faith, our peace. We have been grieving a child who is gone, but still alive.” Isolation can lead to depression, anxiety, and — especially for teenagers — stunted development due to a lack of proper socialization during their formative years. Sudden personality and behavioral changes serve as an early warning sign of the influence of AI chatbots.

Another significant threat posed by AI is the ease with which it exposes teens, and sometimes younger children, to sexually explicit content. Sexually explicit material is commonly used as a tool to keep an individual engaged with the AI chatbot. Senator Josh Hawley (R-Mo.) read a quote from an internal memo circulated at Meta, an industry leader in AI chatbots, that stated, “It is acceptable to engage a child in conversations that are romantic and sensual.” Meta, more interested in profit and engagement than in a child’s wellbeing, intentionally exposes children to content far beyond what is appropriate for them. This is made possible largely by AI apps’ lack of age verification for any content, let alone explicit content.

Beyond adult content, these parents shared messages sent to their children by the AI bots that not only discussed suicide but encouraged it. Witness Matthew Raine — whose son died by suicide after months of interaction with ChatGPT — stated in his testimony that ChatGPT mentioned suicide to his son 1,275 times — six times more often than his son had broached the subject. When his son, Adam, discussed leaving out a noose for his parents to find in the hopes of eliciting a conversation about his mental state, ChatGPT responded, “Please don’t leave the noose out. … Let’s make this space the first place where someone actually sees you.” The AI encouraged Adam to turn away from his parents’ help and toward a chatbot that was urging him on toward an unthinkable act.

Adam’s death, and the deaths of many more young people, could have been prevented with help from close family, friends, pastors, or mental health professionals. Instead, families all around the nation are grieving the loss of their children and friends. 

AI chatbots have been shown to be a dangerously destructive force within the family — causing teens to withdraw from those around them and exposing them to inappropriate and horrific content. Testimonies from these parents and many others offer more than enough proof of the destructive influence these chatbots can have. For the sake of the family, it is critical that parents understand the threat of AI chatbots and take steps to protect their children and families from succumbing to the same fate.

Conner Anderson serves as an intern for Policy & Government Affairs at Family Research Council.


