America’s Teens Are Turning to AI for Homework. That Should Worry All of Us.
When Pew Research Center published its February 2026 study on how American teenagers use artificial intelligence (AI), the findings landed with the quiet weight of something long suspected but not yet measured. Most American teenagers now use AI chatbots — and more than half report using them for schoolwork. One in 10 say AI handles “all or most” of their assignments. Nearly a quarter lean on it for at least some of their work.
Read those numbers again. Most teens see no issue with this.
Fifty-seven percent use chatbots to find information; 54% use them for homework help. Many turn to these tools to research topics, solve math problems, or polish their writing. A smaller but notable share goes further, using chatbots for casual conversation or even emotional guidance. What strikes me about the Pew data is not the high numbers alone — it’s the absence of concern among the students themselves. They view these tools as helpful, even advantageous for the road ahead. When a generation cannot recognize a problem, the problem is already well established.
In “AI for Mankind’s Future,” I argue that artificial intelligence can serve education powerfully — but only when students remain in the driver’s seat. AI can reduce cognitive load, assist with research, and help personalize instruction. The danger arrives when it becomes the default, when a student reaches for the chatbot before thinking the problem through. Every time that happens, the mental muscle required for independent judgment grows a little weaker. Educators call this metacognition, the ability to think about your own thinking, and it is precisely what homework is designed to build.
Homework is not busywork. Writing essays sharpens thinking. Solving math problems builds logical discipline. Researching a topic teaches discernment — weighing sources, evaluating evidence, and forming your own conclusions. When an algorithm performs those functions on a student’s behalf, we are not saving time; we are skipping training. We may be producing a generation that holds credentials without having developed the judgment those credentials are supposed to represent.
That gap matters far beyond any classroom. In my forthcoming book, “The New AI Cold War,” I make the case that national strength in the AI era will not be determined by chips or data centers alone — it will be determined by human capital. The nation that develops the most adaptive, disciplined, and ethically grounded people will hold the strategic advantage. From that perspective, education policy is national security policy. An American workforce that cannot think independently when systems fail — or when the information it receives has been manipulated — is not just less competitive; it is a strategic vulnerability.
The personal dimension of the Pew data may be the most unsettling. When teenagers turn to AI for emotional guidance or moral conversation, something important is being displaced. Adolescence is when young people develop their sense of who to trust, how to reason about right and wrong, and what they believe. AI systems, however sophisticated, do not mentor. They do not model character or share hard-won experience. They produce responses based on data without a consistent moral framework. A teenager who treats a chatbot as a confidant is not just getting poor advice — she is being quietly formed by a system designed by people who may not share her values or her faith. Artificial intelligence is not neutral. It carries assumptions embedded within its design. Young users unknowingly take in those assumptions.
None of this argues for banning these tools. It argues for adults taking the wheel.
Parents need to know which AI tools their children are using and establish a clear household principle: wrestle with the problem first, use AI for feedback afterward, and always disclose its use. Emotional and moral questions remain with people who know and love the child. Regular family conversations about how AI was used, whether it supported genuine learning or simply replaced effort, can build awareness before quiet dependency takes hold.
Schools need to move toward assessments that cannot simply be outsourced — in-class writing, oral explanations of student work, process-based grading that shows how a student arrived at an answer, not just the answer itself. Clear disclosure requirements reinforce the integrity that makes an education worth having. Teachers need training as much as students do; as I argue in “AI for Mankind’s Future,” whether AI narrows or widens inequality will depend largely on whether under-resourced schools receive adequate support or are simply left to absorb the consequences.
At the policy level, Washington should treat AI literacy as a genuine national priority — with privacy protections for minors, transparency requirements for platforms targeting young people, and investment in educator training. A generation conditioned to accept AI-generated summaries without scrutiny will also prove more susceptible to synthetic media and strategic manipulation, which makes civic resilience not a soft concern but a hard one.
Civilizations have never declined from a shortage of tools. They have declined when they stopped demanding that their people learn to think.
The question before us is not whether our children will grow up alongside AI — they already are. The question is whether we intend to raise thinkers or whether we will drift into raising people who have simply learned to prompt.
Robert Maginnis is a retired U.S. Army lieutenant colonel, senior fellow for National Security at Family Research Council, and the author of 14 books. His latest, “The New AI Cold War,” releases in April 2026.