Lawmakers and Families Push for GUARD Act to Regulate AI Chatbots for Minors
Coverage depicts the GUARD Act as a bipartisan, urgently needed measure to ban minors from AI chatbots, require age verification and repeated non-human disclosures, and impose tougher safeguards after alleged harms to children. It centers grieving parents and lawmakers who argue tech companies have prioritized profit over safety and that only strong, enforceable federal rules can adequately protect young users.
Lawmakers in Washington have found a rare point of bipartisan agreement: AI chatbots and kids don’t mix. The fight now is not whether to regulate, but how far Congress is willing to go to lock minors out of conversational AI.
A New Flashpoint: The GUARD Act Arrives
In late April, a previously low-profile bill suddenly became the center of the AI-and-kids debate. The Generative Use & Accountability Regulations for Defense of Minors (GUARD) Act, spearheaded by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), proposes a sweeping clampdown: ban AI chatbots for everyone under 18 and force companies to verify the age of every user.
The bill would require platforms to verify users with a government ID or another “reasonable” method, potentially including biometric checks like face scans, and would mandate that chatbots proactively announce they are not human at regular intervals.1 It would also outlaw chatbots that generate sexual content for minors or promote self-harm, backing those rules with “tough enforcement with criminal and civil penalties,” Blumenthal has said.1
Parents’ Grief Turns Into Political Pressure
The legislative push did not materialize in a vacuum. In the days leading up to a key Senate markup, grieving parents descended on Capitol Hill with a blunt message: tech firms have treated their children as collateral damage.
Families who say chatbots harmed their children argue that “Big Tech deliberately designed their products and platforms to addict, manipulate, exploit, and abuse children and teens.”2 In a letter shared with top Senate Judiciary and Commerce Committee leaders, they urged Congress to back strict safeguards and to elevate the GUARD Act over what they see as watered-down alternatives.2
For these parents, the stakes are personal and immediate. “For us, this issue is not abstract,” they wrote, warning lawmakers not to settle for proposals that “implement the bare minimum safeguards Big Tech” is willing to tolerate.2 Their push reflects broader anxiety about AI’s impact not just on mental health and safety, but also on kids’ “educations and social and critical thinking skills.”2
The First Legislative Moves
The GUARD Act was originally introduced last year, but it largely simmered in the background while policymakers argued in broad strokes about AI safety. That changed when parents and safety advocates appeared at a Senate hearing in early 2026 to highlight what they describe as the real-world consequences of unregulated chatbots.1
Shortly afterward, Hawley and Blumenthal formally rolled out their bill, which would “ban everyone under 18 from accessing AI chatbots” and force companies to verify user ages across the board.1 The measure echoes some state-level efforts: for instance, California has already passed an AI safety law requiring chatbots to avoid pretending to be human.1
On Thursday, the GUARD Act cleared its first major hurdle. The Senate Judiciary Committee unanimously voted to advance the bill, sending it to the Senate floor and signaling that, at least in committee, there is bipartisan consensus on tough age-gating for AI tools.3
What the GUARD Act Would Actually Do
If enacted as written, the GUARD Act would radically redraw the digital landscape for minors:
- Total under-18 ban on chatbots: Kids and teens would be blocked from using AI chatbots altogether — not just from explicit or “adult” versions, but from the tools themselves.1
- Mandatory age verification for everyone: Adults, too, would need to prove their age through government-issued ID or other “reasonable” verification methods, potentially including face scans.
- Regular disclosure of AI identity: Chatbots would be required to explicitly disclose that they are not human every 30 minutes, and would be barred from claiming they are human or licensed professionals.1
- Content safeguards with penalties: It would become illegal to run any chatbot that produces sexual content for minors or that promotes suicide or self-harm, with criminal and civil penalties for companies that allow it.1
Blumenthal has framed the bill as a response to what he calls a track record of betrayal by tech companies: “Big Tech has betrayed any claim that we should trust companies to do the right thing on their own when they consistently put profit first ahead of child safety.”1
A Competing Vision: The CHATBOT Act
Not all lawmakers agree that a total ban for minors is the right move. Just days before the GUARD Act’s markup, Sens. Ted Cruz (R-TX) and Brian Schatz (D-HI) introduced the CHATBOT Act, a narrower proposal aimed at tightening parental control rather than slamming the door on teen use altogether.2
Instead of an outright ban, the CHATBOT Act would:
- Require AI companies to build “family accounts” that let parents govern how their kids use chatbots.
- Add new privacy protections for minors.
- Limit manipulative features designed to keep kids engaged.
- Ban targeted advertising to minors around chatbot use.2
To parents backing the GUARD Act, this approach looks like a half-measure. Without naming the Cruz–Schatz bill directly, they warned Congress against settling for alternatives that amount to “the bare minimum safeguards Big Tech” will accept.2
The Core Clash: Ban vs. Guardrails
The emerging clash is not between regulation and no regulation — both major camps now agree AI chatbots need rules. The fight is over how extreme those rules should be.
GUARD Act backers frame the issue as existential. They argue that the combination of always-on availability, persuasive language, and opaque training data makes chatbots uniquely dangerous for minors, especially around mental health crises and sexual content. In their view, the only truly safe chatbot for kids is no chatbot at all.
Safety advocates and grieving families buttress that view with their own stories of alleged harm and with a broader critique of the tech industry’s incentives. If platforms are optimized for engagement, they argue, then kids will always be pulled toward the most intense, addictive interactions — and current self-policing is clearly not enough.2
Supporters of the narrower CHATBOT Act–style approach don’t deny the risks, but they warn that a blanket federal ban is both impractical and counterproductive. They argue that:
- Teens will still find ways to access AI tools — via VPNs, foreign services, or school and library systems.
- Forcing pervasive age verification could normalize ID checks or biometric scans across huge swaths of the web, with major privacy trade-offs for adults as well as kids.
- AI chatbots can have educational and developmental upsides when properly supervised.
The Cruz–Schatz bill reflects this more incremental model: give parents the tools and legal backing to supervise use, rather than criminalizing access outright.2
What Happens Next
With the Judiciary Committee’s unanimous vote, the GUARD Act is now headed to the full Senate, where the real fault lines will appear.3 The committee vote suggests broad political appetite for being seen as “tough on AI” when it comes to kids. But as privacy advocates, educators, and industry lobbyists weigh in, lawmakers will be pressed to reckon with the bill’s side effects.
Among the unresolved questions:
- How intrusive will age checks be? Government IDs and face scans raise civil liberties alarms. “Reasonable” verification could quickly become a de facto national ID layer for the internet.
- What counts as a “chatbot”? As generative AI gets embedded into search engines, productivity tools, and toys, drawing clear legal lines will be difficult.
- Will an outright ban survive court challenges? Opponents are already teeing up First Amendment arguments that a blanket prohibition on minors accessing a category of speech tool could be unconstitutional.
Meanwhile, the parents who helped propel the issue to the top of the agenda show no signs of backing down. They plan to keep pressing for the toughest version of the bill possible — and to publicly call out any lawmaker they think is giving tech firms an easy escape hatch.2
The GUARD Act’s march to the Senate floor ensures one thing: AI chatbots have officially joined social media, vaping, and video games on the front line of America’s recurring moral panic over youth and technology. This time, the law may move fast enough to matter — but whether it does more good than harm will depend on where Congress decides to draw the line between guarding kids and locking down the future of AI itself.
1. Senators Propose Banning Teens from Using AI Chatbots — Details of the GUARD Act’s age verification, chatbot disclosure, and penalties, including Blumenthal’s quote on Big Tech and child safety.
2. Exclusive: Grieving parents push Congress to crack down on AI chatbots — Parents’ letter accusing Big Tech of designing products to “addict, manipulate, exploit, and abuse children and teens,” and contrasting the GUARD Act with weaker alternatives.
3. Lawmakers advance bill that would age-gate AI chatbots — Report that the Senate Judiciary Committee unanimously advanced the GUARD Act, sending it to the Senate floor.