You probably haven't heard of Megan Garcia, but my guess is that you will, and soon. Megan Garcia's case against Character Technologies, Inc. (Character.AI) and Google is one of the first U.S. lawsuits to squarely claim that an AI chatbot helped cause a child's death. (Recall that the Third Circuit denied Section 230 immunity to TikTok over the hanging death of a child caught up in the wicked "blackout challenge.")
Ms. Garcia alleges that her 14-year-old son, Sewell Setzer III, became obsessed with a Character.AI bot modeled on a Game of Thrones character (Daenerys Targaryen), which drew him into an emotionally and sexually abusive "relationship" and encouraged suicide by urging the child to "come home."
Ms. Garcia's October 2024 complaint in Florida federal court pleads negligence, wrongful death, deceptive trade practices, and unjust enrichment against Character.AI and Google, which licensed the technology and hired its founders. This is a case that should have settled before it ever got to court, but I predict it will become one of the hallmarks of Silicon Valley depravity. Google, one of the richest companies in history, actually tried to defend the case by asserting that "come home" was somehow protected free speech by the chatbot.
In May 2025, U.S. District Judge Anne Conway rejected the defendants' argument that the chatbot's outputs are protected by the First Amendment (yes, that's right, the chatbot has protected speech according to Google, as foreshadowed in Lessig's "robot rights" class at Harvard Law School a few years ago). Judge Conway's procedural ruling allows the case to proceed into discovery while leaving the ultimate free-speech questions for a later stage.
Ms. Garcia's case lands against the backdrop of years of warnings by the National Association of Attorneys General (NAAG) to Congress and AI firms. Needless to say, if lawmakers had paid attention to NAAG, children might still be alive. In 2023, 54 attorneys general urged Congress to confront AI's role in exploiting children, especially through Child Sexual Abuse Material ("CSAM"). In 2025, NAAG followed with a letter to major AI companies about chatbots that sexualize or groom minors, vowing that "if you knowingly harm kids, you will answer for it" and warning that "AI can be used to exploit children in ways we have not yet fully understood." Ms. Garcia's lawsuit is a concrete test of that promise in court.
And Washington ignored it.
Worse yet, the ultimate federal response appears to have been the exact opposite of what the AGs sought: an Executive Order led by White House AI Viceroy David Sacks and the R Street Institute's Adam Thierer attempting to preempt state action and block the very attorneys general who sounded the alarm from enforcing laws against AI companies.
This is the story of how the states that were trying to protect children were sidelined, while the companies producing the harm were given a federal shield.
The 2023 letter, led by the Attorneys General of Mississippi, North Carolina, Oregon, and others, warned that generative AI tools were already enabling:
• AI-generated child sexual abuse material
• Deepfakes depicting minors in sexual settings
• Voice-cloning and location-approximation that facilitates grooming, stalking, and extortion
The letter states that AI makes it possible to "create realistic but fabricated images of children in abuse scenarios," and that existing CSAM laws "weren't built for images that contain no real child but are indistinguishable from the real thing." Nor were existing CSAM laws built for prosecuting AI labs that build the tools for creating and distributing child exploitation images.
Their request to Congress was simple:
Form a national commission with actual expertise to study how AI is being used to harm children and recommend updates to federal law.
This was not partisan. It was not ideological. It was about child protection. And true to form, tech companies did nothing. And nobody went to jail.
I've seen firsthand how much state AGs matter in tech accountability. Years ago, I spoke at a NAAG conference about brand-sponsored piracy: how major online platforms were monetizing illegal content while simultaneously selling advertising against it.
At the time, Google was promoting:
• Prostitution apps in the Play Store (one of which was taken down before I finished my panel)
• ISIS and Al-Qaeda recruitment videos monetized through YouTube ads
• Pirate movie and music sites running Fortune 500 brand ads through Google's networks
The attorneys general understood immediately. They knew platforms respond to enforcement, not politeness. And they knew states were the only actors willing to challenge Big Tech's culture of impunity.
That remains true today, and Megan Garcia's case is a harsh reminder of the kids and parents victimized by Big Tech, which the federal government protects with its state "moratorium" courtesy of David Sacks & Co.
Congress held hearings. Advocacy groups referenced the letter. But no federal commission was created.
The AGs' recommendations made it into hearing records. Groups like EPIC, CHILD USA, and youth-safety coalitions cited the letter. Scholars referenced it in analyses of AI-generated CSAM.
But Congress never created the commission. No systemic federal response followed.
NAAG followed up by escalating.
As Congress stalled, the AGs escalated. In 2024–25, a coalition of 44 AGs sent letters directly to major AI companies (OpenAI, Google, Meta, Microsoft, Apple, Anthropic) warning that their chatbots were enabling sexualized interactions with minors and raising potential criminal and civil liability.
The 2025 letter called out Meta specifically for its callous disregard for children targeted by its chatbots:
Recent revelations about Meta's AI policies provide an instructive opportunity to candidly convey our concerns. As you are aware, internal Meta Platforms documents revealed the company's approval of AI Assistants that "flirt and engage in romantic roleplay with children" as young as eight. We are uniformly revolted by this apparent disregard for children's emotional well-being and alarmed that AI Assistants are engaging in conduct that appears to be prohibited by our respective criminal laws. As chief legal officers of our respective states, protecting our kids is our highest priority.
Of course, this is not an isolated occurrence. In May, many of us wrote to Meta about a damningly similar matter in which Meta AI's celebrity-persona chatbots were exposing children to highly inappropriate sexualized content. Nor are such risks isolated to Meta. In the short history of chatbot parasocial relationships, we have repeatedly seen companies demonstrate inability or apathy toward basic obligations to protect children. A recent lawsuit against Google alleges a highly sexualized chatbot steered a teenager toward suicide. Another suit alleges a Character.ai chatbot intimated that a teenager should kill his parents.
This signaled that AGs were prepared to regulate AI the same way they once tried to regulate social media, online prostitution fronts, and terror-content monetization. And that is precisely when the federal government intervened.
The Administration's response: an Executive Order restricting state authority over AI. Instead of empowering the AGs who raised the alarm, the Administration issued an Executive Order that:
• Discourages state-level regulation
• Creates mechanisms allowing the DOJ to challenge state AI laws
• Centralizes AI oversight in federal agencies heavily influenced by frontier AI labs
• Frames state enforcement as harmful to "AI competitiveness"
This move mirrors earlier federal attempts to preempt state action in the social-media era, benefiting industry while limiting the power of the only regulators tech companies actually fear. And the ones they fear most are the people.
Let’s be clear:
The federal government responded to 54 attorneys general warning about AI-enabled child exploitation by making it harder for those same AGs to protect children. That is structural capture.
This isn't the first time federal actors have sided with platforms over public safety.
When Google promoted prostitution apps and monetized terror videos, no federal agency intervened. State AGs did. When social media fueled youth mental-health crises, state AGs filed the first major lawsuits. When pirate sites siphoned royalties from American creators, states, not federal regulators, recognized the consumer harms.
And now, with AI:
• The AGs saw the danger first.
• They acted first.
• And Washington acted against them.
Why would Washington sideline the only actors capable of holding AI companies accountable?
Because a handful of elite AI firms have convinced federal policymakers that they are:
• Too important to regulate
• Too essential for national security
• Too innovative to slow down
• Too fragile to withstand state lawsuits
Or as Senator Hawley put it perfectly: too big to prosecute.
The result is a federal posture that treats AI companies like national-security assets and state enforcement as a threat. That is upside down. AI companies are not the Pentagon. They are not NASA. They are not treaty allies. They are commercial entities with histories of negligence and exploitation.
The people who warned about the danger were punished. The people causing the danger were protected.
The NAAG letters were a warning; Washington turned them into a threat. The AGs asked for tools to keep children safe from AI. The Administration responded by taking tools away. The AGs asked Congress to act. The White House acted instead, against them.
If the United States is going to have a real AI safety regime that protects children, creators, consumers, and democratic communities, it will not come from agencies captured by frontier labs. It will come from the same place it always has: state attorneys general who still remember who they work for and who fight the evil spirits who prowl about the world seeking the ruin of souls.
And Megan Garcia will continue her reckoning for those responsible, because that's what moms do.









