Two landmark jury decisions this week found social media giants Meta and YouTube responsible for causing harm to children through their platforms. The verdicts in Los Angeles and New Mexico mark the first time juries have sided against Big Tech companies in cases involving youth mental health and safety.

Concerned parents, medical professionals, teachers, and industry insiders have long argued that social media platforms damage young people’s psychological well-being and contribute to addiction, body image issues, predatory behavior, and self-harm.
This week marked a historic turning point as juries in two separate states sided with these concerns for the first time.
A Los Angeles jury on Wednesday ruled that both Meta and YouTube bear responsibility for damages caused to children using their platforms. Meanwhile, in New Mexico, jurors concluded that Meta deliberately harmed young users’ mental health while hiding information about sexual exploitation of minors on its services.
Child advocacy organizations, families, and technology oversight groups celebrated these landmark decisions.
“The era of Big Tech invincibility is over,” declared Sacha Haworth, executive director of The Tech Oversight Project. “After years of gaslighting from companies like Google and Meta, new evidence and testimony have pulled back the curtain and validated the harms young people and parents have been telling the world about for years.”
Although it remains uncertain whether this week’s rulings will force fundamental shifts in how social platforms handle younger users, these twin verdicts indicate a dramatic change in public opinion toward technology companies. This shift will likely spawn additional litigation and regulatory action. The companies have historically maintained that any harm to children represents unintended consequences rather than deliberate design choices, attributing problems to broader social issues or individuals exploiting safety measures. They have consistently challenged research linking psychological damage to social media usage.
During testimony in the Los Angeles case, Meta CEO Mark Zuckerberg was questioned about whether addictive products generate more user engagement. “I’m not sure what to say to that. I don’t think that applies here,” Zuckerberg responded.
These verdicts demonstrate the public’s increasing readiness to demand accountability from these corporations and push for substantial operational changes. However, whether the companies will respond meaningfully remains unclear. Both Meta and Google have announced disagreement with the rulings and are considering legal challenges, including appeals.
Arturo Béjar, a former Meta engineering director who spent years warning internally about Instagram’s dangers before testifying to Congress in 2023, believes jury trials “level the playing field” against these trillion-dollar corporations. He emphasized, however, that actual regulatory intervention will be necessary to rein them in.
“One thing that I saw working inside the company that effectively led to behavior change was when an attorney general or the FTC stepped in and required things of the company,” he explained. “Both New Mexico and Los Angeles and all the attorneys general that are part of this process have really an extraordinary opportunity and the ability to ask for meaningful change.”
Though both cases centered on child safety, they differ significantly in approach. New Mexico’s lawsuit, filed by state Attorney General Raúl Torrez in 2023, involved state investigators creating fake child profiles on social media to document sexual solicitations and Meta’s responses. Jurors were asked to decide whether Meta had violated New Mexico’s consumer protection laws.
The Los Angeles case involved a single plaintiff, identified as KGM, suing Meta, Google’s YouTube, TikTok and Snap. TikTok and Snap reached settlements before trial. The plaintiff argued that Meta and YouTube deliberately designed addictive features targeting young users. Since thousands of families have filed similar claims, KGM and several other plaintiffs serve as bellwether cases—test trials that will guide broader settlements similar to those seen in Big Tobacco and opioid litigation.
By concentrating on intentional design decisions and product liability, these lawsuits avoided Section 230 protections, which typically shield internet companies from responsibility for user-generated content. Previous lawsuits focusing on content distribution often failed due to these protections.
“For the first time, courts have held social media platforms accountable for how their product design can harm users,” explained Nikolas Guggenberger, an assistant professor of law at the University of Houston Law Center. “This is a new legal territory that could reshape an industry long shielded by Section 230. Platforms will have to rethink their focus on engagement at any cost, which has outlived itself.”
While final resolutions may take years through appeals and settlement negotiations, experts note that public perception of social media dangers is already shifting. A 2025 Pew Research Center survey found 48% of teenagers believe social media harms their age group, compared to just 32% in 2022.
As social media faces increased scrutiny, artificial intelligence chatbots represent the next battleground in making technology safer for young people.
“You can ban today’s harm, but how do you know what tomorrow is going to bring?” asked Sarah Kreps, a professor and director of Cornell University’s Tech Policy Institute. She noted that whether it involves new social media applications, AI, or other emerging technologies, innovation will continue.
“And people will flock to those because where there’s demand you will see a supply come to meet that demand,” she added.