What's the Difference Between Handing a 6-Year-Old YouTube and Putting a Cigarette in a Child's Mouth in the '90s?
Summary
The first U.S. social media addiction verdict labels algorithm design a defective product, fueling 2,000+ lawsuits and global regulation.
Key Points
Algorithm Design Itself Recognized as a 'Defective Product'
On March 25, 2026, a jury in the Los Angeles County Superior Court made legal history by recognizing — for the first time ever — not the harmfulness of social media content, but the harmfulness of algorithmic design itself. The plaintiff, KGM (a 20-year-old woman), testified that social media use since childhood had worsened her depression, suicidal ideation, and body dysmorphic disorder. Responsibility was apportioned at 70% to Meta and 30% to YouTube, with compensatory damages of $3 million and punitive damages of $2.1 million against Meta and $900,000 against YouTube, totaling $6 million.
The core legal doctrine applied was product design liability: infinite scroll, autoplay, push notifications, variable-ratio reward systems (likes, comments, followers), and algorithmic recommendations were all found to have been intentionally designed to maximize engagement. This opened a new legal pathway that circumvents the immunity clause of Section 230 of the Communications Decency Act. For over two decades, tech companies had maintained their shield with the logic that 'platforms are not publishers of content,' and this verdict represents the first decisive crack in that structure.
In future lawsuits, plaintiffs can now wield this precedent to subject platform design decisions themselves to legal scrutiny. Much like Cipollone v. Liggett Group, in which a 1988 jury verdict of $400,000 first assigned liability to a tobacco company (the award was later overturned on appeal, but the Supreme Court's 1992 ruling in the case preserved plaintiffs' ability to bring state-law claims), and which ultimately paved the way to the $206 billion Master Settlement Agreement, this $6 million is the first drop of water that opens the floodgates. The ripple effect of this precedent lies not in the dollar amount but in the legal paradigm shift, and courts across the United States will look to it as a reference point.
The Big Tobacco Analogy Is an Understatement — The Risk Extends to Identity Formation
The structural parallels with 1990s tobacco litigation are strikingly precise. Internal document disclosures, marketing targeted at minors, evidence of continued neglect even after addiction was recognized internally — the patterns are nearly identical. But where tobacco hijacked the brain's pleasure circuits through nicotine, social media algorithms distort the very core of adolescent identity formation: social comparison, sense of belonging, and self-worth.
When you quit smoking, you can return to who you were before. But for a generation whose identity was formed on social media, there may be no 'original self' to return to. Meta's own internal research, leaked by Frances Haugen to the SEC and the Wall Street Journal in 2021 (from an internal slide deck dated March 2020), found that 32% of teenage girls who used Instagram felt worse about their bodies, 13.5% of teenage girls in the UK experienced more frequent suicidal thoughts, and 17% reported worsened eating disorders.
In June 2024, U.S. Surgeon General Vivek Murthy officially called for tobacco-style warning labels on social media, citing research showing that adolescents who spend more than three hours per day on social media face double the risk of mental health problems. Just as Big Tobacco's 'Joe Camel' campaign made smoking look glamorous to children, Instagram's filter-and-retouching culture has internalized unrealistic beauty standards in young minds. This is why the Big Tobacco analogy is not a rhetorical flourish — if anything, it understates the danger that social media poses.
A Global Wave of Child Social Media Regulation
After Australia became the first country in the world to enforce a ban on social media for children under 16 on December 10, 2025, social media companies deleted approximately 4.7 million underage accounts within the first month. Sixty-one percent of parents reported positive effects, with 43% observing increased face-to-face social interaction and 38% reporting improved parent-child relationships.
Indonesia followed on March 28, 2026, becoming the first Southeast Asian nation to ban accounts for children under 16 on high-risk platforms including YouTube, TikTok, Facebook, and Instagram. UNICEF data indicates that approximately 50% of Indonesian children have been exposed to sexual content on social media. Brazil enacted the ECA Digital (Law 15,211/2025), effective March 17, 2026, which takes a fundamentally different approach by mandating the deactivation of 'addiction-inducing features' — infinite scroll, autoplay, notification systems — on all accounts held by minors.
In the United States, the Senate passed the Kids Online Safety Act (KOSA) by an overwhelming bipartisan margin, and the bill is now advancing through the House Energy and Commerce Committee. France, the United Kingdom, Malaysia, Germany, Italy, Greece, and Spain are all examining similar restrictions for those under 16. Brazil's approach stands out as the most progressive model, regulating features rather than age. With multiple countries simultaneously experimenting with different regulatory frameworks, comparative data on which approach actually works should accumulate within one to two years. That data will serve as foundational evidence for future global regulatory standards. What we are witnessing is not a handful of isolated experiments but the formation of a global consensus.
The 2026 World Happiness Report Reveals a Youth Well-Being Crisis
According to the 2026 World Happiness Report, published by the UN Sustainable Development Solutions Network (SDSN) and the Oxford Wellbeing Research Centre, adolescents spend an average of 2.5 hours per day on social media, and passive scrolling through algorithmically curated content — particularly influencer-centric material — showed a strong correlation with declining life satisfaction.
The decline is most pronounced in English-speaking countries (the United States, Canada, Australia, and New Zealand), where life evaluations for those under 25 have dropped by approximately one full point on a 0-to-10 scale over the past decade. Western Europe shows the same downward trajectory. In contrast, across the eight global regions that account for roughly 90% of the world's population, well-being among the youngest cohort has actually risen compared to 2006-2010 levels, suggesting that the impact of social media varies significantly by cultural and economic context.
The effects are particularly harmful to adolescent girls. The report found that raising school belonging from low to high levels boosted life satisfaction four times more in the UK and Ireland (and six times more in a 47-country global sample) than reducing social media use from high to low levels. The paradox that children in the world's wealthiest and most educated nations are among the unhappiest suggests that social media is creating a mental health crisis that material prosperity alone cannot resolve. The timing of this report's release, coinciding with the social media liability verdict, is amplifying political will for regulation like never before. Parental anger is translating into voter anger, and that is tipping the political calculus firmly toward 'pro-regulation.' Correlation does not prove causation, but in the face of data at this scale, arguing that 'social media is innocent' is becoming increasingly untenable.
2,000+ Follow-On Lawsuits and the Pressure to Transform Big Tech's Business Model
As of March 2026, the total number of youth social media addiction lawsuits consolidated in the multidistrict litigation (MDL 3047, under Judge Yvonne Gonzalez Rogers in the Northern District of California) has reached 2,407 cases. This MDL names Meta (Facebook and Instagram), Alphabet (YouTube), Snap (Snapchat), and ByteDance (TikTok) as defendants, featuring an unusual structure that combines government plaintiffs — 33 state attorneys general, local governments, and school districts — with individual victim plaintiffs.
The catalyst was the October 2023 lawsuit that 33 state attorneys general filed against Meta in federal court. Just as the 1998 Tobacco Master Settlement Agreement saw 46 states participate in a $206 billion deal, the social media litigation wave is highly likely to evolve from individual cases into a collective settlement. This lawsuit wave is pushing for more than monetary damages; it is pressuring a fundamental shift in platform business models. The current revenue formula of 'engagement time multiplied by ad impressions' is under pressure to transition toward 'safe engagement multiplied by premium subscriptions.'
Each lawsuit demands internal document disclosure through discovery, and each round of discovery feeds new internal evidence to the press, delivering cumulative blows to corporate reputation and stock price. Big Tech's lobbying muscle is formidable — in 2025 alone, major tech companies spent a record-breaking $100 million-plus on federal lobbying, with Meta alone spending $26.29 million to become the single largest lobbying spender across all industries. But the sheer physical scale of the litigation and the direction of public opinion have already grown too large to be contained by lobbying alone.
Positive & Negative Analysis
Positive Aspects
- Establishing the Legal Principle That Algorithms Can Be Defective Products
This verdict has opened an entirely new legal pathway that fundamentally circumvents the platform immunity logic of Section 230 of the Communications Decency Act. The shield of 'content is created by users, not by us' — which protected Big Tech for over two decades — has been pierced for the first time by the argument that 'the design itself is defective.' This precedent hands future plaintiffs in thousands of lawsuits a powerful weapon, representing a seismic legal crack with the potential to reshape the entire landscape of tech regulation. Algorithm designers can now, for the first time, face legal accountability for their design decisions — a precedent that carries jurisprudential significance as a digital extension of product liability law. If this ruling survives the appellate process, it will herald an era in which software design must meet the same safety standards applied to physical products.
- World Happiness Report Data Supercharges the Political Momentum for Regulation
The shocking finding that youth happiness in Western nations ranks between 122nd and 133rd out of 136 countries has landed at a timing that dovetails perfectly with the verdict. The anxieties that parents in developed countries have long felt are now validated by hard data, and political will for social media regulation has never been stronger. Parental anger is converting directly into voter pressure, tipping the political calculus toward pro-regulation positions. With the 2026 U.S. midterm elections on the horizon, 'children's online safety' is one of those rare issues that commands bipartisan support. The gender disparity data — showing that adolescent girls are disproportionately harmed — is galvanizing strong reactions from female voters, making this an electoral-strategic factor that no politician can afford to ignore.
- Simultaneous Global Experiments Across Multiple Regulatory Models
Australia's age-based ban, Brazil's feature-based ban, and the EU's fine-based deterrence approach mean that the world is simultaneously running multiple regulatory experiments. This is a historically rare opportunity: within one to two years, the comparative data needed to evaluate each approach's effectiveness will have accumulated. Policymakers will gain evidence-based answers to the question of 'which regulation actually works.' The ability to discover the optimal regulatory mix while avoiding the risk of going all-in on a single methodology is an extraordinarily valuable situation from a policy-science perspective. If Brazil's feature-based approach succeeds in particular, it could offer a fundamental solution to the problem of age-restriction circumvention.
- Creating Incentives for Voluntary Platform-Led Child Protection Measures
Now that legal risk has materialized into reality, companies are running the numbers and discovering that 'proactive self-regulation' is cheaper than 'forced change by court order.' Meta has already been tightening restrictions on teen accounts since 2024, and as litigation pressure mounts, the pace of change can only accelerate. By 2027, both Instagram and YouTube are likely to adopt 'well-being-oriented' recommendation algorithm modes as default settings for users under 18. This is not altruism — it is legal risk management: companies need to be able to say in court, 'We have already made improvements.' In TikTok's case, with its very survival in the U.S. market hanging in the balance, it may actually show the most aggressive changes on child protection.
- Triggering an Ethical Transformation Across the Entire UX Design Industry
Once legal liability for addictive algorithmic design is established, the ripple effects extend far beyond social media. Gacha systems in mobile games, infinite autoplay on streaming services, countdown timers and social proof displays in e-commerce — every dark pattern that exploits users' psychological vulnerabilities will carry legal risk. The catalyst has been set for a paradigm shift in design philosophy from 'maximize user engagement' to 'maximize user well-being.' This will trigger sweeping changes that span from UX design curricula to how companies set their KPIs. For the first time, a legal check has been placed on the darker side of 'persuasive design' — the techniques pioneered by alumni of Stanford's Persuasive Technology Lab.
Concerns
- The Risk of Age Restrictions Becoming Performative Politics
Age restrictions are the kind of policy that makes it easy for politicians to tell voters 'we did something,' but their actual enforcement effectiveness is questionable. It is technically trivial for children to circumvent these restrictions through VPNs, borrowing parents' accounts, or simply lying about their age. And turning 16 does not suddenly confer immunity to algorithmic influence. If the success metric becomes 'laws passed' rather than 'measurable improvement in children's well-being,' these measures amount to a stopgap that provides political absolution while leaving structural problems intact. In Australia, although approximately 4.7 million underage accounts were deleted after the ban took effect, actual usage reduction fell short of expectations, and 27% of parents reported that their children migrated to less-regulated alternative platforms — confirming that this concern is already becoming reality.
- The Risk of Lawsuits Becoming a Legal Industry Gold Rush Rather Than Justice
The history of tobacco litigation is instructive. A significant portion of the $206 billion Master Settlement Agreement ended up as attorney fees. The amount that actually reached victim families was less than expected, and a substantial share of the settlement funds was absorbed into state general revenues, spent on purposes entirely unrelated to tobacco harm prevention. The 2,000-plus social media lawsuits risk repeating the same structure. It is worth asking with clear eyes whether state attorneys general are pursuing these cases purely for the sake of children or for political career advancement and state budget supplementation. When the lawsuit itself becomes the objective, the focus can shift to monetary settlements at the expense of meaningful institutional change — and the children end up as an afterthought.
- Regulatory Evasion Pushing Children to More Dangerous Platforms
When children are banned from social media, they may migrate to encrypted messengers like Telegram or lesser-known fringe platforms, exposing themselves to more dangerous content in environments where adult supervision is even harder to maintain. Balloon effects of this kind have already been reported in Australia. On mainstream platforms, there are at least reporting systems, content filters, and account restrictions serving as safety nets — but on unofficial channels, even those minimal protections are absent. The unintended consequences of regulation could make the situation worse than the problem it was designed to solve. This is a historically proven pattern, as the explosion of bootleg liquor industries during the Prohibition era amply demonstrates.
- Global Regulatory Inequality Creating a New Digital Divide
While wealthy nations strengthen regulations to protect their own children, Big Tech may pursue more aggressive growth in Southeast Asia, Africa, and Latin America — regions where regulatory infrastructure is underdeveloped. A 'regulatory inequality' that protects children in rich countries while abandoning those in poor countries could create a new digital divide. If global Big Tech's revenue structure migrates toward markets with looser regulation, then from a planetary perspective, the problem is not being solved — it is merely being relocated. The history of tobacco regulation provides a direct precedent: as developed nations tightened rules, tobacco companies accelerated their push into developing countries. There is a strong risk that social media will follow exactly the same pattern.
- The Risk of Overregulation and Privacy Invasion Under the Banner of Child Protection
'Protect the children' is such a powerful rallying cry that it can provide cover for freedom-of-expression violations, internet censorship, and privacy-invasive measures like large-scale identity verification systems. If age verification requires government-issued identification or biometric data, every internet user's online activity becomes linked to their real identity. In some countries, movements to expand governmental control over the internet under the guise of 'child protection' are already being detected. This could hand authoritarian governments a ready-made pretext for building surveillance infrastructure, and even in democratic societies, it raises serious concerns about civil liberties infringement. Striking a balance between child protection and digital privacy will be one of the most difficult policy dilemmas of this era.
Outlook
Let me start with what will happen in the next few months. The most immediate effect of this verdict is the acceleration of settlement negotiations. From Meta's and YouTube's perspective, fighting 2,000 individual lawsuits in court is simply impossible. Each case brings demands for internal document disclosure — discovery — and every round of discovery leaks another piece of the company's dirty laundry to the press, hammering the stock price. By the second half of 2026, Meta will attempt settlements in at least two or three major cases. My estimate is that individual settlement amounts will land in the $5 million to $20 million range per case, but the real question is whether settlement terms include 'mandatory algorithm changes.' If courts condition settlements not on monetary damages alone but on actual product design modifications, that becomes the true game changer.
Movement will also accelerate at the U.S. federal level. The Kids Online Safety Act (KOSA), already through the Senate and now before the House, and COPPA 2.0 are highly likely to gain momentum from this verdict. With the November 2026 midterm elections approaching, few issues command bipartisan support as easily as 'children's online safety.' Whether Republican or Democrat, bashing Big Tech draws applause from voters. My estimate puts the probability of at least one federal-level child online safety law passing by the end of 2026 at 65% or higher. More than 15 states have already passed or are actively pursuing their own child online protection laws, and without federal legislation, regulatory fragmentation will create confusion for both companies and consumers, a dynamic that keeps building pressure for federal action.
Looking at the medium term — six months to two years out — this is where the structural changes become far more profound. First, class action lawsuits led by state attorneys general will kick into high gear. Attorneys general in major states including California, New York, and Texas are currently preparing lawsuits against Meta and TikTok. This is a fundamentally different beast from individual lawsuits. When state governments become plaintiffs, the scope of discovery expands dramatically, and settlement figures grow exponentially. The decisive turning point in tobacco litigation came in 1994, when Mississippi Attorney General Mike Moore launched the first state-level lawsuit. Just four years later, the Master Settlement Agreement was reached. Applying a similar timeline, large-scale settlement negotiations with Big Tech could begin around 2028.
Simultaneously, enforcement of the European Union's Digital Services Act (DSA) child protection provisions will enter full swing. In October 2025, the European Commission issued preliminary findings that TikTok and Meta (Facebook and Instagram) had violated DSA transparency and user protection obligations, with additional investigations into minors' addiction risk and exposure to inappropriate content still ongoing. Violations can carry fines of up to 6% of global revenue. If Meta's 2025 global revenue is approximately $170 billion, the theoretical maximum fine exceeds $10 billion. The actual amount imposed is unlikely to reach that ceiling, but the leverage it provides as a negotiating card is enormous. I expect the EU to impose at least one child-protection-related fine exceeding one billion euros on at least one platform by mid-2027. There is precedent here: when GDPR was first enforced, initial fines were largely symbolic, but by 2023, a 1.2 billion euro fine against Meta demonstrated that enforcement intensity can escalate sharply.
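The fine-ceiling arithmetic above is a simple percentage calculation; a minimal sketch follows. The 6% cap comes from the DSA as cited in this section, while the $170 billion revenue figure is the article's rough estimate, not an official number.

```python
def dsa_max_fine(global_revenue_usd: float, cap_rate: float = 0.06) -> float:
    """Theoretical DSA fine ceiling: a fixed share of global annual revenue.

    The 6% default reflects the DSA's stated maximum; the revenue input is
    whatever estimate the caller supplies (here, a rough figure from the text).
    """
    return global_revenue_usd * cap_rate

# Rough estimate from the text: ~$170 billion in 2025 global revenue.
ceiling = dsa_max_fine(170e9)
print(f"${ceiling / 1e9:.1f} billion")  # → $10.2 billion
```

As the text notes, this is a theoretical ceiling: actual fines imposed under such regimes have historically landed well below the statutory maximum, which is why the figure matters mainly as negotiating leverage.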
Voluntary changes by platform companies will also accelerate in the medium term. Meta has been tightening restrictions on teen accounts since 2024, and as litigation pressure mounts, the pace can only increase. To be specific about my expectations: by 2027, both Instagram and YouTube will make 'well-being-oriented' modes — which shift recommendation algorithms away from engagement maximization — the default setting for users under 18. This is not goodwill; it is legal risk management. They need to be able to say in court, 'We have already made improvements.' TikTok, whose very survival in the U.S. market hangs in the balance, may actually demonstrate the most aggressive child protection changes. During this period, new social platforms that market 'child safety' as a core value proposition could emerge and begin siphoning market share from incumbents.
The long-term outlook — two to five years out — is where things get truly fascinating. I believe that by around 2030, the social media industry will operate under a fundamentally different business model than it does today. Referencing the tobacco industry's trajectory once more: after the Master Settlement Agreement, tobacco companies were forced to overhaul their marketing practices, product compositions, warning labels, and pricing structures. Social media is headed for a comparable structural transformation. The current revenue formula — 'engagement time multiplied by ad impressions' — will give way to 'safe engagement multiplied by premium subscriptions.' YouTube Premium and Meta's paid verification are already early experiments in subscription models, and litigation pressure will accelerate this transition. Over the long run, I expect the advertising share of social media revenue to decline from its current level of over 80% to below 60% by around 2030.
Consider the most optimistic scenario — the bull case. State-led class action lawsuits succeed, and a 'Social Media Master Settlement Agreement,' analogous to the tobacco deal, is reached around 2029 to 2030. The settlement totals between $50 billion and $100 billion, with conditions including mandatory algorithmic transparency, deactivation of recommendation algorithms for minors' accounts, and the establishment of an independent oversight body. Simultaneously, global regulatory harmonization takes shape, with 30 or more major countries adopting comparable child online safety standards. Under this scenario, youth social media usage declines by 40% from current levels by 2031, and youth mental health indicators begin showing meaningful improvement. I place the probability of this outcome at 25 to 30%.
The base case plays out like this: lawsuits drag on, settlements proceed at a glacial pace, and platforms hold the line with minimal self-regulation. A federal-level child protection law passes in the United States, but enforcement is lax. Regulations in Europe and Australia produce results, but regulatory gaps persist across Southeast Asia, Africa, and Latin America. By 2030, social media usage patterns among youth in developed countries show modest improvement, but across the global youth population as a whole, not much changes. Total settlement amounts settle in the $20 billion to $30 billion range, and the resolution is more monetary than structural. In terms of probability, this scenario is the most likely at around 50%.
The most pessimistic scenario — the bear case — is a legal war that grinds to a stalemate. Big Tech succeeds in overturning lower court rulings on appeal, Section 230 reform is torpedoed by lobbying, and regulatory fatigue drains the political momentum. Platforms apply cosmetic changes only, while the algorithmic core remains untouched. Under this scenario, the social media business model remains fundamentally unchanged by 2030, and the youth mental health crisis persists. I put this probability at 20 to 25%, and caution against underestimating Big Tech's lobbying power. In 2025, major tech companies' Washington lobbying expenditures surpassed $100 million for the first time in history, with Meta alone spending $26.29 million to become the single largest lobbying spender across every industry.
The ripple effects on adjacent industries deserve attention as well. This litigation wave will not end as a social media problem alone. Once legal liability for 'addictive design' is established, the logic extends to mobile gaming gacha systems, streaming service autoplay loops, and even the dark patterns of e-commerce — countdown timers, social proof badges, scarcity signals — every design choice that exploits users' psychological vulnerabilities acquires legal risk. This amounts to a paradigm shift for the entire UX design industry. Going forward, 'maximize user well-being' rather than 'maximize user engagement' could become the governing design principle. Optimistically, this is a positive development; realistically, it could also slow the pace of innovation. The insurance industry should pay attention too: 'algorithmic liability insurance' is likely to emerge as a new product category, and this will reshape the cost structures of tech companies.
Finally, the cascading effects on education cannot be overlooked. As social media regulation tightens, digital literacy education in schools will undergo a fundamental rethinking. Currently, most digital literacy programs focus on 'how to use technology safely,' but after regulation takes hold, the emphasis will shift to critical media education that teaches 'how algorithms manipulate you.' Finland has already integrated media literacy into its national curriculum; I expect that by 2028, more than half of OECD member nations will have embedded algorithmic literacy as a mandatory component of their educational frameworks. In the short term, this means higher education costs, but in the long run, it is an investment in raising a generation equipped with 'digital immunity.'

There is also a labor market dimension that few are discussing yet. As regulatory pressure forces platforms to redesign their recommendation engines, the demand for a new category of professionals — 'ethical algorithm auditors' — will surge. These are not traditional software engineers or data scientists but hybrid experts who combine technical knowledge of machine learning systems with backgrounds in psychology, child development, and regulatory compliance. Universities that move quickly to establish these interdisciplinary programs will produce graduates who command premium salaries, while companies that fail to hire them will face mounting legal exposure. By 2029, I expect every major tech company to have a dedicated 'algorithmic well-being' division with headcounts exceeding 200 employees each, a workforce that did not exist in any meaningful form before this verdict.
The venture capital landscape is already shifting in response. Investors who once poured money into startups promising 'maximum engagement metrics' are recalibrating their thesis. The next generation of social platforms will be funded on the premise of 'engagement quality over quantity,' and founders who can demonstrate measurably healthier user outcomes will have a fundraising advantage. Several prominent VC firms in Silicon Valley have quietly begun adding 'regulatory risk assessment' as a standard component of their due diligence process for any consumer social investment. The era of 'move fast and break things' is not just culturally over — it is becoming financially untenable.
The implications for the advertising industry itself are worth examining at length. If platforms are forced to reduce the intensity of engagement-maximizing algorithms, the total volume of ad impressions will decline. But this does not necessarily mean advertising revenue collapses. In fact, the opposite may occur. Advertisers have long complained about the 'brand safety' problem — their ads appearing next to toxic or controversial content that damages their reputation. A regulatory environment that forces cleaner, healthier feeds could actually increase the per-impression value of ads by providing a more brand-safe environment. Premium CPMs on well-regulated platforms could partially or fully offset the decline in total impression volume. The net effect on advertising revenue is therefore not the catastrophe that industry doomsayers predict, but a restructuring that rewards quality over volume.
Ultimately, every thread of this transformation points to the same conclusion: technology must exist to serve humanity, not the other way around. That is a principle so obvious it should never have needed restating — yet here we are, fighting to reclaim it. The $6 million verdict in a Los Angeles courtroom is not the end of anything. It is the beginning of a renegotiation between society and the platforms that have, for two decades, operated on the assumption that they owed nothing to the children whose attention they monetized. Whether that renegotiation produces genuine structural reform or merely a more sophisticated version of the status quo will depend on whether the political will, the legal momentum, and the parental anger that exist right now can be sustained long enough to outlast the lobbying budgets arrayed against them. History suggests that once the dam cracks, the water finds a way through. The dam has cracked.
One final thought that bears emphasis: this is not a story about technology failing us. It is a story about business incentives distorting what technology could be. Social media at its best — connecting isolated communities, amplifying marginalized voices, enabling creative expression — remains one of the most powerful tools humans have ever built. The task ahead is not to destroy that tool but to rewire the incentive structures so that platforms profit from enriching lives rather than from exploiting attention. If this verdict and the regulatory wave that follows can accomplish even a fraction of that rewiring, the children who come after this generation may inherit a digital world that was designed with their well-being in mind rather than their attention as a commodity to be strip-mined. That is a future worth fighting for.
Sources / References
- World Happiness Report 2026 — UN Sustainable Development Solutions Network (SDSN)
- Social media is harming adolescents at a scale large enough to cause changes at the population level — World Happiness Report 2026
- Tobacco Master Settlement Agreement — Public Health Law Center
- Jury finds Meta and Google negligent in social media harms trial — NPR
- Jury finds Meta, YouTube liable for social media addiction: What we know — Al Jazeera
- Social Media Addiction Lawsuits Surge After First Verdict — Reuters
- Social Media Addiction Lawsuit — MDL 3047 Update (March 2026) — Lawsuit Information Center
- Online Safety Amendment (Social Media Minimum Age) Act — Australia eSafety Commissioner
- Potential effects of the social media age ban in Australia for children younger than 16 years — The Lancet Digital Health
- Indonesia: Regulation Introduces Age Restrictions for Social Media Platforms — Library of Congress
- Brazil just banned infinite scroll and autoplay for kids online — ECA Digital (Law 15,211/2025) — Brightcast News
- Digital Services Act — Protection of Minors — European Commission
- Commission preliminarily finds TikTok and Meta in breach of DSA transparency obligations — European Commission
- S.1748 — Kids Online Safety Act (KOSA), 119th Congress — Congress.gov
- Social Media and Youth Mental Health: The U.S. Surgeon General's Advisory — U.S. Department of Health and Human Services