Why It Matters
For decades, platform accountability debates have focused on content moderation and algorithmic amplification. But today's platforms don't just host content: they actively pay and promote the creators whose content drives engagement, even when the result is harm.
From ad-revenue sharing to subscription tools and bonus programs, platforms' monetization systems create predictable, systemic incentives for harmful content, including content that exploits children, undermines public health, or spreads fraud.
By documenting and interpreting these commercial incentive structures, the Framework enables:
- Litigators to overcome outdated Section 230 defenses and win major new cases;
- Policymakers to target the core economic systems and faulty product design driving harm, rather than circular speech debates;
- Funders, journalists, and researchers to understand and expose how profit-driven product features drive harm to public welfare.
What It Includes
- A plain-language explanation of how platform monetization systems work and why their reform is critical to stopping harm to kids, consumers, and industry alike.
- A high-level taxonomy of monetization features, such as advertising revenue sharing, subscription products, direct tipping, and affiliate programs.
- A clear reframing of legal and policy discussions to focus on product design built around monetizing "anything that wins attention" as the root cause of harm.
- Context from existing case law, investigations, and public examples that show monetization of harm in action.
Who It’s For
- Litigators and law enforcement pursuing justice for platform-enabled harms.
- Policy organizations and regulators confronting platform abuses and designing remedies.
- Journalists and civil society actors seeking stronger narratives on platform complicity.