Pluro Labs is a nonpartisan, independent research and policy lab dedicated to safeguarding democracy and public welfare from Big Tech abuses.
Leading social media platforms drive harms to child safety, public health, democratic processes, and consumers alike. These damages will worsen with generative AI. Yet democratic societies, faced with opaque tech products and historically powerful industry lobbies, have not been able to prove how this cycle of harm stems from commercial choices that pursue profit despite predictable harm. We believe proving that link is the critical step toward meaningful accountability and reform, and toward protecting kids, democracy, and a healthy economy.
Bridging the fields of applied AI, digital investigations, and policy innovation, we empower advocates, litigators, policymakers, and regulators with groundbreaking digital evidence, novel research methods, and creative policy approaches to pierce the legal immunities enjoyed by the firms whose products most damage public welfare today.
We focus on showing that platforms don't merely host damaging content: they commercially exploit, and even reward and incentivize, harms to public welfare.
Our flagship U.S. Elections Defense Initiative reflects the power of this approach. First, we documented a social media platform's monetization of content depicting real-world, lawbreaking election interference and harassment incidents across the US. This research is now driving legal, policy, and field resilience actions to protect US election safety.
We're now applying this model to expose how platform monetization drives the exploitation of children, through an innovative pilot partnership with Heat Initiative.
In response to requests from the field, we aim to release the Platform Harm for Profit Framework (PHP), an open-source toolkit that advocates, researchers, and policymakers can use to document how platforms commercially exploit and reward harm, all through publicly available data.
Our Approach
We apply rigorous digital investigative techniques to generate evidence and analysis tailored for legal, regulatory, and policy actions — not just reports. Leveraging capabilities in applied AI and expertise in tech platform economics, we document both harms and the patterns of commercial gain that underpin them.
In collaboration with advocates, policy experts, and legal experts, we apply our evidence and research to hold technology firms accountable. For example, by demonstrating that platforms don't just publish but commercially exploit and reward harms to election workers, we enable legal and policy actions that can win lasting protections for public welfare.
We share new research methods, tools, and insights, equipping advocates to expose and communicate the commercial incentives that underlie public harms. By sharing resources like our Platform Harm for Profit Framework, we empower a field that can win in courts and legislatures, rather than one left permanently reactive to tech-driven harms.
Sofia is an open-source investigator and data analyst with experience across human rights, democracy protection, and digital investigations for legal accountability. She is skilled in mass social media discovery, deepfake identification, and advanced visual analysis techniques.
Will is a social entrepreneur, applied technology executive, and democracy advocate. He served as founding CEO of Groundswell, a pioneering nonprofit that makes clean energy accessible to disadvantaged communities in the US. He then led product strategy at an AI and emerging tech software firm, before developing tech and innovation teams at CARE and Human Rights First.
Will has written and spoken on AI, tech, and the public interest at venues including Fast Company, Stanford University, and UC Berkeley. He has been honored for his impact as an Ashoka Fellow, World Economic Forum Global Shaper, Forbes 30 Under 30 entrepreneur, White House Champion of Change, and Stanford d.school Fellow.
Mackenzie is an operations and program management specialist with a background in evidentiary research. She supports Pluro Labs' engagement with policy, legal, and advocacy stakeholders.
Mackenzie is an experienced open-source investigator in the human rights and democracy field. Over five years, she has built expertise in the digital verification and documentation of gross human rights abuses around the world. She has conducted investigations at Amnesty International USA and Amnesty International's research arm, the International Secretariat. Mackenzie is passionate about using technology and data to drive accountability and inform ethical AI governance.
Janine Graham is an investigative researcher who applies open-source intelligence (OSINT) techniques to subjects of public interest. Her work has covered areas ranging from war crimes and illicit supply chains to tracking persons of interest, for organizations such as UC Berkeley's Human Rights Center, The Associated Press, and The Wall Street Journal. As a journalist, she previously worked for CNBC and CNN International.