Pluro Labs is a nonpartisan, independent research and policy lab dedicated to safeguarding democracy and public welfare from Big Tech abuses.
Leading social media platforms - whose defective product design optimizes for 'engagement at all costs' - drive harms to democratic processes, child safety, public health, and consumers alike. Yet democratic societies, faced with opaque platforms and historically powerful tech firms, have been unable to demonstrate how this repeating cycle of harms reflects clear commercial liability. Establishing that liability is the critical step toward meaningful accountability and reform.
Bridging the fields of applied AI, digital investigations, and policy innovation, we empower advocates, litigators, policymakers, and regulators with groundbreaking digital evidence, novel research methods, and creative policy approaches to pierce the broad legal immunities enjoyed by firms operating the most damaging products to public welfare in the world today.
We focus on demonstrating that platforms don't merely host harmful content - they commercially exploit and incentivize it. This approach is embodied in our soon-to-be-released Platform Harm for Profit Framework (PHP), a free toolkit that advocates, researchers, and journalists can use to document, with publicly available data, how platforms profit from harm.
Our flagship U.S. Elections Defense Initiative reflects the power of this approach. First, we documented a social media platform's monetization of real-world election interference and harassment incidents taking place nationwide. Now, this research is enabling legal, policy, and field resilience actions to protect U.S. election safety moving forward.
Our Approach
We apply rigorous digital investigative techniques to generate evidence and analysis tailored for legal, regulatory, and policy actions — not just reports. Leveraging capabilities in applied AI and expertise in tech platform economics, we document both harms and the patterns of commercial gain that underpin them.
In collaboration with advocates and policy and legal experts, we apply our evidence and research to hold technology firms accountable. For example, by demonstrating how platforms don't just publish, but commercially exploit and reward, harms to election workers, we advance legal and regulatory actions that can win lasting protections for democracy and public welfare.
We share new research methods, tools, and insights, equipping advocates to expose and communicate the commercial incentives that underlie public harms. By sharing resources like our Platform Harm for Profit framework, we empower a field that can win in court and legislatures, rather than one rendered permanently reactive in the face of tech-driven harms.
Sofia is an open-source investigator and data analyst with experience across human rights, democracy protection, and digital investigations for legal accountability. She is skilled in mass social media discovery, deepfake identification, and advanced visual analysis techniques.
Will is a social entrepreneur, applied technology executive, and democracy advocate. He served as founding CEO of Groundswell, a pioneering nonprofit that makes clean energy accessible to disadvantaged communities in the US. He then led product strategy at an AI and emerging tech software firm, before developing tech and innovation teams at CARE and Human Rights First.
Will has written and spoken on AI, tech, and the public interest at FastCompany, Stanford University, and UC Berkeley, among others. He has been honored for his impact as an Ashoka Fellow, World Economic Forum Global Shaper, Forbes 30 under 30 Entrepreneur, White House Champion of Change, and Stanford d.School Fellow.
Mackenzie is an operations and program management specialist with a background in evidentiary research. She supports Pluro Labs' engagement with policy, legal, and advocacy stakeholders.
Mackenzie is an experienced open-source investigator in the human rights and democracy field. Over five years, she has built expertise in the digital verification and documentation of gross human rights abuses around the world. She has conducted investigations at Amnesty International USA and Amnesty International's research arm, the International Secretariat. Mackenzie is passionate about using technology and data to drive accountability and inform ethical AI governance.
Janine Graham is an investigative researcher specializing in open-source intelligence (OSINT) techniques to investigate subjects of public interest. Her work has covered areas ranging from war crimes and illicit supply chains to tracking persons of interest for organizations such as UC Berkeley's Human Rights Center, The Associated Press, and The Wall Street Journal. As a journalist, she previously worked for CNBC and CNN International.