About Us

Our Purpose

Pluro Labs is a nonpartisan, independent research and policy lab dedicated to safeguarding democracy and public welfare from Big Tech abuses.

Leading social media platforms, whose defective product design optimizes for 'engagement at all costs', drive harms to democratic processes, child safety, public health, and consumers alike. Yet democratic societies, faced with opaque platforms and historically powerful tech firms, have been unable to demonstrate how this repeating cycle of harms reflects clear commercial liability. Demonstrating that liability is the critical step toward meaningful accountability and reform.

Bridging the fields of applied AI, digital investigations, and policy innovation, we empower advocates, litigators, policymakers, and regulators with groundbreaking digital evidence, novel research methods, and creative policy approaches. Together, these tools help pierce the broad legal immunities enjoyed by firms operating the most damaging products to public welfare in the world today.

We focus on demonstrating how platforms don't merely host harmful content: they commercially exploit and incentivize it. This is embodied by our soon-to-be-released Platform Harm for Profit (PHP) Framework, a free toolkit with which advocates, researchers, and journalists can document how platforms profit from harm, using publicly available data.

Our flagship U.S. Elections Defense Initiative reflects the power of this approach. First, we documented a social media platform's monetization of real-world election interference and harassment incidents taking place nationwide. Now, this research is enabling legal, policy, and field resilience actions to protect U.S. election safety moving forward.

Our Approach

01
Research and Analysis

We apply rigorous digital investigative techniques to generate evidence and analysis tailored for legal, regulatory, and policy actions — not just reports. Leveraging capabilities in applied AI and expertise in tech platform economics, we document both harms and the patterns of commercial gain that underpin them.

02
Policy and Legal Innovation

In collaboration with advocates and policy and legal experts, we apply our evidence and research to hold technology firms accountable. For example, by demonstrating how platforms don't just publish, but commercially exploit and reward harms to election workers, we advance legal and regulatory actions that can win lasting protections for democracy and public welfare.

03
Field Building and Empowerment

We share new research methods, tools, and insights, equipping advocates to expose and communicate the commercial incentives that underlie public harms. By sharing resources like our Platform Harm for Profit framework, we empower a field that can win in court and legislatures, rather than one rendered permanently reactive in the face of tech-driven harms.

Staff and Alumni

Sofia Schnurer | Data Researcher + Analyst

Sofia is an open-source investigator and data analyst with experience across human rights, democracy protection, and digital investigations for legal accountability. She is skilled in mass social media discovery, identifying deepfakes, and advanced visual analysis techniques.

Will Byrne | Founder and Executive Director

Will is a social entrepreneur, applied technology executive, and democracy advocate. He served as founding CEO of Groundswell, a pioneering nonprofit that makes clean energy accessible to disadvantaged communities in the US. He then led product strategy at an AI and emerging tech software firm, before developing tech and innovation teams at CARE and Human Rights First.

Will has written and spoken on AI, tech, and the public interest at FastCompany, Stanford University, and UC Berkeley, among others. He has been honored for his impact as an Ashoka Fellow, World Economic Forum Global Shaper, Forbes 30 under 30 Entrepreneur, White House Champion of Change, and Stanford d.School Fellow.

Mackenzie Berwick | Partnerships + Program Lead

Mackenzie is an operations and program management specialist with a background in evidentiary research. She supports Pluro Labs' engagement with policy, legal, and advocacy stakeholders.

Mackenzie is an experienced open-source investigator in the human rights and democracy field. Over five years, she has built expertise in the digital verification and documentation of gross human rights abuses around the world. She has conducted investigations at Amnesty International USA and Amnesty International's research arm, the International Secretariat. Mackenzie is passionate about using technology and data to drive accountability and inform ethical AI governance.

Janine Graham | Research Lead

Janine Graham is an investigative researcher specializing in open-source intelligence (OSINT) techniques to investigate subjects of public interest. Her work has covered areas ranging from war crimes and illicit supply chains to tracking persons of interest for organizations such as UC Berkeley's Human Rights Center, The Associated Press and The Wall Street Journal. As a journalist, she previously worked for CNBC and CNN International.

Advisors

Alexa Koenig, PhD, JD
Co-Executive Director, UC Berkeley Human Rights Center; Founder, Digital Investigations Lab. Global leader in technology and human rights, and architect of the Berkeley Protocol on open-source investigation.
Dhruv Gulati
Co-Founder and CEO, Factmata. Misinformation technologist, product leader, and expert in fraud, digital information and media ecosystems, and social networks.
Hany Farid, PhD
Professor and Dean Alumnus, UC Berkeley School of Information. Technologist, researcher, trailblazer in image analysis and computer vision technology.
Rumman Chowdhury, PhD
Former Director of Machine Learning Ethics, Transparency, and Accountability at Twitter. Expert, pioneer, and advocate in AI and machine learning ethics.
Sue Hendrickson, JD
Director Emeritus, Harvard Berkman Klein Center for Internet and Society. Expert in technology and media policy and law.